Trust starts with keeping engineering data under your control.
Metraly is being built as a self-hosted Engineering Intelligence platform for teams that need visibility into repository, CI/CD, project, and team signals without making another SaaS the default trust boundary for that data.
Trust principles
Self-hosted first
Design principle: Metraly is designed around customer-controlled deployment, so engineering-system data can stay inside your environment instead of being routed through another SaaS by default.
Synthetic demo by default
Live website behavior: The public demo uses synthetic engineering data only: no login, no credentials, and no real company data.
No hidden telemetry policy
Policy defined: Outbound product telemetry, if introduced, must be opt-in (default-off), admin-visible, documented, and forbidden from collecting sensitive engineering data.
AI claims are benchmark-gated
Benchmark pending: AI is a product direction, not a blanket claim. Public claims about AI quality and safety require benchmark evidence covering correctness, grounding, usefulness, privacy leakage, and prompt-injection resistance.
Plugins require review gates
Policy defined: A plugin marketplace is a trust product, not just a distribution page. Marketplace claims require manifests, permissions, signing, review, update controls, and revocation.
Claims stay status-labeled
Governance: Metraly uses public status labels so that designed, planned, in-progress, and implemented capabilities are not mixed together.
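The status-label discipline above can be sketched as a tiny check that forces every public capability claim to carry its lifecycle label. The names here (ClaimStatus, render_claim) are illustrative assumptions, not Metraly's actual implementation.

```python
from enum import Enum


class ClaimStatus(Enum):
    """Illustrative lifecycle labels for a public capability claim."""
    DESIGNED = "designed"
    PLANNED = "planned"
    IN_PROGRESS = "in-progress"
    IMPLEMENTED = "implemented"


def render_claim(capability: str, status: ClaimStatus) -> str:
    """Pair a capability with its status so a label can never be dropped."""
    return f"[{status.value}] {capability}"
```

For example, `render_claim("Plugin marketplace", ClaimStatus.DESIGNED)` yields `"[designed] Plugin marketplace"`, keeping designed and implemented claims visibly distinct.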
What stays under your control
Engineering data can expose your architecture, customers, roadmap, incidents, and security work. Metraly is designed so that sensitive engineering signals can be analyzed inside customer-controlled environments rather than sent to a SaaS analytics backend by default.
- Repository and pull request metadata
- Commit and review patterns
- Issue and project signals
- Build and deployment events
- Incident and recovery data
- Team and workflow metrics
AI should explain engineering data without becoming a data leak.
Metraly is designed around privacy-first AI patterns such as local models, bring-your-own (BYO) providers, and controlled data exposure. AI features remain status-labeled and benchmark-gated until implementation and evaluation results exist.
- Designed Dual-LLM architecture
- Synthetic examples first
- Prompt-injection tests planned in the AI benchmark
- Sensitive-data leakage checks required before public AI safety claims
- Local, BYO and external model modes treated as different trust models
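The last point, that local, BYO, and external model modes are different trust models, can be made concrete by describing each mode as a data-exposure boundary. This is a sketch under assumed names (ModelMode, requires_redaction); it illustrates the boundary distinction, not Metraly's code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelMode:
    """An AI model mode described as a trust boundary, not a quality claim."""
    name: str
    data_leaves_environment: bool
    provider_controlled_by_customer: bool


# Illustrative boundaries for the three modes named in the text.
LOCAL = ModelMode("local", data_leaves_environment=False,
                  provider_controlled_by_customer=True)
BYO = ModelMode("byo", data_leaves_environment=True,
                provider_controlled_by_customer=True)
EXTERNAL = ModelMode("external", data_leaves_environment=True,
                     provider_controlled_by_customer=False)


def requires_redaction(mode: ModelMode) -> bool:
    """Any mode where data leaves the environment needs controlled exposure."""
    return mode.data_leaves_environment
```

Under this framing, only the local mode keeps engineering signals entirely inside the customer environment; BYO and external modes both send data out and therefore need redaction or exposure controls.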
Plugin trust needs more than a marketplace page.
Metraly's plugin architecture and plugin review policy exist today as designs for future marketplace trust. A public plugin marketplace should not be claimed until manifest validation, permission review, package signing, update controls, revocation, and install flows are implemented.
- Plugin manifest requirements defined
- Permission model defined
- Signing and package integrity policy defined
- Revocation policy defined
- Marketplace launch gates documented
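The manifest and permission gates above can be sketched as a review check that rejects a plugin unless required fields are present and every requested permission is in a reviewed vocabulary. Field and permission names here are hypothetical assumptions, not Metraly's actual schema.

```python
# Hypothetical review gate; REQUIRED_FIELDS and ALLOWED_PERMISSIONS are assumed.
REQUIRED_FIELDS = {"name", "version", "permissions", "signature"}
ALLOWED_PERMISSIONS = {"read:metrics", "read:projects"}


def review_manifest(manifest: dict) -> list[str]:
    """Return review-gate violations; an empty list means the gate passes."""
    problems = [f"missing field: {field}"
                for field in sorted(REQUIRED_FIELDS - manifest.keys())]
    for perm in manifest.get("permissions", []):
        if perm not in ALLOWED_PERMISSIONS:
            problems.append(f"unreviewed permission: {perm}")
    return problems
```

A real gate would also verify the signature cryptographically and check revocation status; this sketch only shows why the manifest and permission model must exist before a marketplace can be claimed.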
No hidden telemetry is a product requirement, not a slogan.
Metraly documentation defines a no-hidden-telemetry policy: outbound product telemetry, if introduced, must be opt-in (default-off), admin-visible, previewable, and limited to minimal aggregated diagnostics.
Product telemetry must not collect:
- source code, repository names or organization names;
- commit messages, PR or issue titles;
- personal data, secrets, tokens or credentials;
- raw LLM prompts or raw customer logs.
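The default-off rule and the must-not-collect list above can be expressed as a single gate: nothing is emitted without opt-in, and even opted-in events are stripped of denylisted fields. The function and key names are illustrative assumptions, not Metraly's telemetry API.

```python
# Assumed denylist keys mirroring the must-not-collect list in the policy.
DENYLIST_KEYS = {"source_code", "repo_name", "org_name", "commit_message",
                 "pr_title", "issue_title", "secret", "token", "credential",
                 "prompt", "raw_log"}


def emit_telemetry(event: dict, *, opted_in: bool = False):
    """Default-off gate: return None unless opted in, then strip denylisted keys."""
    if not opted_in:  # no opt-in means no outbound data at all
        return None
    return {key: value for key, value in event.items()
            if key not in DENYLIST_KEYS}
```

The point of the sketch is ordering: the opt-in check comes before any field filtering, so the default behavior is silence rather than "send, but redacted".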
Compliance wording
Metraly is designed for privacy-conscious and regulated teams, and the project already has early trust artefacts: a threat model, a telemetry policy, a plugin review policy, and a claim policy.