Trust is the scarcest resource in regulated industries. It takes years to build and seconds to destroy. Lawyers, bankers, compliance officers, and medical professionals have spent their careers developing judgment, maintaining confidentiality, and protecting their clients’ interests. Asking them to trust an AI system is asking for something significant.
Most AI companies approach this problem with marketing. “Enterprise-grade security.” “SOC 2 certified.” “Trusted by leading firms.” These are words on a website. They are not trust.
Trust in regulated AI is built through architecture. Through decisions made at the infrastructure level that make trust verifiable rather than claimed. This article explains the specific decisions we have made at Mont Virtua and why.
Principle 1: Every Output Must Have a Source
This is the foundational principle. No exceptions.
When Enclava says “Article 266l OR requires that notice of lease termination be given using the form approved by the canton,” the user sees a direct link to Article 266l OR in Fedlex, the current version, the amendment history, and the relevant cantonal implementation. They can verify the statement in thirty seconds.
When the system says “The Federal Supreme Court addressed this in BGE 142 III 91,” the user can click through to the actual decision, read the relevant passage, and confirm the citation.
This is not a feature we added after building the system. It is the architecture. Enclava uses Retrieval-Augmented Generation (RAG), which means every response is generated from specific retrieved documents. The citation is not an afterthought. It is the mechanism by which the response was produced.
Why this matters: in regulated work, an uncited claim has no value. A lawyer cannot tell a client “AI says so.” A compliance officer cannot report to FINMA “our system indicated.” Every professional assertion must trace back to an authoritative source. Our system is designed so that this trace is always available.
What we deliberately avoid: generating responses from the model’s parametric knowledge. If the information is not in our verified database, the system says it does not have relevant information rather than generating a plausible-sounding answer from training data. We prefer a gap in coverage to a fabricated response.
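The retrieve-then-answer flow described above can be sketched in a few lines. This is an illustrative simplification, not Enclava's actual code: the function names are hypothetical, and real retrieval uses embeddings and ranking rather than term overlap. The key behavior it demonstrates is the refusal path: with no retrieved sources, there is no answer.

```python
# Minimal sketch of retrieval-grounded answering with explicit refusal.
# Names and the scoring heuristic are illustrative, not production code.

def retrieve(query: str, corpus: dict[str, str], min_overlap: int = 2) -> list[str]:
    """Return IDs of documents sharing enough terms with the query."""
    terms = set(query.lower().split())
    hits = []
    for doc_id, text in corpus.items():
        if len(terms & set(text.lower().split())) >= min_overlap:
            hits.append(doc_id)
    return hits

def answer(query: str, corpus: dict[str, str]) -> dict:
    """Generate an answer only from retrieved documents; refuse otherwise."""
    sources = retrieve(query, corpus)
    if not sources:
        # Prefer a coverage gap to a fabricated response.
        return {"answer": None, "sources": [], "note": "no relevant information"}
    # In a real pipeline, the retrieved text is passed to the model as context;
    # the citations are the retrieval results themselves, not a post-hoc lookup.
    return {"answer": f"Synthesized from {len(sources)} source(s)", "sources": sources}
```

Because citations fall out of retrieval itself, every answer carries its sources by construction rather than by decoration.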
Principle 2: The Data Must Come from Authoritative Sources
Our legal database is built exclusively from official government sources:
- Federal legislation from Fedlex (fedlex.data.admin.ch)
- Cantonal legislation from official cantonal law databases
- Federal Supreme Court decisions from the BGer portal
- Federal Administrative Court decisions from the BVGer portal
- FINMA circulars and guidance from FINMA’s official publications
- SHAB entries from the Swiss Official Gazette of Commerce
- Curia Vista parliamentary data from the Federal Assembly
We do not scrape legal blogs. We do not ingest legal commentary (unless explicitly licensed). We do not use crowd-sourced legal information. We do not rely on the model’s training data for legal content.
Every data source has a documented provenance chain: where the data comes from, how it is ingested, how it is parsed, and how it is verified against the source. This documentation is available to clients during due diligence.
Why this matters: the quality of an AI system’s output is bounded by the quality of its input data. A system trained on internet-scraped legal content will contain errors, outdated provisions, and unofficial interpretations. A system built on official government sources has the same authority as the sources themselves.
The verification step: after ingestion and parsing, we run hash comparisons against the source to confirm that no content has been altered during processing. We track version numbers and effective dates to ensure users always see the current version of a provision. When a provision is amended, the system reflects the change and maintains access to the previous version with appropriate dating.
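A hash comparison of this kind is straightforward to sketch. The function names are illustrative assumptions about the pipeline, not the production implementation; the idea is simply that the canonical text of each provision is digested after parsing and compared against a digest of the fetched source.

```python
# Sketch of the post-ingestion verification step: hash the parsed text of a
# provision and compare it to a hash of the source text it was parsed from.
import hashlib

def content_hash(text: str) -> str:
    """Stable digest of a provision's canonical text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def verify_provision(source_text: str, parsed_text: str) -> bool:
    """Confirm that parsing did not alter the content."""
    return content_hash(source_text) == content_hash(parsed_text)
```

Any silent alteration during processing, even a single character, produces a different digest and fails verification.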
Principle 3: Swiss Infrastructure, Swiss Control
Enclava runs on Swiss infrastructure. Not “Swiss data center operated by a US company.” Swiss infrastructure operated by a Swiss company.
The specifics:
- Corporate entity: Mont Virtua GmbH, incorporated in Zug, Switzerland. No US parent company, subsidiary, or controlling shareholder.
- Compute infrastructure: Swiss GPU cloud (Exoscale, with data centers in Geneva and Zurich). No AWS, Azure, or GCP dependencies.
- AI models: Open-source models (Llama, Mistral, Qwen) deployed on Swiss infrastructure. No API calls to US model providers. No data leaves Switzerland for inference.
- Database: PostgreSQL with pgvector, hosted in Switzerland. All client data, query logs, and system data reside on Swiss servers.
This configuration provides genuine data sovereignty. No US CLOUD Act exposure. No foreign government access. No data transfers outside Switzerland.
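For readers curious what retrieval against a pgvector-backed store looks like, here is a sketch of the SQL such a setup typically issues. The table and column names are hypothetical, not Enclava's schema; `<=>` is pgvector's cosine-distance operator.

```python
# Build a parameterized nearest-neighbour query for a pgvector table.
# Schema names are illustrative; the pattern is standard pgvector usage.

def build_retrieval_query(table: str = "provisions", top_k: int = 5) -> str:
    """SQL ordering rows by cosine distance to a query embedding."""
    return (
        f"SELECT id, source_url, text "
        f"FROM {table} "
        f"ORDER BY embedding <=> %(query_embedding)s "
        f"LIMIT {top_k}"
    )
```

The query would be executed with the embedding bound as a parameter, keeping all data and computation on the same Swiss-hosted database server.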
We publish our infrastructure architecture because transparency builds trust more effectively than assertions. Clients and their compliance teams can verify our hosting configuration, our corporate structure, and our model deployment independently.
Principle 4: Open-Source Models, Not Black Boxes
We use open-source AI models exclusively. This is a deliberate choice with several implications.
Auditability. When a regulator, client, or auditor asks “how does your AI make decisions?”, we can show them the model architecture, the model weights, and the inference pipeline. With proprietary models (GPT-4, Claude, Gemini), the answer is “we do not know, and neither does the vendor.” For high-risk AI applications under the EU AI Act, that answer can determine compliance or non-compliance.
No vendor lock-in. If a particular open-source model is discontinued, we switch to another. Our value is in the data layer, the retrieval infrastructure, and the delivery platform, not in any specific model. This protects us and our clients from model-provider risk.
No data sharing with model providers. When you use a proprietary model through an API, your queries go to the model provider’s servers. Even with enterprise agreements that prohibit training on your data, the provider processes your queries on their infrastructure. With locally deployed open-source models, no data leaves our infrastructure. The model runs on our servers, processes queries on our servers, and returns results from our servers.
Cost predictability. Proprietary model APIs charge per token. As usage scales, costs scale linearly. Open-source models deployed on owned or leased infrastructure have fixed costs regardless of usage. This enables us to offer predictable pricing to clients and to run computationally intensive processes (like continuous re-indexing and enrichment) without worrying about per-query costs.
Principle 5: Complete Audit Trails
Every interaction with Enclava is logged:
- What query was submitted
- What documents were retrieved
- What ranking scores they received
- What context was provided to the model
- What response was generated
- What citations were included
- When the interaction occurred
- Which user account was involved
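The fields listed above can be captured in a single reconstructable record. The field names below are illustrative assumptions, not the production log format; the point is that one record ties the whole chain together, from query to citations.

```python
# Sketch of one audit-trail record covering the fields listed above.
from datetime import datetime, timezone

def audit_record(user_id: str, query: str, retrieved: list[dict],
                 context: str, response: str, citations: list[str]) -> dict:
    """Assemble a single record reconstructing query -> response."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "query": query,
        "retrieved": retrieved,   # document IDs with their ranking scores
        "context": context,       # text actually provided to the model
        "response": response,
        "citations": citations,
    }
```

Stored append-only, such records let an auditor replay any interaction without access to the live system.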
This audit trail serves multiple purposes:
Regulatory compliance. When FINMA, a cantonal data protection authority, or an EU AI Act supervisor asks how a particular output was produced, the institution can reconstruct the entire chain from query to response.
Professional accountability. When a lawyer cites information from Enclava in a client memo, and the client or opposing counsel challenges the citation, the lawyer can demonstrate exactly what the system retrieved, what sources were used, and what the system’s output was. The professional’s judgment in relying on the output is documented.
System improvement. Audit trails enable us to identify retrieval failures, ranking errors, and user patterns that indicate the system is not meeting expectations. Continuous improvement requires continuous measurement.
Client confidence. Knowing that every interaction is logged and auditable gives compliance teams the assurance they need to approve the tool for regulated work. The common objection that “we cannot verify what the AI did” is addressed at the architecture level.
Principle 6: Human Oversight by Design
Enclava is designed as a tool for professionals, not a replacement for them. This is reflected in the product architecture:
- Outputs are presented with sources for verification, not as authoritative statements
- The interface encourages review and cross-checking, not blind acceptance
- Administrative controls allow firms to restrict which types of queries can be processed
- No automated actions: the system retrieves, synthesizes, and presents. The professional decides
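The administrative controls above reduce to a policy check before any query is processed. This is a deliberately minimal sketch under assumed names; real controls would be per-role and per-matter, but the shape is the same.

```python
# Hypothetical firm-level policy: only enabled query categories are processed.

ALLOWED_CATEGORIES = {"legal_research", "regulatory_lookup"}

def is_permitted(query_category: str, allowed: set[str] = ALLOWED_CATEGORIES) -> bool:
    """Return True only for query categories the firm has enabled."""
    return query_category in allowed
```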
This is not a limitation. It is a feature. In regulated industries, the human-in-the-loop is not just best practice. It is a legal requirement under the EU AI Act for high-risk systems and an expectation under FINMA’s supervisory framework.
We believe the most effective AI systems are those that make professionals better at their jobs, not those that try to do the job for them. The judgment, the client relationship, the ethical obligations, and the strategic thinking remain with the professional. The research, retrieval, and synthesis become faster and more comprehensive.
Why We Publish This
Most AI companies protect their architecture as proprietary information. We publish ours because in regulated industries, opacity is the enemy of trust.
If our clients and their compliance teams cannot verify how our system works, they cannot trust it. And they should not trust it. Trust based on a sales pitch is not trust. Trust based on verifiable architecture is.
This is the approach we have taken at Mont Virtua. Verifiable sources, sovereign infrastructure, open-source models, complete audit trails, and human oversight by design. Not because these are easy choices, but because they are the right ones for the markets we serve.
To see this approach in practice, visit enclava.ch.