A lawyer asks ChatGPT about a Federal Supreme Court ruling on tenancy law. The system delivers a detailed answer with a case number, date, and summary. Everything sounds plausible. The only problem: the ruling does not exist. The case number is fabricated. The date belongs to an entirely different case. If the lawyer fails to verify the answer, the fictitious ruling ends up in a legal brief.
This is not a theoretical warning. It is a documented pattern. In the United States, lawyers have been sanctioned for citing AI-generated, non-existent judgments. In Switzerland, comparable cases have not yet become public, but the conditions for such errors exist whenever general AI models are used for regulated professional work.
The Hallucination Problem
Large language models such as GPT-4, Claude, and Gemini are impressive technologies. They can summarise texts, translate languages, write code, and answer complex questions. But they have a fundamental characteristic that becomes a problem for regulated industries: they make things up.
This does not happen out of malice or carelessness. It is a technical property of the architecture. Language models generate text by predicting the most probable next word based on patterns in their training data. When the training data contains no information on a specific question, the model generates an answer anyway. It does not say “I don’t know.” It produces text that sounds statistically plausible but may be factually wrong.
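To make the mechanism concrete, here is a deliberately toy sketch of a generation step in Python. The vocabulary and scores are invented for illustration and bear no relation to any real model; the point it shows is that the sampling loop always returns some token, even when no option is clearly supported, because abstention is simply not part of the procedure.

```python
# Toy sketch of next-token generation (illustrative only, not a real model).
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over the vocabulary."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(vocab, logits):
    """Sample the next token -- some token is always produced."""
    return random.choices(vocab, weights=softmax(logits), k=1)[0]

# Invented scores for a question the model has no grounded knowledge of:
# the distribution is nearly flat, yet a token comes out regardless.
vocab = ["4A_123/2019", "5A_987/2020", "2C_456/2021"]
logits = [0.81, 0.79, 0.80]   # no clear winner, and no "I don't know" option
print(next_token(vocab, logits))
```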
In everyday use, this is often acceptable. If a marketing text is not stylistically perfect or a summary misses a nuance, the consequences are minor. In regulated industries, the consequences are potentially severe.
In law: A false citation can invalidate a legal brief, jeopardise a case, and trigger professional disciplinary consequences.
In financial advisory: An incorrect regulatory assessment can lead to compliance violations punishable by heavy fines.
In fiduciary services: An incorrect tax calculation can trigger back taxes, default interest, and penalty taxes.
In pharma: A misinterpretation of an authorisation requirement can jeopardise the entire market approval process.
In these contexts, the cost of a single hallucination can far outweigh any efficiency gain from the AI.
Three Deficiencies of General AI Models
Hallucination is the most prominent problem, but not the only one. For regulated industries, general AI models have three fundamental deficiencies.
Deficiency 1: No source attribution. When a general language model provides an answer, it cannot say where the information comes from. It cannot point to a specific statutory article, a specific court ruling, or a particular FINMA circular. For professionals who must substantiate their work with sources, this is a disqualifier. A legal opinion without source references is not a legal opinion. A tax calculation without reference to the applicable provision is not verifiable. A compliance report without documented foundations is worthless.
Deficiency 2: Outdated information. Language models are trained on a fixed dataset. After training, the model learns nothing new. The tax law amended in January 2026 is invisible to a model with a training data cutoff of mid-2025. The new FINMA circular, the current Federal Supreme Court practice, the revised ordinance: none of it exists for the model. In an industry where the timeliness of the legal basis is decisive, this deficiency makes general models unreliable.
Deficiency 3: No auditability. General AI models are black boxes. They cannot explain how they arrived at an answer. They cannot disclose their decision-making process. They cannot document a chain of evidence from input to output. For regulated industries, where auditability and traceability are core requirements, this is unacceptable. The EU AI Act will make this requirement binding for high-risk AI systems from August 2026.
What Verified AI Does Differently
Verified AI is not a marketing term. It is an architectural principle. It describes AI systems built so that every output can be traced back to verified sources.
Verified data foundation. Instead of relying on unstructured training data from the internet, verified AI works with a curated knowledge base of authoritative sources. For Swiss law, this means: federal legislation from Fedlex, cantonal laws from official sources, court decisions from the Federal Supreme Court and cantonal courts, regulatory publications from FINMA, SECO, and other authorities. Every source is documented, every update is traceable.
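As a sketch of what "documented and traceable" can mean at the data level, a curated source might be stored as a record like the following. The SourceRecord type and its fields are assumptions chosen for illustration, not Enclava's actual schema; the dates in the example are invented.

```python
# Hypothetical record for one curated source; field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SourceRecord:
    source_id: str      # stable identifier, e.g. an SR number from Fedlex
    title: str          # official title of the act or decision
    publisher: str      # issuing authority: Fedlex, a canton, FINMA, a court
    url: str            # link to the official publication
    version_date: date  # which consolidated version this record reflects
    retrieved_at: date  # when the record was last synchronised

co = SourceRecord(
    source_id="SR 220",
    title="Code of Obligations",
    publisher="Fedlex",
    url="https://www.fedlex.admin.ch/eli/cc/27/317_321_377/en",
    version_date=date(2025, 1, 1),
    retrieved_at=date(2025, 6, 30),
)
```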
Source attribution on every output. Every answer from the system includes references to the specific sources on which it is based. The user can access the original source and verify the accuracy of the answer. No reliance on plausibility. Verification against the source.
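Structurally, this amounts to treating citations as part of the answer rather than an afterthought. A minimal sketch, with invented type and field names:

```python
# Sketch: an answer object that carries its citations; names are invented.
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # e.g. the statute or ruling identifier
    locator: str     # the precise article, paragraph, or consideration
    url: str         # deep link to the cited passage

@dataclass
class VerifiedAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def is_grounded(self) -> bool:
        """An answer with no citation should be rejected, not displayed."""
        return bool(self.citations)
```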
Continuous updates. The knowledge base is continuously updated. New laws, new court decisions, new regulatory provisions are captured and integrated into the database. The system always works with the current state of the legal foundations.
Structured data. Laws are not simply text. They have a hierarchical structure: books, titles, chapters, articles, paragraphs. Court decisions have metadata: court, date, case number, legal area, applied provisions. Verified AI preserves this structure and enables precise queries at the right level of granularity.
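Here is one way such a hierarchy might be modelled; the class layout is an assumption chosen for illustration, not a description of any particular system.

```python
# Hypothetical model of statutory hierarchy; structure and names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Paragraph:
    number: int
    text: str

@dataclass
class Article:
    number: str                           # e.g. "Art. 257d"
    heading: str
    paragraphs: list[Paragraph] = field(default_factory=list)

@dataclass
class Statute:
    sr_number: str                        # e.g. "SR 220"
    abbreviation: str                     # e.g. "CO"
    articles: dict[str, Article] = field(default_factory=dict)

    def cite(self, article_no: str, para_no: int) -> str:
        """Produce a precise citation at paragraph granularity."""
        art = self.articles[article_no]
        para = art.paragraphs[para_no - 1]
        return f"{art.number} para. {para.number} {self.abbreviation}"

co = Statute("SR 220", "CO")
co.articles["Art. 257d"] = Article(
    "Art. 257d", "Default of the tenant",
    [Paragraph(1, "…"), Paragraph(2, "…")],
)
print(co.cite("Art. 257d", 1))   # -> "Art. 257d para. 1 CO"
```

A flat text blob cannot answer the question "which paragraph?"; a structure like this can.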
Auditability. Every query, every retrieval, every generated output is logged. The chain of evidence from input to output is documented. For audits, for internal quality control, for accountability towards clients and authorities.
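A minimal sketch of one such log record, assuming a JSON-style audit trail; the field names and identifiers are invented:

```python
# Sketch of one audit-trail entry linking query, retrieval, and output.
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(query: str, retrieved_ids: list[str], output: str) -> dict:
    """One tamper-evident record for the input -> retrieval -> output chain."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "retrieved_sources": retrieved_ids,  # which documents grounded the answer
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

entry = audit_entry(
    query="Deadlines for contesting a tax assessment in Zurich",
    retrieved_ids=["StG-ZH-§140", "BGer-ruling-id"],   # invented identifiers
    output="The objection must be filed within 30 days …",
)
print(json.dumps(entry, indent=2, ensure_ascii=False))
```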
The Difference in Practice
Let us compare the same use case with general AI and with verified AI.
The question: “What are the deadlines for contesting a tax assessment in the Canton of Zurich?”
General AI: Generates an answer that may be correct. May be outdated. May refer to the wrong canton. No source references. The professional must manually verify the entire answer, which often takes longer than researching from scratch.
Verified AI: Retrieves the relevant provisions from the Tax Act of the Canton of Zurich, references the specific paragraphs, shows the deadlines with exact source citations, links to the current statutory text, and lists relevant Federal Supreme Court rulings on deadline calculation. The professional can use the answer immediately and check the sources in the original if needed.
The difference is not incremental. It is fundamental. In the first case, the AI is a risk factor. In the second case, it is a productivity tool.
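The behavioural difference can be reduced to one rule: retrieve first, and abstain when nothing verified is found. A runnable toy sketch, with an invented corpus and a trivial keyword search standing in for real retrieval and generation:

```python
# Toy retrieval-first pipeline; corpus, search, and drafting are stand-ins.
import re
from dataclasses import dataclass

@dataclass
class Provision:
    citation: str
    text: str

# Invented mini-corpus; a real system would hold the curated knowledge base.
CORPUS = {
    "ZH": [
        Provision("§ 140 StG ZH (illustrative)",
                  "An objection against the assessment must be filed within 30 days."),
    ],
}

def tokens(s: str) -> set[str]:
    return set(re.findall(r"\w+", s.lower()))

def search_statutes(question: str, canton: str) -> list[Provision]:
    """Toy retrieval: keyword overlap between question and provision text."""
    q = tokens(question)
    return [p for p in CORPUS.get(canton, []) if q & tokens(p.text)]

def answer_with_sources(question: str, canton: str):
    provisions = search_statutes(question, canton)
    if not provisions:
        # Abstain instead of guessing -- the key behavioural difference.
        return "No verified source found for this question.", []
    answer = " ".join(p.text for p in provisions)  # stand-in for constrained generation
    return answer, [p.citation for p in provisions]

print(answer_with_sources("What is the deadline for an objection?", "ZH"))
print(answer_with_sources("What is the deadline for an objection?", "BE"))
```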
Regulatory Requirements Reinforce the Trend
The regulatory trajectory confirms that verified AI is not a luxury but a necessity.
EU AI Act. From August 2026, high-risk AI systems must be transparent, auditable, and documented. General language models operating as black boxes structurally fail to meet these requirements. Companies using AI for regulated applications will need to transition to systems that satisfy the requirements for auditability and transparency.
FADP. The Swiss Federal Act on Data Protection requires transparency in automated individual decisions and gives data subjects the right to present their point of view. When an AI system provides the basis for a decision, it must be possible to trace which data the recommendation rests on.
Industry regulation. FINMA, Swissmedic, and other industry regulators are increasingly setting requirements for the use of AI in their supervisory domains. Auditability, traceability, and documented data foundations are becoming standard requirements.
The Strategic Decision
For companies in regulated industries, the question is not whether to adopt AI. The efficiency gains are too significant to ignore. The question is what kind of AI to adopt.
General AI models offer broad functionality at low cost, but they create compliance risks, require manual verification, and produce no auditable results. Verified AI requires specialised infrastructure but delivers the source attribution, timeliness, and auditability that regulated industries need.
Companies that invest in verified AI today are building an advantage. They can use AI productively without incurring compliance risks. They can deliver traceable results to their clients. They are prepared for tomorrow’s regulatory requirements.
Enclava is the Swiss platform for verified AI in regulated industries. Source-attributed knowledge bases, hosted in Switzerland, with over 27,000 laws and 1.1 million court decisions. Every output is traceable. Every source is verified. Every process is auditable.
Learn more at enclava.ch or contact us at [email protected].