Why AI Works Differently in Regulated Industries

Frontier AI models refuse legal, medical, and financial advice. Why regulated industries need specialised, source-verified systems.


In October 2025, OpenAI updated its terms of use: since then, ChatGPT may no longer give personalised legal, medical, or financial recommendations. Anthropic imposes similar restrictions, as does Google Gemini.

This is not an oversight. It is a deliberate decision by the world’s largest AI companies. And it has far-reaching consequences for anyone wanting to deploy AI in a regulated industry.

What Frontier Models Will Not Do

Large language models such as GPT-4, Claude, and Gemini are impressively versatile: they write code, summarise texts, translate, and analyse data. But they share a structural limit: they refuse to give domain-specific advice in regulated areas.

Ask ChatGPT: “Was my employment contract validly terminated under Art. 336c CO?” The answer: “I am not a lawyer and cannot provide legal advice. Please consult a legal professional.”

Ask Claude: “Which FINMA circulars affect my outsourcing arrangement?” The answer will remain general and refer you to consult qualified professionals.

This is rational. For the providers of these models, the liability risk is too great. One piece of incorrect legal advice leading to harm could trigger billion-dollar lawsuits. So they build in safeguards.

The Partnership Strategy Model

What do these companies do instead? They enter partnerships with domain experts.

OpenAI has announced partnerships with Thomson Reuters (legal information), the Financial Times (news content), and various medical data providers. Anthropic works with specialised providers supplying verified data sources.

The model is clear: the frontier AI provides language processing. The domain partner provides the verified data. Together, they create a system that is both linguistically competent and substantively correct.

But: who is the domain partner for Swiss law? For Swiss tax law? For FINMA regulation?

Why General AI Fails in Regulated Industries

It is not just about terms of use. There are structural reasons why a general language model is unsuitable for regulated professions.

No source verification. A language model generates text based on probabilities. It does not “know” whether a cited article actually exists; it “remembers” training text and produces plausible-sounding answers. Plausible is not sufficient in legal advice. Either Art. 336c CO says something specific, or it does not. There is no “probably.”

Training data lag. GPT-4 was trained with data up to a certain cutoff date. FINMA circulars published after that date do not exist for the model. Cantonal legislative amendments that came into force last week are not captured. In a world where regulation changes weekly, a model with months of delay is unusable for current questions.

No cantonal differentiation. A model trained on global data may know the general concept of “intercantonal double taxation.” But it does not know the specific practice of Canton Schwyz for holding companies compared to Canton Zug. This differentiation does not exist in the training data at sufficient depth.

Hallucinations. Language models produce content that is grammatically correct and contextually plausible but factually wrong. In programming, this is annoying. In legal advice, it is dangerous. A fabricated Federal Supreme Court judgment that a lawyer incorporates into a brief harms the client and the lawyer’s reputation.

What “Source-Verified” Means

The counter-model to general AI is a system based exclusively on verified, official sources. Not “the model believes that…” but “Art. 336c para. 1 lit. b CO reads: …” with a direct link to federal law on Fedlex.

A source-verified system has three properties:

Every answer references a verifiable source. No statutory text without a Fedlex link. No court decision without case number and decision date. No FINMA statement without document reference. The user can verify every statement in the original source.

The data comes from official government sources. Not from Wikipedia. Not from blog posts. Not from third-party summaries. Directly from Fedlex, the cantonal statute collections, the court APIs, the FINMA website. Primary sources, not secondary literature.

The data is continuously updated. Not annually. Not monthly. In the ideal configuration: daily. When Fedlex amends a statute, the amendment appears in the system the next day. When the Federal Supreme Court publishes a decision, it is captured within 24 hours.
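The three properties above amount to a data contract: a statement without a citation to an official source is simply not a valid answer. A minimal sketch in Python of what such a contract could look like; the class names, fields, domain list, and URL are illustrative assumptions, not any actual system's schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative allowlist of official Swiss sources (not exhaustive).
OFFICIAL_DOMAINS = (
    "https://www.fedlex.admin.ch/",
    "https://www.bger.ch/",
    "https://www.finma.ch/",
)

@dataclass(frozen=True)
class SourceCitation:
    """One verifiable reference backing a statement (hypothetical schema)."""
    document: str    # e.g. "Art. 336c para. 1 lit. b CO"
    source_url: str  # direct link into the official source, e.g. Fedlex
    retrieved: date  # when the source text was last synced

@dataclass
class VerifiedAnswer:
    """Pairs every statement with its citation; unsourced text is rejected."""
    statements: list[tuple[str, SourceCitation]] = field(default_factory=list)

    def add(self, text: str, citation: SourceCitation) -> None:
        self.statements.append((text, citation))

    def is_fully_sourced(self) -> bool:
        # Valid only if every citation points at an official primary source.
        return all(c.source_url.startswith(OFFICIAL_DOMAINS)
                   for _, c in self.statements)
```

The point of the sketch is the invariant, not the field names: an answer object that cannot pass `is_fully_sourced()` never reaches the user.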

The Role of AI in a Source-Verified System

If the data is verified, what does the AI do? Three things:

Search beyond keywords. Semantic search finds results that are substantively relevant, even if they do not contain the entered keywords. “Dismissal protection during illness” finds Art. 336c CO, even if the user does not know the article number. The AI understands the question; the source provides the answer.
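Mechanically, semantic search ranks documents by vector similarity rather than keyword overlap. A toy sketch of the ranking step, where the hand-assigned vectors stand in for a real multilingual embedding model (everything here, including the corpus entries, is illustrative):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "embeddings": in a real system these come from an embedding model.
# Note that neither document contains the query's keywords verbatim.
corpus = {
    "Art. 336c CO (termination at an inopportune time)": [0.9, 0.8, 0.1],
    "Art. 957 CO (duty to keep accounts)":               [0.1, 0.2, 0.9],
}
query_embedding = [0.85, 0.75, 0.2]  # "dismissal protection during illness"

best = max(corpus, key=lambda doc: cosine(query_embedding, corpus[doc]))
print(best)  # ranks the substantively relevant article first
```

The query never mentions "336c" or "inopportune time"; the ranking still surfaces the right article because proximity is measured in meaning-space, not in shared keywords.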

Cross-linking across legal areas. A FINMA circular references the Banking Act. The Banking Act references the Banking Ordinance. A Federal Supreme Court decision interprets the ordinance. These connections exist in the sources as textual cross-references, but tracing them manually takes hours; the AI makes them visible.
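The chain of references just described is a directed citation graph, and "making connections visible" is a graph traversal. A minimal sketch, with document names and edges invented for illustration:

```python
from collections import deque

# Directed citation graph: document -> documents it references.
# Names and edges are illustrative, not a real dataset.
references = {
    "FINMA Circular":    ["Banking Act"],
    "Banking Act":       ["Banking Ordinance"],
    "BGer decision":     ["Banking Ordinance"],
    "Banking Ordinance": [],
}

def reachable(start: str) -> set[str]:
    """Breadth-first traversal: everything a document cites, directly or transitively."""
    seen: set[str] = set()
    queue = deque([start])
    while queue:
        doc = queue.popleft()
        for ref in references.get(doc, []):
            if ref not in seen:
                seen.add(ref)
                queue.append(ref)
    return seen

print(sorted(reachable("FINMA Circular")))  # ['Banking Act', 'Banking Ordinance']
```

In a real system the edges would be extracted from the source texts themselves (article numbers, SR numbers, case citations), which is why verified primary sources are a precondition for reliable cross-linking.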

Summarisation and contextualisation. The AI can summarise a 30-page court decision in 5 paragraphs. But the summary references the original passage in the decision. The user can switch to the full text at any time.

Data Sovereignty as a Prerequisite

For Swiss lawyers, fiduciaries, and compliance officers, the question of where data resides is non-negotiable.

Art. 13 BGFA (Federal Act on the Free Movement of Lawyers) obliges lawyers to maintain professional secrecy. Sending a search query containing client data to a US server is a potential violation.

FADP Art. 16-17 governs cross-border disclosure of personal data. Every query to an AI API that contains personal data and is processed on a server outside Switzerland must meet the requirements for cross-border disclosure.

The US CLOUD Act gives US authorities access to data stored by US companies, regardless of storage location. “Data in Switzerland” at a US provider is not a guarantee.

A system that works with source-verified data AND is hosted in Switzerland structurally eliminates these risks. No data leaves Switzerland. No US provider has access. No cross-border disclosure needed.

Who Is Building This?

The market for specialised AI in regulated industries is growing explosively. In March 2026, 750 million dollars flowed into legal AI startups. Harvey (US) is valued at 11 billion dollars, Legora (formerly Leya) at 5.5 billion. Swiss-Noxtua is building a German-language legal AI.

But: Harvey focuses on US and UK law. Legora likewise. Swiss-Noxtua works through publishers, not through government sources. None of them offers the combination of Swiss law, Swiss tax law, FINMA regulation, and cantonal coverage.

The Swiss regulatory landscape is complex enough to justify a specialised provider. 26 cantons, four languages, a federal system with independent lawmaking at every level. A global provider cannot cover this. It requires local data infrastructure.

The EU AI Act as Catalyst

On 2 August 2026, the high-risk requirements of the EU AI Act take effect. Every Swiss company deploying AI systems in the EU market must present technical documentation, risk management systems, and conformity assessments. General AI models do not help with meeting these requirements. On the contrary: a system that hallucinates is a compliance risk.

Source-verified systems delivering traceable answers are the building block for AI compliance in regulated industries. Not because “verified” sounds good. But because regulation demands it.

Conclusion

Frontier AI models will get better. Faster. More versatile. But they will not take responsibility for telling you whether your employment contract was validly terminated, whether your bank correctly implements FINMA Circular 2023/1, or whether your client is being double-taxed across three cantons.

For these questions, specialised systems are needed. Systems based on official sources. Systems that make every answer verifiable. Systems that stay in Switzerland.

General AI solves general problems. Regulated industries have specific problems. The difference lies not in the AI. It lies in the data.

