Why ChatGPT Can't Give You Legal Advice (And What Can)

OpenAI, Anthropic, and Google have all banned their models from giving personalized legal advice. Here's why general-purpose AI falls short for legal work, and what the alternatives are.

Ask ChatGPT a legal question and you will get a confident, well-structured answer. It will cite principles, reference concepts, and sound like it knows what it is talking about. Then it will add a disclaimer: “I’m not a lawyer, and this is not legal advice.”

That disclaimer is not politeness. It is policy. And behind it lies a fundamental problem that most professionals have not fully grasped.

The Ban You Did Not Notice

In October 2025, OpenAI updated its usage policies to explicitly prohibit the use of ChatGPT for personalized legal, medical, and financial advice. Anthropic (Claude) and Google (Gemini) have similar restrictions. These are not suggestions. They are terms of service that, if violated, can result in account termination.

The reasoning is straightforward. General-purpose language models are trained on broad internet data. They do not have access to current legal databases. They cannot verify whether a law has been amended, repealed, or superseded. They do not know your jurisdiction, your specific circumstances, or the procedural requirements that might apply to your case.

When ChatGPT tells you about Swiss tenancy law, it might reference the OR (Code of Obligations) correctly at a general level. But it will not know that your canton has specific implementing provisions. It will not know about the recent Federal Supreme Court decision that changed the interpretation of Article 271. It will not flag that the notice period differs depending on when the lease was signed. It will not cite the actual article numbers with confidence, because it was not trained on a structured legal database. It was trained on the internet.

For a casual question, this is fine. For a professional relying on the answer, it is dangerous.

The Hallucination Problem

Language models generate text by predicting the most likely next word in a sequence. They are optimized for fluency, not accuracy. This creates a specific failure mode in legal contexts: confident fabrication.

Studies have documented this extensively. In 2023, a New York attorney submitted a court filing containing six fabricated case citations generated by ChatGPT. The cases did not exist. The model had invented names, docket numbers, and quotations because they sounded plausible. The attorney was sanctioned.

This is not a rare edge case. Research from Stanford’s Human-Centered AI Institute found that legal AI tools hallucinate between 17% and 33% of the time when generating case citations. General-purpose models perform worse than specialized legal tools, but even specialized tools have significant error rates.

In Swiss law, the problem is compounded by multilingualism. Federal laws exist in German, French, and Italian, with each language version being authoritative. A model trained primarily on English-language data will have shallow coverage of Swiss legal texts, particularly cantonal law and lower court decisions that are less likely to appear in its training data.

What Professionals Actually Need

The gap between what ChatGPT offers and what legal professionals need is not about intelligence. It is about infrastructure.

Current, verified sources. Swiss law changes constantly. Federal laws are amended through parliamentary process. Cantonal regulations update on different schedules. Court decisions create new interpretations. FINMA circulars are revised. A useful legal AI must work from a database that is updated continuously and verified against official government sources.

Source attribution. When a professional cites a legal provision, they need the exact article, paragraph, and subparagraph. They need the SR number. They need to know if the provision has been amended since their last check. A system that says “according to Swiss law” without pointing to the specific provision is useless in professional practice.

Jurisdictional awareness. Switzerland has 26 cantons, each with its own procedural law, tax law, and administrative regulations. A question about building permits in Zurich has a fundamentally different answer than the same question in Ticino. General-purpose AI has no mechanism for handling this granularity.

Citation graphs. Legal provisions do not exist in isolation. A single article in the OR might be interpreted by dozens of Federal Supreme Court decisions, referenced in FINMA circulars, and modified by cantonal implementation ordinances. Professionals need to see these relationships. A flat list of search results is insufficient.

Multilingual coverage. A Lausanne attorney working in French needs the French-language version of the law, the French-language court decisions, and the ability to cross-reference with the German-language version when the French text is ambiguous. This is not a translation problem. It is a data architecture problem.
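The citation-graph requirement above is, at its core, a data-structure problem. A minimal sketch of such a graph might look like the following; the node labels ("OR Art. 271", "BGE 148 III 215", the FINMA circular) are illustrative placeholders, not a real schema:

```python
from collections import defaultdict

class CitationGraph:
    """Directed graph of legal documents: an edge A -> B means A cites B."""

    def __init__(self):
        self.edges = defaultdict(set)    # source -> targets it cites
        self.reverse = defaultdict(set)  # target -> sources that cite it

    def add_citation(self, source, target):
        self.edges[source].add(target)
        self.reverse[target].add(source)

    def interpreted_by(self, provision):
        """All documents that cite a given provision, e.g. every
        court decision or circular interpreting an OR article."""
        return sorted(self.reverse[provision])

g = CitationGraph()
g.add_citation("BGE 148 III 215", "OR Art. 271")
g.add_citation("FINMA Circular 2023/1", "OR Art. 271")

print(g.interpreted_by("OR Art. 271"))
# prints ['BGE 148 III 215', 'FINMA Circular 2023/1']
```

A production system would add edge types (interprets, amends, implements) and dates, but even this reverse index answers the question a flat search cannot: "what else do I need to read before relying on this article?"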

The market has recognized this gap. A new category of AI tools is emerging: domain-specific systems built from the ground up for legal work.

Harvey, valued at $8-11 billion, has become the most prominent example. It ingests law firm knowledge, connects to legal databases, and provides AI-assisted legal research with source attribution. Legora, valued at $1.8 billion, takes a similar approach for European legal markets. In the German-speaking world, Noxtua has secured exclusive rights to major legal commentaries.

These tools represent a significant improvement over general-purpose AI. They work from verified legal databases. They provide citations. They understand legal structure and hierarchy.

But they share a limitation: they are single-sector tools. Harvey does legal. That is it. When a lawyer encounters a tax question embedded in a contract review, or a compliance issue that spans banking regulation and corporate law, they hit the boundary of what a legal-only tool can do.

What Would Actually Solve the Problem

The solution is not a better chatbot. It is a different architecture altogether: domain-specific AI that works from structured, verified, continuously updated knowledge bases covering the full range of regulated knowledge.

This is the approach known as Retrieval-Augmented Generation, or RAG. Instead of relying on a model’s training data (which is static, unverified, and incomplete), RAG systems retrieve relevant information from curated databases before generating a response. Every answer is grounded in a specific source document. Every citation can be verified. Every claim can be traced.

For Swiss legal work, this means building from the actual data: all 27,795 federal and cantonal laws, over a million court decisions from 115 courts, 1.4 million citation edges connecting laws to decisions and decisions to other decisions, the SHAB (Swiss Official Gazette of Commerce), FINMA regulatory publications, and cantonal administrative guidelines.

When a RAG system answers a question about tenancy law in Zurich, it retrieves the relevant OR provisions, the applicable cantonal tenancy law, recent BGer (Federal Supreme Court) decisions on point, and any pending legislative changes. The answer includes specific citations that the professional can verify. If the system does not have relevant information, it says so rather than fabricating an answer.
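The retrieve-then-answer loop described above can be sketched in a few lines. This is a deliberately simplified stand-in: the two-entry corpus, the word-overlap scoring, and the threshold are placeholders for a real vector index over verified legal sources, but the control flow (ground every answer in retrieved sources, abstain when nothing is found) is the point:

```python
# Toy corpus mapping citations to source text; a real system would
# query an indexed database of verified legal documents instead.
CORPUS = {
    "OR Art. 271": "Notice of termination of a lease may be challenged ...",
    "OR Art. 266c": "Notice periods for residential leases ...",
}

def retrieve(question, corpus, min_overlap=2):
    """Return (citation, text) pairs whose word overlap with the question
    clears a threshold; an empty list means 'no supporting source found'."""
    q_words = set(question.lower().split())
    hits = []
    for citation, text in corpus.items():
        overlap = len(q_words & set(text.lower().split()))
        if overlap >= min_overlap:
            hits.append((citation, text))
    return hits

def answer(question, corpus):
    hits = retrieve(question, corpus)
    if not hits:
        # Abstain instead of fabricating: the key difference from a bare LLM.
        return "No verified source found for this question."
    sources = ", ".join(citation for citation, _ in hits)
    return f"Grounded answer based on: {sources}"

print(answer("Can a tenant challenge a notice of termination?", CORPUS))
print(answer("What is the VAT rate?", CORPUS))
```

The first query retrieves OR Art. 271 and cites it; the second finds no support and abstains, rather than inventing an answer the way a purely generative model would.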

The Multi-Sector Dimension

Real professional work does not respect sector boundaries. A corporate acquisition involves corporate law, tax law, employment law, competition law, and potentially banking regulation. A real estate transaction touches property law, planning law, environmental law, and tax law.

Single-sector legal AI tools force professionals back to manual research the moment they step outside the legal domain. Multi-sector AI platforms that connect legal, tax, compliance, and regulatory knowledge create a fundamentally different workflow: one where cross-domain questions get cross-domain answers, with verified sources across all relevant jurisdictions.

Where This Is Headed

The market is moving fast. Within two years, using unverified AI for professional legal work will be seen the way we now see unverified financial data: as malpractice waiting to happen. The EU AI Act, whose obligations for high-risk systems apply from August 2026, will accelerate this by requiring transparency and auditability for high-risk AI, a category that includes legal decision support.

Professionals who adopt verified, domain-specific AI tools now will build workflows that their competitors will spend years catching up to. Those who continue relying on ChatGPT for legal research are building on a foundation that was never designed to support the weight.

Mont Virtua’s Enclava platform delivers exactly this: verified AI knowledge bases for regulated industries, starting with Swiss law and FINMA compliance. Every answer is sourced. Every citation is real. Every system is hosted in Switzerland. Visit enclava.ch to see the difference between general-purpose AI and purpose-built legal intelligence.
