EU AI Act: What Swiss Companies Need to Know Now

The core requirements of the EU AI Act apply from August 2026. Swiss companies with EU business must prepare. Deadlines, obligations, and concrete steps for compliance.

The EU AI Act is the world’s first comprehensive law regulating artificial intelligence, and it affects far more than companies headquartered in the EU. Swiss companies that supply products or services to the European market, deploy AI systems that affect EU citizens, or process data from the EU fall within its scope. Enforcement of the key provisions begins on 2 August 2026. The time to act is now.

The EU AI Act at a Glance

The law was adopted by the European Parliament in March 2024 and entered into force on 1 August 2024. It follows a risk-based approach: the higher the risk posed by an AI system, the stricter the requirements.

The four risk levels at a glance:

Unacceptable risk. These AI systems are prohibited. They include government social scoring, real-time remote biometric identification in publicly accessible spaces, and AI that manipulates human behaviour. This category is not relevant for most Swiss companies.

High risk. AI systems deployed in safety-critical areas or for decisions affecting fundamental rights. These include AI in recruitment, creditworthiness assessments, insurance evaluations, legal proceedings, educational admissions, and law enforcement. These systems are subject to strict requirements for documentation, transparency, human oversight, and quality management.

Limited risk. Chatbots, AI-generated content, and emotion recognition. Transparency obligations apply here: users must know they are interacting with an AI.

Minimal risk. Spam filters, recommendation algorithms, and similar systems. No specific obligations beyond existing law.

The Deadlines and What They Mean

Implementation proceeds in stages:

February 2025 (already in force). Prohibited AI practices are banned. Companies using social scoring systems or manipulative AI must have shut them down.

August 2026 (the critical deadline). All requirements for high-risk AI systems take effect. Providers must have completed conformity assessments, produced technical documentation, put quality management systems in place, and implemented post-market monitoring plans. Deployers of high-risk systems must ensure human oversight and inform users transparently.

August 2027. The remaining transition periods end: general-purpose AI models that were already on the market before August 2025 must be brought into compliance (obligations for newly released models have applied since 2 August 2025), as must high-risk AI embedded in products covered by EU product legislation. This primarily affects providers of foundation models and product manufacturers.

For Swiss companies, August 2026 is the decisive date.

Why Swiss Companies Are Affected

Switzerland has not transposed the EU AI Act into national law. Nevertheless, Swiss companies are directly affected for several reasons.

Extraterritorial reach. The EU AI Act applies to any provider or deployer of AI systems whose output is used in the EU. A Swiss law firm producing AI-assisted legal analyses for EU clients falls under the law just as much as a Swiss fintech whose algorithm influences credit decisions for EU citizens. The extraterritorial logic mirrors the GDPR, which has applied to Swiss companies with EU business since 2018.

Client expectations. EU companies will increasingly demand AI Act compliance from their Swiss suppliers and partners. Those who cannot demonstrate compliance will lose contracts. This particularly affects sectors such as financial services, pharma, legal advisory, and management consulting.

Swiss regulation will follow. The Federal Council commissioned an analysis of possible approaches to AI regulation in November 2023. Swiss rules on AI are foreseeable, and they are likely to align with the EU AI Act, just as the FADP aligned with the GDPR. Companies that prepare now will avoid doing the work twice.

Market access. The Swiss economy is deeply integrated into the EU single market. Bilateral agreements and mutual recognition depend on Swiss standards remaining compatible with EU standards. AI systems that do not meet EU requirements jeopardise market access in regulated sectors.

Which Obligations Apply to You

Your obligations depend on whether you are a provider or a deployer of an AI system.

As a provider of an AI system that you have developed yourself or market under your own name, you must establish a comprehensive quality management system. This includes technical documentation describing how the system works, what data it was trained on, and what performance metrics it achieves. You must conduct a conformity assessment and register the system in the EU database. Post-market monitoring is mandatory: you must continuously monitor the system’s performance after deployment.

As a deployer of an AI system obtained from a third-party provider and used within your organisation, you must ensure the system is used in accordance with the provider’s instructions. You must organise human oversight, verify input data for relevance and quality, and report incidents to the provider and the competent authority. If you use a high-risk system for decisions about natural persons, affected individuals must have the right to request an explanation of the decision.

Fines and Sanctions

The sanctions are substantial and graduated by severity of the violation:

  • Use of prohibited AI practices: up to EUR 35 million or 7% of global annual turnover, whichever is higher
  • Violation of high-risk requirements: up to EUR 15 million or 3% of global annual turnover, whichever is higher
  • False or misleading statements to authorities: up to EUR 7.5 million or 1% of global annual turnover, whichever is higher

Reduced caps apply to SMEs and startups (the lower of the two amounts serves as the ceiling), but the fines remain significant.

A Practical Roadmap for Preparation

The following steps help Swiss companies systematically prepare for August 2026.

Create an AI inventory. List all AI systems in your organisation. This includes not only in-house systems but also purchased tools, SaaS solutions with AI features, and APIs from external providers. Many companies underestimate the number of AI systems they have in operation.

Conduct a risk classification. Assign each system to one of the four risk levels. Pay particular attention to AI used in decisions about individuals: recruitment processes, credit checks, insurance applications, legal advisory, medical triage.
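
To make these two steps concrete, here is a minimal sketch in Python of what an inventory record with a risk classification can look like. The field names, example systems, and classifications are hypothetical; the actual classification must follow the legal definitions of the EU AI Act and will usually involve legal review.

```python
from dataclasses import dataclass

# Minimal, hypothetical sketch of an AI inventory record with a risk level
# drawn from the EU AI Act's four categories.
RISK_LEVELS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    name: str                  # internal name of the system or tool
    vendor: str                # "in-house" or the external provider
    purpose: str               # what the system is used for
    affects_individuals: bool  # does it influence decisions about natural persons?
    risk_level: str            # one of RISK_LEVELS, assigned after legal review

inventory = [
    AISystemRecord("CV screening assistant", "ExampleVendor AG",
                   "pre-sorting job applications", True, "high"),
    AISystemRecord("Website support chatbot", "in-house",
                   "answering customer questions", False, "limited"),
    AISystemRecord("Spam filter", "ExampleMail Ltd",
                   "filtering inbound e-mail", False, "minimal"),
]

# Systems that affect individuals or are classed as high risk deserve the closest scrutiny.
for record in inventory:
    if record.affects_individuals or record.risk_level == "high":
        print(f"Review required: {record.name} ({record.risk_level} risk)")
```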

Gap analysis for high-risk systems. For each high-risk system, check whether the law’s requirements are met: technical documentation, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity.
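
The bookkeeping for a gap analysis can stay equally simple. The following sketch, again with invented statuses, checks one system against the requirement areas listed above and returns those not yet covered; it illustrates the record-keeping and is not a substitute for a legal assessment.

```python
# Hypothetical sketch: track open gaps per high-risk system against the
# requirement areas named above. The example statuses are invented.
REQUIREMENT_AREAS = [
    "technical documentation",
    "data governance",
    "transparency",
    "human oversight",
    "accuracy",
    "robustness",
    "cybersecurity",
]

def open_gaps(status):
    """Return the requirement areas that are not yet covered for one system."""
    return [area for area in REQUIREMENT_AREAS if not status.get(area, False)]

# Example: status of the hypothetical CV screening assistant from the inventory sketch.
cv_screening_status = {
    "technical documentation": True,
    "data governance": False,
    "transparency": True,
    "human oversight": False,
    "accuracy": True,
    "robustness": True,
    "cybersecurity": True,
}

print(open_gaps(cv_screening_status))
# -> ['data governance', 'human oversight']
```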

Review supplier contracts. If you use third-party AI tools, ensure your contracts include the necessary transparency and cooperation obligations. Can your providers deliver the documentation you need for your own compliance?

Build documentation. The EU AI Act is documentation-intensive. Start early with the required records: risk analyses, quality management records, monitoring reports, incident logs.

Training and awareness. All employees who operate AI systems or use their outputs must have a sufficient understanding of the technology and its limitations. The EU AI Act explicitly requires “AI literacy” among the staff of providers and deployers, an obligation that has applied since February 2025.

The Right Infrastructure as a Foundation

Compliance starts with choosing the right AI infrastructure. General-purpose tools such as ChatGPT or Gemini are useful for many tasks, but on their own they typically do not provide the traceability, auditability, and data-sovereignty controls that regulated, high-risk use cases demand. For such applications, companies need AI systems where every output can be traced back to its source, where the data is documented and verified, and where the infrastructure meets the requirements for data protection and sovereignty.

Enclava was built precisely for these requirements. As a Swiss AI platform for regulated industries, Enclava offers source-attributed knowledge bases, fully hosted in Switzerland, with complete documentation and auditability. Every output can be traced to the source document. Every data source is documented and verified.

If you want to prepare your organisation for the requirements of the EU AI Act, the right infrastructure is the first step. Learn more at enclava.ch or contact us at [email protected].
