EU AI Act: What Swiss Companies Must Do by August 2026
126 days left. On 2 August 2026, the requirements for high-risk AI systems under EU Regulation 2024/1689 (the EU AI Act) take full effect. Swiss companies that place AI systems on the EU market, or whose AI outputs are used in the EU, are directly affected.
This article explains which provisions apply to Swiss firms, how to assess your own exposure, and which steps are needed now.
Switzerland Is Not in the EU. Why Does This Affect Us?
The EU AI Act has extraterritorial reach. Art. 2 of the regulation covers three scenarios that include Swiss companies:
Art. 2(1)(a): AI system is placed on the EU market. A Swiss software company selling an AI-powered product to EU customers is captured as a “provider” within the meaning of Art. 3(3). It does not matter where the company is headquartered.
Art. 2(1)(c): The outputs of the AI system are used in the EU. Even a Swiss company with no direct EU customers is covered if the output of its AI system is used within the EU. Example: a Swiss firm generates credit scores that an Austrian bank uses.
Art. 2(1)(d): Import or distribution. EU-based companies that import or distribute Swiss AI systems are also subject to the regulation and will contractually require compliance from their Swiss suppliers.
In practice: every Swiss company that sells AI-powered software or services to EU customers, operates an AI system serving EU users, or delivers AI-generated outputs that take effect in the EU falls within scope.
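The three Art. 2 scenarios above can be read as a simple "any of" test. The sketch below is illustrative only, not legal advice; the function and parameter names are our own shorthand for the scenarios described in the text.

```python
# Illustrative scope check for the three third-country scenarios in Art. 2.
# Names are our own paraphrases of the regulation text, not official terms.

def in_scope_of_eu_ai_act(
    places_on_eu_market: bool,        # Art. 2(1)(a): system placed on the EU market
    output_used_in_eu: bool,          # Art. 2(1)(c): system output used in the EU
    eu_importer_or_distributor: bool, # Art. 2(1)(d): EU partner imports/distributes it
) -> bool:
    """Return True if any of the three third-country scenarios applies."""
    return places_on_eu_market or output_used_in_eu or eu_importer_or_distributor

# Example from the text: no direct EU customers, but an Austrian bank
# uses the firm's credit scores.
print(in_scope_of_eu_ai_act(False, True, False))  # True
```

Note that a single "yes" suffices: the scenarios are alternatives, not cumulative conditions.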
What Already Applies
Not all provisions take effect only in August 2026. The following have applied since 2 February 2025:
- Prohibited Practices (Art. 5): Social scoring (by public or private actors), subliminal manipulation, exploitation of vulnerabilities, untargeted scraping of biometric data, emotion recognition in workplaces and educational institutions. Companies operating such systems are already violating applicable EU law.
- AI Literacy (Art. 4): All companies deploying AI systems must ensure their staff has sufficient AI literacy. This obligation already applies.
The Risk Classification: Which Category Does Your System Fall Into?
The EU AI Act classifies AI systems into four tiers. Obligations depend directly on this classification.
Tier 1: Prohibited Systems (Art. 5)
Systems prohibited by Art. 5 may not be operated. Period. No exceptions for Swiss companies. The penalty: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Tier 2: High-Risk Systems (Art. 6 + Annex III)
This category affects most Swiss companies deploying AI. Annex III defines eight areas:
- Biometrics: Remote identification, emotion recognition (non-prohibited applications)
- Critical Infrastructure: Safety components in transport, water, gas, electricity
- Education: Admissions decisions, exam grading, learning behaviour monitoring
- Employment: CV screening, interview assessment, promotion/dismissal decisions, task allocation, performance monitoring
- Access to Essential Services: Credit scoring, insurance risk assessment, social welfare eligibility
- Law Enforcement: Recidivism risk assessment, evidence analysis
- Migration: Risk assessment, document verification
- Administration of Justice: AI support for courts
Additionally: an Annex III system that performs profiling of natural persons is always considered high-risk (Art. 6(3)).
Exception Under Art. 6(3)
Even if a system falls within an Annex III area, it may be exempt from the high-risk classification if it poses no significant risk of harm and does not materially influence decision-making. Four conditions, any one of which suffices:
- (a) It performs a narrowly defined procedural task (e.g., document sorting)
- (b) It improves the result of a human activity already completed
- (c) It detects patterns without replacing human assessments
- (d) It performs a preparatory task (e.g., legal research, indexing)
This exception never applies when the system profiles natural persons. And even when claiming the exception, the system must still be registered in the EU database and the justification documented (Art. 49(2)).
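The classification rules above reduce to a short decision procedure: Annex III area, profiling override, then the four exception conditions. The following is a hedged sketch of that logic; a real assessment needs legal review, and the parameter names are our paraphrases of the conditions, not regulation terms.

```python
# Illustrative sketch of the Art. 6(3) derogation logic, not legal advice.
# Annex I (product-safety) cases are out of scope of this sketch.

def is_high_risk(annex_iii_area: bool,
                 profiles_natural_persons: bool,
                 narrow_procedural_task: bool = False,
                 improves_completed_human_activity: bool = False,
                 detects_patterns_only: bool = False,
                 preparatory_task: bool = False) -> bool:
    if not annex_iii_area:
        return False  # Annex III not triggered at all
    if profiles_natural_persons:
        return True   # profiling: the exception never applies
    # Any one of the four Art. 6(3) conditions is enough for the exception.
    exception_applies = (narrow_procedural_task
                         or improves_completed_human_activity
                         or detects_patterns_only
                         or preparatory_task)
    # Even when exempt, registration and documentation (Art. 49(2)) remain due.
    return not exception_applies

# CV screening that profiles candidates: always high-risk.
print(is_high_risk(annex_iii_area=True, profiles_natural_persons=True))  # True
# Document sorting as a narrow procedural task: exempt.
print(is_high_risk(True, False, narrow_procedural_task=True))  # False
```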
Tier 3: Limited Risk (Art. 50)
Transparency obligations for:
- Chatbots: Must disclose that the user is interacting with an AI system
- AI-generated content: Must be labelled in a machine-readable format
- Deepfakes: Must be clearly identifiable as artificially generated
Tier 4: Minimal Risk
No specific obligations. Voluntary codes of conduct recommended.
What High-Risk Systems Must Fulfil
For each high-risk system, these requirements must be met before it is placed on the EU market:
| Requirement | Article | Estimated Effort |
|---|---|---|
| Risk management system | Art. 9 | 2-4 weeks setup, ongoing |
| Data governance | Art. 10 | 2-3 weeks |
| Technical documentation | Art. 11 + Annex IV | 3-6 weeks |
| Logging | Art. 12 | 1-2 weeks |
| Transparency and usage instructions | Art. 13 | 1-2 weeks |
| Human oversight | Art. 14 | 1-2 weeks |
| Accuracy, robustness, cybersecurity | Art. 15 | 2-4 weeks |
| Quality management system | Art. 17 | 2-3 weeks |
| Conformity assessment | Art. 43 | 1-4 weeks |
| EU declaration of conformity | Art. 47 | 1 day (after completing all others) |
| CE marking | Art. 48 | 1 day |
| Registration in EU database | Art. 49 | 1 day |
Additionally, Swiss companies must designate an authorised representative established in the EU (Art. 22). Without this representative, no high-risk system may be placed on the EU market.
The Fines
| Type of Violation | Maximum Fine | Alternative |
|---|---|---|
| Prohibited practices (Art. 5) | EUR 35,000,000 | 7% of global annual turnover |
| High-risk requirements (Art. 9-17) | EUR 15,000,000 | 3% of global annual turnover |
| False information to authorities | EUR 7,500,000 | 1% of global annual turnover |
For SMEs, the lower amount applies (Art. 99(6)). A Swiss SME with EUR 5 million in revenue risks a maximum of EUR 150,000 for violation of high-risk requirements (3% of EUR 5 million).
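The SME cap is a "lower of the two" rule, which the worked figure above follows. A minimal sketch of that arithmetic, using the table's numbers (amounts in EUR; not legal advice):

```python
# Art. 99(6) SME rule: the lower of the fixed cap and the turnover
# percentage applies. Function name is our own.

def max_fine_sme(annual_turnover: float, fixed_cap: float, pct: float) -> float:
    """Maximum fine for an SME: the lower of the two amounts."""
    return min(fixed_cap, annual_turnover * pct)

# Swiss SME, EUR 5 million turnover, high-risk violation (Art. 9-17):
print(max_fine_sme(5_000_000, 15_000_000, 0.03))  # 150000.0
```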
FADP and EU AI Act: Dual Compliance
Swiss companies deploying AI and processing personal data must comply with both sets of rules simultaneously. The key overlaps:
- Automated individual decisions: FADP Art. 21 requires information and the right to human review. EU AI Act Art. 14 requires human oversight by design. Both apply.
- Data protection impact assessment: FADP Art. 22 (DPIA) and EU AI Act Art. 27 (fundamental rights impact assessment) are both required but not identical. They do not replace each other.
- Transparency: FADP Art. 19 (duty to inform) and Art. 13 EU AI Act (usage instructions) have different requirements. Both must be fulfilled.
The 18-Week Roadmap
Those who start today have 18 weeks until 2 August 2026. That is tight, but feasible.
Weeks 1-4: Stocktaking. Inventory all AI systems. Conduct risk classification for each system. Gap analysis against the requirements.
Weeks 5-10: Documentation. Prepare technical documentation per Annex IV. Data governance documentation. Build risk management system.
Weeks 11-14: Assessment. Internal testing (accuracy, robustness, bias). Quality management system. Conformity assessment.
Weeks 15-18: Completion. CE marking. EU database registration. Activate post-market monitoring. Complete training.
Companies that only start in May face a genuine problem: 12 weeks is barely enough for the documentation alone.
What Mont Virtua Offers
We deliver a structured compliance analysis: risk classification of your AI systems, gap analysis against all relevant requirements, and a prioritised action plan. Based on the regulation text, not on assumptions. Delivery in 10 working days.
This is a technical compliance analysis. Not legal advice.
CTA: [Subscribe to our newsletter: regulatory updates on the EU AI Act delivered directly to your inbox.]