What the EU AI Act Means for Swiss Fiduciaries

The EU AI Act does not only affect tech companies. Fiduciaries who use AI tools or serve clients with EU ties must act. A practical guide.

On 2 August 2026, most provisions of the EU AI Act become applicable. This concerns European companies. But not only them.

If you serve clients with EU business as a fiduciary, use AI-powered tools for your work, or provide services whose outputs are used in the EU, it concerns you too. Not theoretically. Concretely.

This article explains what obligations fiduciary firms face, which scenarios are relevant, and what you should do in the next 18 weeks.

Why Fiduciaries Are Affected

Scenario 1: You Use AI Tools in Your Work

Abacus has integrated AI features. Bexio is working on it. You use ChatGPT, Copilot, or another LLM for summaries, letter drafts, or research. Perhaps you use an AI-powered tool for accounting reviews.

What the EU AI Act says: As a “deployer” (user of an AI system), you have obligations under Art. 26 (which governs deployers of high-risk AI systems) if:

  • The AI system was placed on the EU market (most SaaS tools are)
  • The output of the AI system is used in the EU (e.g., a report for an EU client)

Deployer obligations:

  • Use in accordance with manufacturer instructions (Art. 26(1))
  • Human oversight by competent persons (Art. 26(2))
  • Ensure relevant input data (Art. 26(4))
  • Retain logs for at least six months (Art. 26(6))
  • Inform employees exposed to the system (Art. 26(7))

Scenario 2: Your Clients Use AI

Your clients increasingly ask: “Are we allowed to do this?” An industrial company wants automated quality control with image recognition. An online retailer uses a recommendation engine. An HR provider wants AI-assisted applicant screening.

What the EU AI Act requires: Depending on the risk classification of the AI system and on the client’s role (provider or deployer), your clients must:

  • Maintain an AI inventory
  • Register high-risk systems (Art. 49)
  • Prepare technical documentation (Art. 11)
  • Establish a quality management system (Art. 17)
  • Conduct a conformity assessment (Art. 43)

As a fiduciary, you are often the first point of contact for compliance questions. When clients ask what they need to do, you should have a well-founded answer.

Scenario 3: You Work on Cross-Border Mandates

You prepare tax returns for cross-border commuters. You advise holding companies with EU subsidiaries. You use AI-powered software to calculate intercantonal or international tax consequences.

What becomes relevant: If the output of your AI-assisted work is used in an EU context (e.g., for tax planning of an EU subsidiary), the system may fall under Art. 2(1)(c): “Providers or deployers of AI systems, […] insofar as the output produced by the AI system is used in the Union.”

The Risk Pyramid for Fiduciaries

The EU AI Act categorises AI systems into four levels:

Prohibited (Art. 5)

Generally not relevant for fiduciaries. Covers: social scoring, subliminal manipulation, biometric mass surveillance.

High-Risk (Art. 6, Annex III)

Relevant for fiduciaries:

  • AI systems for creditworthiness assessment (if you support clients with this)
  • AI systems for HR decisions (applicant screening, performance evaluation)
  • AI systems that influence access to essential services

High-risk obligations:

  • Risk management system (Art. 9)
  • Data governance (Art. 10)
  • Technical documentation (Art. 11)
  • Automatic logging (Art. 12)
  • Transparency (Art. 13)
  • Human oversight (Art. 14)
  • Accuracy, robustness, cybersecurity (Art. 15)

Limited (Art. 50)

Most common for fiduciaries:

  • Chatbots and AI assistants: transparency obligation (user must know they are interacting with AI)
  • AI-generated texts: labelling obligation when shared with clients
  • Emotion recognition or biometric categorisation: information for affected persons

Minimal

  • Pure auxiliary tools (spell-check, calendar AI, spam filters): no specific obligations. Recommendation: inventory them anyway.

What Fiduciaries Should Do Now

Step 1: Create an Inventory (This Week)

List all AI systems in use at your firm. Including the unofficial ones. Including those that individual employees use independently.

Checklist:

  • Which software with AI features do we use? (Abacus, Bexio, Microsoft 365 Copilot, etc.)
  • Does anyone use ChatGPT, Claude, Gemini, or other LLMs for work?
  • Do we have AI-powered analysis tools in use? (Benchmarking, risk assessment, auditing)
  • Do third-party providers whose services we use offer AI features?
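For firms that want to keep this inventory machine-readable, a minimal sketch in Python; the entries and field values are hypothetical examples, not recommendations:

```python
# A minimal AI-system inventory as structured records.
# All entries and field values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    official: bool  # sanctioned by the firm, or used independently by staff?

inventory = [
    AISystem("Microsoft 365 Copilot", "Microsoft", "drafting, summaries", True),
    AISystem("ChatGPT", "OpenAI", "ad-hoc research", False),  # unofficial use
]

for system in inventory:
    tag = "" if system.official else " (unofficial)"
    print(f"{system.name}{tag}")
```

Recording unofficial tools explicitly keeps the shadow-IT question from the checklist visible in the same list.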

Step 2: Check EU Relevance (Next Week)

For each identified AI system:

  • Is the output used for or by EU clients?
  • Does the system process data of EU persons?
  • Is the provider of the system active in the EU?

If you answer yes to any of these questions, the EU AI Act is relevant to that system.
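The three questions above reduce to a simple any-of-three rule; a sketch:

```python
# The EU-relevance test from Step 2: the EU AI Act is relevant to a system
# if any one of the three questions is answered with yes.

def eu_relevant(output_used_in_eu: bool,
                processes_eu_data: bool,
                provider_active_in_eu: bool) -> bool:
    return output_used_in_eu or processes_eu_data or provider_active_in_eu

# Hypothetical example: a tool that only processes data of EU persons.
print(eu_relevant(False, True, False))  # True
```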

Step 3: Risk Classification (Weeks 3-4)

For each relevant system:

  • Which risk category does it fall into?
  • Are we a provider or a deployer (user)?
  • What specific obligations arise?
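The classification can be kept alongside the inventory; a sketch mapping the four risk levels to the follow-up each triggers, condensed from the risk pyramid above (simplified summaries, not legal advice):

```python
# The four risk levels of the EU AI Act, each mapped to a condensed summary
# of the follow-up it triggers. Summaries are simplified, not legal advice.

OBLIGATIONS = {
    "prohibited": "do not use (Art. 5)",
    "high-risk": "risk management, documentation, oversight, logging (Art. 9-15)",
    "limited": "transparency notices and labelling (Art. 50)",
    "minimal": "no specific obligations; inventory anyway",
}

# Hypothetical classifications of two systems from the scenarios above:
print(OBLIGATIONS["high-risk"])  # e.g. an applicant-screening tool
print(OBLIGATIONS["limited"])    # e.g. a client-facing chatbot
```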

Step 4: Compliance Measures (Weeks 5-8)

For high-risk systems:

  • Create documentation or request it from the provider
  • Define processes for human oversight
  • Set up logging

For limited risks:

  • Implement transparency notices
  • Inform clients when AI outputs contribute to their work

Step 5: Prepare Client Advisory (Ongoing)

Your clients will ask. Prepare yourself:

  • Standard answers for the most common scenarios
  • Referral to specialised advice for complex cases
  • Checklist for clients who use AI

The FADP Connection

The EU AI Act does not stand alone. It complements existing data protection law. For Swiss fiduciaries, this means a dual burden:

FADP (SR 235.1):

  • Art. 19(4): duty to inform in case of automated individual decisions
  • Art. 21: right to disclosure of the logic behind automated individual decisions
  • Art. 22: data protection impact assessment where there is a high risk

EU AI Act:

  • Art. 26: deployer obligations (partly overlapping with FADP)
  • Art. 50: transparency obligations (supplementary to FADP)
  • Art. 86: right to explanation for high-risk systems

The overlap is intentional. The EU Commission designed the AI Act as a complement to the GDPR. For Swiss firms that must comply with both sets of rules, an integrated compliance approach is recommended: one inventory, one process, one set of documentation covering both.

What Happens in Case of Violation?

EU AI Act:

  • Prohibited practices: up to EUR 35 million or 7% of global annual turnover, whichever is higher (Art. 99(3))
  • High-risk violations: up to EUR 15 million or 3% of turnover, whichever is higher (Art. 99(4))
  • Supplying false information: up to EUR 7.5 million or 1% of turnover, whichever is higher (Art. 99(5))
  • For SMEs: the lower of the two amounts applies in each case (Art. 99(6))
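The cap mechanics can be made concrete: for most companies the maximum is the higher of the fixed amount and the turnover percentage, for SMEs the lower (Art. 99(6)). A sketch using the figures from the list above; the turnover values are hypothetical:

```python
# Maximum fine caps under Art. 99 EU AI Act. For most companies the cap is
# the HIGHER of the fixed amount and the turnover percentage; for SMEs it
# is the LOWER of the two (Art. 99(6)). Turnover figures are hypothetical.

def fine_cap(fixed_eur: int, pct: int, turnover_eur: int,
             is_sme: bool = False) -> float:
    pct_amount = turnover_eur * pct / 100
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited practice (Art. 99(3)), large company, EUR 1 billion turnover:
print(fine_cap(35_000_000, 7, 1_000_000_000))     # 70000000.0
# Same violation, SME with EUR 10 million turnover:
print(fine_cap(35_000_000, 7, 10_000_000, True))  # 700000.0
```

This illustrates why the headline maximums stay theoretical for a small fiduciary firm: the SME rule pins the cap to the smaller turnover-based figure.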

FADP:

  • Up to CHF 250,000 against natural persons (not the firm, but the responsible individual)
  • Art. 60: violation of the duty to inform
  • Art. 61: violation of the duty of care when processing data through third parties

The fines are scaled to company size. For a fiduciary firm with 20 employees, the EU maximum penalties are theoretical. But even an audit with conditions can be operationally burdensome.

The Opportunity

Compliance is effort. But for fiduciaries, the EU AI Act is also an opportunity. Your clients need help. Most SMEs will not read the 180 pages of the regulation themselves. They will ask their fiduciary.

Those who can provide competent advice here position themselves as the contact for a topic that grows more important every year. AI compliance will not disappear. It will increase.

The question is not whether the EU AI Act affects fiduciaries. The question is whether you are prepared when the first client asks.


This article is for informational purposes and does not constitute legal advice. For implementing regulatory requirements, we recommend consulting qualified professionals.
