FINMA Guidance 08/2024: AI Governance for Swiss Financial Institutions
With Guidance 08/2024, FINMA has for the first time set out a concrete framework for the use of artificial intelligence in supervised institutions. What was previously considered best practice is now an explicit supervisory expectation. For compliance officers in banks and insurers, this means: structure, document, demonstrate.
This article breaks the guidance down into its practical components. Not a summary of a summary paper, but concrete requirements and concrete steps.
What FINMA Requires: Five Pillars
1. AI Inventory
Every supervised institution must maintain a complete inventory of all AI systems. This is simpler in theory than in practice.
An AI inventory to FINMA standards includes:
- System name and description: What does the system do? In which business process is it embedded?
- Classification: Decision-supporting or decision-autonomous? Customer-facing or internal?
- Risk rating: Based on the impact of malfunction. Automated credit assessment has a different risk profile than an internal HR chatbot.
- Data sources: What data flows in? Personal data? Financial data? Customer data?
- Responsibilities: Who is responsible functionally? Who technically? Who granted approval?
- Last review: When was the system last validated?
FINMA expects this inventory to be kept current. Not once a year. Continuously.
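As a data structure, the inventory fields above map naturally onto a structured record. The following Python sketch is illustrative only: field names and the 365-day review cycle are assumptions for this example, not FINMA terminology.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (field names are illustrative)."""
    name: str
    description: str                # what the system does, which process it serves
    decision_autonomous: bool       # decision-autonomous vs. decision-supporting
    customer_facing: bool
    risk_tier: RiskTier
    data_sources: list[str]         # e.g. ["customer data", "financial data"]
    business_owner: str             # functionally responsible
    technical_owner: str
    approved_by: str
    last_review: date

    def review_overdue(self, as_of: date, max_age_days: int = 365) -> bool:
        """Flag records whose last validation is older than the review cycle."""
        return (as_of - self.last_review).days > max_age_days
```

A record like this makes "kept current" checkable: a nightly job can iterate over the inventory and flag every entry where `review_overdue` returns `True`.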
Practical challenge: In most institutions, nobody knows precisely how many AI systems are in use. IT department spreadsheets capture the official tools. But the analyst using ChatGPT for summaries, or the team that integrated an external scoring service, do not appear there. FINMA means everything. Including AI embedded in purchased software.
2. Risk Assessment and Classification
FINMA works with a three-tier risk model for AI systems:
High Risk:
- Systems with direct impact on customers (credit decisions, pricing, risk assessment)
- Systems in anti-money laundering (AMLA Art. 3-7 due diligence obligations)
- Systems that generate regulatory reports (FINMA filings, capital adequacy calculations)
Medium Risk:
- Decision-support systems (analysis tools that feed into human decisions)
- Internal automation affecting employees (HR screening, performance evaluation)
Low Risk:
- Purely internal auxiliary tools with no direct influence on decisions (text summarisation, translation)
The classification determines the documentation effort. High-risk systems require full validation before deployment and regular re-validations. Low-risk systems get by with initial documentation.
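The three-tier logic can be sketched as a simple decision function. The attribute names are assumptions chosen for illustration; the mapping follows the examples listed above.

```python
def classify(customer_impact: bool, regulatory_output: bool,
             decision_support: bool, affects_employees: bool) -> str:
    """Map system attributes onto the three-tier risk model (illustrative logic)."""
    if customer_impact or regulatory_output:
        return "high"      # credit decisions, pricing, AML, regulatory reports
    if decision_support or affects_employees:
        return "medium"    # analysis tools feeding human decisions, HR screening
    return "low"           # purely internal auxiliary tools
```

In practice the classification should be recorded in the inventory together with the rationale, so the rating can be defended during an examination.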
3. Model Governance
For every AI system with medium or high risk, FINMA requires a documented governance framework:
Before Deployment (Pre-Deployment):
- Test results documented
- Bias review conducted (especially for customer-facing decisions)
- Data protection impact assessment under FADP Art. 22 (if personal data is involved)
- Approval by responsible management
During Operation (Post-Deployment):
- Monitoring metrics defined and tracked
- Drift detection: is model quality deteriorating over time?
- Incident process: what happens when the system produces incorrect results?
- Regular revalidation (FINMA recommends at least annually for high-risk)
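Drift detection in practice often means comparing the score distribution observed at validation time with the current one. One common metric for this (not prescribed by FINMA, an assumption of this sketch) is the population stability index; the function below assumes both distributions are already binned into fractions summing to 1.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (fractions summing to 1).

    Common rule of thumb: PSI > 0.25 signals significant drift
    and should trigger the incident or revalidation process.
    """
    eps = 1e-6  # avoid log(0) for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )
```

Identical distributions yield a PSI near zero; a model whose output distribution has shifted markedly will exceed the 0.25 threshold and warrant revalidation.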
Documentation obligation: FINMA expects the entire model governance to be auditable at any time. During an on-site examination, the institution must be able to demonstrate that every AI system has gone through the defined process.
4. Outsourcing and Third-Party Providers
Most institutions purchase AI capabilities rather than developing them in-house. FINMA treats this as outsourcing within the meaning of Circular 2018/3 “Outsourcing”.
Specifically:
- Every external AI provider must be recorded in the outsourcing inventory.
- For material outsourcing: due diligence, contractual audit rights, exit strategy.
- Cloud hosting of AI models falls under FINMA’s cloud guidelines.
- Special care with foreign providers: where is data processed? Does the US CLOUD Act apply? Conflict with FADP Art. 16-17 (data export)?
What is often overlooked: an API connection to an AI service (e.g., a scoring model via REST API) also constitutes outsourcing if the result feeds into a supervised business process.
5. Transparency and Traceability
FINMA requires that AI decisions be traceable. This does not mean every algorithm must be explainable (Explainable AI in the academic sense). It means:
- Towards customers: If an AI system was involved in a decision affecting the customer (credit rejection, premium increase), the customer must be informed, and it must be possible to explain the basis of the decision in comprehensible terms.
- Towards the supervisor: FINMA must be able to understand how the system reaches its result. Not at the level of individual weights, but in the logic of the approach.
- Internally: Management must be able to understand and assess the risks of deployed AI systems without being data scientists themselves.
Deadlines and Transitional Provisions
FINMA has not set a hard transition deadline for Guidance 08/2024. But supervisory practice shows:
- At the next on-site examination, the topic of AI governance will be raised.
- Institutions that cannot present an inventory and governance structure by then can expect supervisory conditions to be imposed.
- For systemically important institutions (Category 1 and 2), expectations are higher and timelines shorter.
Recommendation: The AI inventory should be in place by Q3 2026. The governance framework for all high-risk systems by Q4 2026. This is ambitious but realistic.
Overlap with the EU AI Act
For institutions with EU business, the EU AI Act (Regulation 2024/1689) applies in addition. The FINMA requirements and EU requirements overlap but are not identical:
| Topic | FINMA 08/2024 | EU AI Act |
|---|---|---|
| AI inventory | Required | Required (Art. 26) |
| Risk assessment | Three-tier | Four-tier (prohibited, high, limited, minimal) |
| Transparency | Customer information for AI decisions | Art. 50: labelling obligation |
| Bias review | Recommended | Art. 10: mandatory for high-risk |
| Documentation | Comprehensive | Art. 11: technical documentation |
| Sanctions | Supervisory measures | Up to EUR 35 million or 7% of turnover |
An institution that must comply with both should design the governance process to cover both. Avoid duplication. The FINMA requirements are in most respects a subset of the EU requirements. Those who are EU-compliant generally also meet FINMA expectations.
Where Most Institutions Stand
As of March 2026, based on publicly available information and industry conversations:
- Major banks: Have dedicated AI governance teams. AI inventories exist but are often incomplete.
- Cantonal banks: Mostly no dedicated AI governance. AI usage occurs but without systematic capture.
- Independent asset managers: AI governance exists in very few cases. FINMA expectations are new territory for many.
- Insurers: Heterogeneous. Large insurers have governance structures. Smaller ones do not.
This is not a reproach. Until a year ago, there was no supervisory guidance explicitly requiring AI governance. Now there is.
Practical Steps: From Zero to FINMA-Compliant Setup
Weeks 1-2: Inventory. Capture all AI systems. Not just the obvious ones. Survey every department. Check external APIs. Identify AI embedded in purchased software.
Weeks 3-4: Classification. Rate every system according to the FINMA risk model. Mark high-risk systems.
Weeks 5-6: Governance Framework. Define processes: who approves new AI systems? Who validates existing ones? How is documentation maintained? What checklist is used?
Weeks 7-8: High-Risk Implementation. For each high-risk system: validation report, bias review, data protection impact assessment, management sign-off.
Ongoing: Monitoring. Keep the inventory current. Monitor systems. Document incidents. Schedule annual revalidation.
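Scheduling revalidation can be as simple as deriving a due date from the last review. Only the annual high-risk cycle reflects the recommendation mentioned above; the medium and low cycle lengths in this sketch are assumptions, not FINMA requirements.

```python
from datetime import date, timedelta

# Revalidation cycle per risk tier, in days. "high" follows the at-least-annual
# recommendation for high-risk systems; "medium" and "low" are assumed values.
REVALIDATION_CYCLE_DAYS = {"high": 365, "medium": 730, "low": 1095}

def next_revalidation(last_review: date, risk_tier: str) -> date:
    """Suggested due date for the next revalidation of a system."""
    return last_review + timedelta(days=REVALIDATION_CYCLE_DAYS[risk_tier])
```

Run against the inventory, this yields a revalidation calendar that can be reviewed alongside the monitoring metrics.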
The Data Foundation
The biggest challenge is not the process. It is the knowledge. Compliance officers need to know what FINMA concretely expects, which circulars apply, and how supervisory practice is evolving.
Our platform covers all 27 FINMA tables: circulars, enforcement decisions, warnings, FAQs, guidance papers. Updated nightly. Source-verified. Searchable.
If FINMA Guidance 08/2024 is amended or supplemented, you will learn about it through our WatchTower alerts on the same day.
This article is for informational purposes and does not constitute legal advice. For implementing regulatory requirements, we recommend consulting qualified professionals.