In June 2024, the European Union formally adopted the world’s first comprehensive law to regulate artificial intelligence: the EU AI Act. This sweeping legislation aims to create a legal framework for trustworthy AI, promoting innovation while mitigating risks to human rights, safety, and democratic values. As financial institutions, technology firms, and policymakers digest its implications, a broader question emerges: how does the EU’s regulatory stance compare with that of other major economies such as the United States, the United Kingdom, and China?
This article explores the core provisions of the EU AI Act, its anticipated impact on financial services, timelines for implementation, and how it diverges from regulatory trends in other key markets.
The EU AI Act classifies AI systems based on risk level, with obligations scaled accordingly:
- Unacceptable risk: practices banned outright, such as social scoring by public authorities and manipulative techniques that exploit vulnerabilities.
- High risk: systems with significant implications for safety or fundamental rights (for example, credit scoring, biometric identification, or employment screening), subject to strict requirements before and after deployment.
- Limited risk: systems such as chatbots, subject mainly to transparency obligations so that users know they are interacting with AI.
- Minimal risk: the large majority of applications (for example, spam filters), which face no new obligations.
A key innovation in the Act is its treatment of general-purpose AI (GPAI) models, such as GPT-4 or Claude: a model is deemed to pose “systemic risk” when its scale and capabilities could have broad societal impact, which the Act presumes once training compute exceeds a set threshold (10^25 floating-point operations). GPAI models with systemic risk are subject to robust documentation, model evaluations, risk mitigation plans, and incident-reporting requirements.
Financial institutions operating in the EU must review all AI use cases, especially in credit assessments, KYC/AML systems, and fraud detection, and align them with the Act’s governance, transparency, and human-in-the-loop mandates.
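As a rough illustration of what such a review can produce, the sketch below tags a handful of hypothetical use cases with the Act’s risk tiers and the controls this article highlights (governance, transparency, human-in-the-loop). The use-case names, tier assignments, and control labels are illustrative assumptions, not legal determinations; real classification must follow the Act’s annexes and legal advice.

```python
from enum import Enum

# Risk tiers defined by the EU AI Act (unacceptable, high, limited, minimal).
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory of financial-services use cases, mapped to a tier and
# to the controls discussed above. Illustrative only.
AI_USE_CASE_REGISTER = {
    "retail_credit_scoring":    {"tier": RiskTier.HIGH,
                                 "controls": ["governance", "transparency", "human_oversight"]},
    "kyc_identity_matching":    {"tier": RiskTier.HIGH,
                                 "controls": ["governance", "transparency", "human_oversight"]},
    "customer_service_chatbot": {"tier": RiskTier.LIMITED,
                                 "controls": ["transparency"]},
    "internal_document_search": {"tier": RiskTier.MINIMAL,
                                 "controls": []},
}

def high_risk_use_cases(register: dict) -> list[str]:
    """Return the use cases that would need the Act's high-risk controls."""
    return [name for name, entry in register.items()
            if entry["tier"] is RiskTier.HIGH]

if __name__ == "__main__":
    print(high_risk_use_cases(AI_USE_CASE_REGISTER))
    # ['retail_credit_scoring', 'kyc_identity_matching']
```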
The AI Act entered into force on 1 August 2024, twenty days after publication in the Official Journal, triggering a phased rollout. Below are the major implementation milestones:
Date | Milestone |
---|---|
August 2024 | EU AI Act enters into force (20 days after publication in the Official Journal on 12 July 2024). |
February 2025 | Prohibitions on unacceptable-risk AI practices (e.g., social scoring, manipulative techniques) begin to apply. |
August 2025 | Obligations for general-purpose AI models (e.g., transparency, documentation) begin to apply. |
August 2026 | Core provisions for high-risk AI systems become enforceable across most sectors. |
2027 onwards | Remaining high-risk obligations phase in, with continuous oversight, audits, and penalties by national authorities and the EU AI Office. |
The EU has also established an AI Office within the European Commission to coordinate enforcement and offer guidance. Depending on how their AI systems are classified, companies have between six and 36 months from entry into force to achieve compliance.
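For teams planning remediation roadmaps, those phase-in windows reduce to simple date arithmetic. The sketch below applies the 6/12/24/36-month offsets to the 1 August 2024 entry-into-force date; it illustrates the arithmetic only (the Act itself fixes the exact application dates, e.g., 2 February 2025 for the prohibitions), and the milestone labels are shorthand, not official terms.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # 20 days after Official Journal publication

# Phase-in offsets in months, as summarised in the timeline above.
PHASE_IN_MONTHS = {
    "prohibited_practices": 6,            # bans on unacceptable-risk AI
    "gpai_obligations": 12,               # general-purpose AI transparency and documentation
    "high_risk_annex_iii": 24,            # most high-risk systems, incl. credit scoring
    "high_risk_regulated_products": 36,   # AI embedded in regulated products
}

def add_months(start: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for simplicity)."""
    month_index = start.month - 1 + months
    year = start.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(start.day, 28))

for milestone, months in PHASE_IN_MONTHS.items():
    print(f"{milestone}: applies from {add_months(ENTRY_INTO_FORCE, months)}")
# prohibited_practices: applies from 2025-02-01
# gpai_obligations: applies from 2025-08-01
# high_risk_annex_iii: applies from 2026-08-01
# high_risk_regulated_products: applies from 2027-08-01
```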
Unlike sector-specific rules such as PSD2 for payments or MiFID II for markets, the EU AI Act is sector-agnostic: it focuses on use cases, not on the industry deploying them. In financial services, a fraud detection tool using biometric risk scoring or a credit approval engine built on unexplainable models may fall into the “high-risk” category.
To comply, banks and insurers must:
- inventory their AI systems and classify each against the Act’s risk tiers;
- establish risk management, data governance, and technical documentation for high-risk systems;
- ensure logging, user transparency, and meaningful human oversight;
- complete conformity assessments and register high-risk systems before deploying them in the EU.
Critically, the Act’s interoperability with existing EU laws, like GDPR, DORA, and the Digital Services Act, is essential for coherent governance. Firms must integrate compliance teams across legal, risk, and IT to manage overlapping obligations.
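One hypothetical way to keep those overlapping regimes visible is to record, for each AI use case, which frameworks are likely to apply and which team owns the response. The record layout, flags, and regime descriptions below are illustrative assumptions for a sketch, not a statement of how these laws allocate responsibility.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Minimal record tying one AI system to the regimes that may touch it."""
    name: str
    processes_personal_data: bool      # GDPR exposure
    is_critical_ict_service: bool      # DORA exposure (operational resilience)
    is_high_risk_under_ai_act: bool    # EU AI Act high-risk classification
    owners: dict = field(default_factory=dict)

def applicable_regimes(use_case: AIUseCase) -> list[str]:
    """Rough, illustrative mapping of a use case to overlapping EU regimes."""
    regimes = []
    if use_case.is_high_risk_under_ai_act:
        regimes.append("EU AI Act (high-risk obligations)")
    if use_case.processes_personal_data:
        regimes.append("GDPR (lawful basis, DPIA, data-subject rights)")
    if use_case.is_critical_ict_service:
        regimes.append("DORA (ICT risk management, incident reporting)")
    return regimes

credit_engine = AIUseCase(
    name="retail_credit_approval",
    processes_personal_data=True,
    is_critical_ict_service=True,
    is_high_risk_under_ai_act=True,
    owners={"legal": "privacy counsel", "risk": "model risk team", "it": "platform engineering"},
)

print(applicable_regimes(credit_engine))
```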
a) United States: Sectoral, Principles-Based, and Market-Led
The US lacks a centralized AI law. Instead, its approach is decentralized and reactive: sectoral regulators such as the FTC, SEC, and CFPB issue guidance or pursue enforcement actions, while NIST publishes voluntary standards and frameworks.
Key developments include:
- the White House Blueprint for an AI Bill of Rights (2022), a non-binding statement of principles;
- the NIST AI Risk Management Framework (2023), a voluntary framework widely referenced by financial institutions;
- the October 2023 Executive Order on Safe, Secure, and Trustworthy AI, directing federal agencies to develop AI safety and reporting standards;
- FTC and CFPB guidance and enforcement on deceptive AI claims and on adverse-action notices for algorithmic credit decisions;
- emerging state-level laws, such as Colorado’s 2024 AI Act targeting algorithmic discrimination.
Unlike the EU’s prescriptive obligations, the US relies on principles-based self-regulation, encouraging innovation while addressing harms after the fact. Critics argue this approach lags in protecting civil liberties, but supporters claim it avoids overregulation.
b) United Kingdom: Agile, Pro-Innovation, and Regulator-Led
Post-Brexit, the UK has charted a light-touch but principles-based AI governance strategy. The 2023 white paper, “A pro-innovation approach to AI regulation”, sets out five cross-sector principles:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance;
- contestability and redress.
Rather than enacting a horizontal law, the UK empowers existing regulators (like the FCA for financial services) to apply these principles to sectoral AI use cases.
This approach prioritizes regulatory agility and proportionality, but lacks the enforceability of the EU Act. A 2024 consultation may yield new coordination mechanisms, but not a binding AI Act.
c) China: State-Centric and Security-Focused
China’s AI governance combines centralized control, national security, and ideological alignment. Its framework includes:
- the 2022 provisions on algorithmic recommendation services, which require algorithm filing with the Cyberspace Administration of China (CAC);
- the 2023 deep synthesis rules, which govern deepfakes and mandate labelling of synthetic media;
- the 2023 interim measures for generative AI services, which require security assessments and content controls aligned with state values.
In finance, Chinese regulators focus heavily on algorithmic accountability in online lending, robo-advisory, and insurance pricing.
China’s approach is compliance-heavy, state-led, and politically aligned. It has strong enforcement tools but limited transparency for external observers.
The divergence in global AI regulation poses a risk of regulatory fragmentation for multinational firms. Compliance costs could escalate if AI systems must be redesigned to meet multiple, sometimes conflicting, regimes.
Some convergence is emerging:
- the OECD AI Principles and the G7 Hiroshima AI Process provide shared, if non-binding, reference points;
- the Council of Europe’s 2024 Framework Convention on AI is the first binding international treaty linking AI to human rights and the rule of law;
- international standards such as ISO/IEC 42001 for AI management systems give firms a common governance baseline;
- EU-US cooperation through the Trade and Technology Council is working toward shared terminology and risk taxonomies.
Still, without a global consensus, firms must adopt a “compliance-by-design” approach, embedding AI risk controls into the development lifecycle across jurisdictions.
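In practice, compliance-by-design often takes the form of release gates in the model development pipeline: a deployment is blocked until the expected governance evidence exists. The snippet below is a minimal sketch of that idea; the artefact names and the check itself are hypothetical, and a real pipeline would integrate this with existing CI/CD and model-risk tooling.

```python
from pathlib import Path

# Hypothetical evidence artefacts a high-risk model release might be expected
# to carry before deployment; names are illustrative, not mandated by the Act.
REQUIRED_EVIDENCE = [
    "risk_assessment.md",        # documented risk analysis and mitigations
    "training_data_summary.md",  # data governance and provenance notes
    "model_card.md",             # transparency documentation
    "human_oversight_plan.md",   # who can intervene, and how
]

def release_gate(evidence_dir: str) -> bool:
    """Fail the release if any expected governance artefact is missing."""
    missing = [name for name in REQUIRED_EVIDENCE
               if not (Path(evidence_dir) / name).exists()]
    if missing:
        print(f"Release blocked; missing evidence: {missing}")
        return False
    print("All governance evidence present; release may proceed.")
    return True

if __name__ == "__main__":
    release_gate("./release_evidence")
```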
AI governance is no longer optional for banks, insurers, and asset managers. With the EU AI Act’s high-risk obligations becoming enforceable by 2026, and with increasing pressure from national regulators and ESG-conscious investors, institutions are compelled to:
- inventory and classify their AI systems against the Act’s risk tiers;
- strengthen model governance, documentation, and human oversight for high-risk use cases;
- embed compliance checks into the AI development lifecycle rather than bolting them on at deployment;
- train legal, risk, and technology teams to manage overlapping obligations across jurisdictions.
Ultimately, compliance can evolve from a burden into a competitive differentiator, a sign of operational maturity, ethical leadership, and digital trust.
The EU AI Act represents a bold attempt to shape the future of AI governance through democratic oversight and rights-based values. While other jurisdictions adopt looser, innovation-first models, the EU’s leadership will likely influence global norms, especially for firms seeking access to the European market.
Financial services firms must now navigate a complex terrain of cross-border compliance, reputational scrutiny, and technological innovation. Those who succeed will treat AI governance not merely as a regulatory obligation but as a pillar of trustworthy finance in the algorithmic age.
Need help? Connect with our BFS experts to learn about the EU AI Act, its key provisions, impact on financial services, implementation timeline, and how it differs from global regulations.