
Governing the Algorithms: The EU AI Act and Global Regulatory Divergence

Written by Sanjiv Roy | Jul 1, 2025 3:17:40 PM

In June 2024, the European Union formally adopted the world’s first comprehensive law to regulate artificial intelligence: the EU AI Act. This sweeping legislation aims to create a legal framework for trustworthy AI, promoting innovation while mitigating risks to human rights, safety, and democratic values. As financial institutions, technology firms, and policymakers digest its implications, a broader question emerges: how does the EU’s regulatory stance compare with that of other major economies such as the United States, the United Kingdom, and China?

This article explores the core provisions of the EU AI Act, its anticipated impact on financial services, timelines for implementation, and how it diverges from regulatory trends in other key markets.

The EU AI Act: A Risk-Based Blueprint

The EU AI Act classifies AI systems based on risk levels, ranging from unacceptable and high-risk to limited and minimal-risk use cases:

  • Unacceptable risk AI systems (e.g., social scoring or real-time facial recognition in public spaces) are banned.
  • High-risk systems (used in areas like credit scoring, hiring, law enforcement, and healthcare) must comply with stringent requirements around transparency, data quality, human oversight, and cybersecurity.
  • Limited-risk systems (e.g., chatbots) require user disclosures.
  • Minimal-risk systems (e.g., AI-enabled spam filters) are largely unregulated.
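
To make the taxonomy concrete, here is a rough sketch of how a firm might encode it in an internal inventory. The tier names follow the Act; the example use-case mapping and the default-to-high-risk rule are our own illustrative assumptions, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act, from most to least restricted."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict transparency, data-quality, oversight and security duties"
    LIMITED = "user-disclosure obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping only; real classification requires legal analysis of
# the Act's prohibited-practices list and its high-risk annex.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the tier for a known use case; default unknown systems to
    HIGH so they get reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("credit_scoring").name)  # HIGH
print(classify("spam_filter").name)     # MINIMAL
```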

A key innovation in the Act is its treatment of general-purpose AI (GPAI) models, such as GPT-4 or Claude: models deemed to pose “systemic risk” because of their scale and potential societal impact face additional obligations, including robust documentation, model evaluations, risk-mitigation plans, and serious-incident reporting.
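
The Act anchors this designation in a compute threshold: a GPAI model is presumed to pose systemic risk once its training compute exceeds 10^25 floating-point operations (Article 51), alongside other designation routes. A minimal sketch of that presumption check, with everything beyond the threshold itself illustrative:

```python
# Presumption threshold for GPAI systemic risk under the EU AI Act (Art. 51).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def is_presumed_systemic(training_flops: float) -> bool:
    """True if a GPAI model's training compute triggers the Act's
    presumption of systemic risk (the Commission can also designate
    models directly, so this check alone is not conclusive)."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(is_presumed_systemic(3e25))  # True: evaluations and incident reporting apply
print(is_presumed_systemic(5e23))  # False: baseline GPAI transparency duties still apply
```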

Financial institutions operating in the EU must review all AI use cases, especially in credit assessments, KYC/AML systems, and fraud detection, and align them with the Act’s governance, transparency, and human-in-the-loop mandates.

Key Timelines for the EU AI Act

The AI Act entered into force on 1 August 2024, twenty days after its publication in the Official Journal on 12 July 2024, triggering a phased rollout. Below are the major implementation milestones:

Date | Milestone
August 2024 | EU AI Act enters into force.
February 2025 | Prohibitions on unacceptable-risk AI systems (e.g., social scoring, manipulative techniques) begin to apply.
August 2025 | Obligations for general-purpose AI models (e.g., transparency, documentation) begin to apply.
August 2026 | Core provisions for high-risk AI systems become enforceable across most sectors.
August 2027 onwards | Extended transition ends for high-risk AI embedded in regulated products; continuous oversight, audits, and penalties by national authorities and the EU AI Office.

The EU has also established an AI Office within the European Commission to coordinate enforcement and offer guidance. Depending on their AI system classification, companies have between 6 and 36 months from entry into force to achieve compliance.
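
For planning purposes, those transition periods can be derived mechanically from the entry-into-force date. A purely illustrative sketch (the Act’s exact application dates fall on the 2nd of the month; here days are clamped to the 1st for simplicity):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def months_later(start: date, months: int) -> date:
    """Add whole months to a date (day clamped to the 1st for simplicity)."""
    years, month_index = divmod(start.month - 1 + months, 12)
    return date(start.year + years, month_index + 1, 1)

# Transition periods by obligation class, in months after entry into force.
TRANSITIONS = {
    "prohibited practices": 6,
    "GPAI obligations": 12,
    "high-risk systems": 24,
    "high-risk AI in regulated products": 36,
}

for obligation, months in TRANSITIONS.items():
    print(f"{obligation}: applies from ~{months_later(ENTRY_INTO_FORCE, months)}")
```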

Financial Services: Regulated by Use Case, Not Sector

Unlike sector-specific rules such as PSD2 for payments or MiFID II for markets, the EU AI Act is sector-agnostic: it focuses on use cases, not the industry deploying them. In financial services, a fraud detection tool using biometric risk scoring or a credit approval engine built on unexplainable models may fall under the “high-risk” category.

To comply, banks and insurers must:

  • Conduct AI impact assessments before deployment.
  • Ensure explainability and auditability of models.
  • Maintain technical documentation for inspections.
  • Implement human oversight mechanisms.

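One way to operationalize these four duties is a structured pre-deployment record that blocks release until each obligation is evidenced. A minimal sketch, in which the field names and pass criteria are illustrative assumptions rather than regulatory requirements:

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentAssessment:
    """Pre-deployment record for a high-risk AI system (fields illustrative)."""
    system_name: str
    impact_assessment_done: bool = False   # documented AI impact assessment
    explainability_evidence: str = ""      # e.g. link to feature-attribution reports
    technical_docs_location: str = ""      # documentation kept ready for inspection
    human_oversight_defined: bool = False  # named reviewer plus override process
    issues: list = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """Collect any gaps; the system may ship only when none remain."""
        self.issues = [
            msg for ok, msg in [
                (self.impact_assessment_done, "missing impact assessment"),
                (bool(self.explainability_evidence), "no explainability evidence"),
                (bool(self.technical_docs_location), "technical documentation not filed"),
                (self.human_oversight_defined, "no human oversight mechanism"),
            ] if not ok
        ]
        return not self.issues

assessment = DeploymentAssessment(system_name="retail-credit-scoring-v2")
if not assessment.ready_to_deploy():
    print("Blocked:", "; ".join(assessment.issues))
```
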
Critically, the Act must interoperate with existing EU laws such as GDPR, DORA, and the Digital Services Act; coherent governance therefore requires firms to integrate compliance teams across legal, risk, and IT to manage overlapping obligations.

Global Contrasts: US, UK, China Take Divergent Paths

a) United States: Sectoral, Principles-Based, and Market-Led

The US lacks a centralized AI law. Instead, its approach is decentralized and reactive, with sectoral regulators such as the FTC, SEC, and CFPB issuing guidance or enforcement actions, and standards bodies like NIST publishing voluntary frameworks.

Key developments include:

  • Executive Order 14110 on AI (Oct 2023) mandated safety and civil rights considerations in federal AI deployments (rescinded in January 2025).
  • NIST AI Risk Management Framework (2023) provides voluntary governance standards for trustworthy AI.
  • Financial regulators like the OCC and Federal Reserve are assessing AI-related risks but have not yet mandated specific compliance frameworks.

Unlike the EU’s prescriptive obligations, the US relies on principles-based self-regulation, encouraging innovation while addressing harm after the fact. Critics argue this approach lags in protecting civil liberties; supporters counter that it avoids overregulation.

b) United Kingdom: Agile, Pro-Innovation, and Regulator-Led

Post-Brexit, the UK has charted a light-touch but principles-based AI governance strategy. The 2023 white paper, “A pro-innovation approach to AI regulation”, sets out five cross-sector principles:

  1. Safety, security, and robustness.
  2. Transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

Rather than enacting a horizontal law, the UK empowers existing regulators (like the FCA for financial services) to apply these principles to sectoral AI use cases.

This approach prioritizes regulatory agility and proportionality, but lacks the enforceability of the EU Act. A 2024 consultation may yield new coordination mechanisms, but not a binding AI Act.

c) China: State-Centric and Security-Focused

China’s AI governance combines centralized control, national security, and ideological alignment. Its framework includes:

  • Algorithm Regulation (2022) requires transparency and oversight for recommendation systems.
  • Generative AI Provisions (2023) mandate content moderation, watermarking, and identity verification.
  • Data Security Law and Personal Information Protection Law (PIPL) ensure data localization and privacy.

In finance, Chinese regulators focus heavily on algorithmic accountability in online lending, robo-advisory, and insurance pricing.

China’s approach is compliance-heavy, state-led, and politically aligned. It has strong enforcement tools but limited transparency for external observers.

Interoperability and the Risk of Regulatory Fragmentation

The divergence in global AI regulation poses a risk of regulatory fragmentation for multinational firms. Compliance costs could escalate if AI systems must be redesigned to meet multiple, sometimes conflicting, regimes.

Some convergence is emerging:

  • OECD, G7, and G20 have published AI principles aligning on fairness, transparency, and safety.
  • The EU-US Trade and Technology Council (TTC) is working toward shared definitions and standards.
  • ISO/IEC and IEEE are developing technical standards to harmonize model testing and risk management.

Still, without a global consensus, firms must adopt a “compliance-by-design” approach, embedding AI risk controls into the development lifecycle across jurisdictions.
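
In practice, compliance-by-design can take the form of a jurisdiction-aware gate in the model release pipeline. A hedged sketch, in which the per-market rule sets are illustrative placeholders, not real legal requirements:

```python
# Illustrative jurisdiction checklists: each maps a market to the controls a
# model must evidence before release there. Real rule sets would be maintained
# by legal and compliance teams, not hard-coded.
JURISDICTION_CONTROLS = {
    "EU": {"impact_assessment", "human_oversight", "technical_docs"},
    "UK": {"fairness_review", "accountability_owner"},
    "US": {"model_risk_review"},  # e.g. an SR 11-7-style MRM sign-off
}

def release_gaps(evidenced: set, markets: list) -> dict:
    """Return, per target market, the controls still missing before release."""
    return {
        market: missing
        for market in markets
        if (missing := JURISDICTION_CONTROLS[market] - evidenced)
    }

gaps = release_gaps({"impact_assessment", "model_risk_review"}, ["EU", "US"])
print(gaps or "clear to release")  # e.g. {'EU': {'human_oversight', 'technical_docs'}}
```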

Strategic Implications for Financial Institutions

AI governance is no longer optional for banks, insurers, and asset managers. With the EU AI Act already in force and its high-risk obligations applying from August 2026, along with increasing pressure from national regulators and ESG-conscious investors, institutions must:

  • Map AI use cases and assess regulatory exposure.
  • Set up AI governance boards and internal registries (see the registry sketch after this list).
  • Invest in model explainability, bias detection, and data lineage tools.
  • Integrate AI compliance with model risk management (MRM) and operational resilience frameworks.
  • Collaborate with regtech providers and industry consortia to share best practices.
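
The internal registry mentioned above can start as little more than a typed table mapping each system to its risk tier, accountable owner, and deployment footprint. A toy sketch with illustrative field names and entries:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    """One row in an internal AI system registry (fields illustrative)."""
    system: str
    business_use: str
    risk_tier: str            # tier under the EU AI Act taxonomy
    owner: str                # accountable contact for the governance board
    jurisdictions: tuple      # markets where the system is deployed

REGISTRY = [
    RegistryEntry("fraud-scorer", "transaction fraud detection",
                  "high", "risk-ops", ("EU", "UK")),
    RegistryEntry("faq-bot", "customer support chatbot",
                  "limited", "digital-channels", ("EU",)),
]

# Regulatory-exposure view: every high-risk system deployed in the EU.
eu_high_risk = [e.system for e in REGISTRY
                if e.risk_tier == "high" and "EU" in e.jurisdictions]
print(eu_high_risk)  # ['fraud-scorer']
```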

Ultimately, compliance can evolve from a burden into a competitive differentiator: a sign of operational maturity, ethical leadership, and digital trust.

Conclusion: The New Regulatory Battlefield

The EU AI Act represents a bold attempt to shape the future of AI governance through democratic oversight and rights-based values. While other jurisdictions adopt looser, innovation-first models, the EU’s leadership will likely influence global norms, especially for firms seeking access to the European market.

Financial services firms must now navigate a complex terrain of cross-border compliance, reputational scrutiny, and technological innovation. Those who succeed will treat AI governance not merely as a regulatory obligation but as a pillar of trustworthy finance in the algorithmic age.

Need help? Connect with our BFS experts to learn about the EU AI Act, its key provisions, impact on financial services, implementation timeline, and how it differs from global regulations.