Cross-jurisdictional divergence in AI governance expectations
Abstract
Regulatory frameworks for AI governance are diverging across major jurisdictions, creating compounding compliance obligations for multinational organisations. The EU, United Kingdom, United States, and Asia-Pacific regions are developing distinct approaches to AI risk classification, pre-deployment assessment, documentation requirements, and accountability structures. This paper maps these divergences and examines how multinational organisations must navigate divergent, and at times conflicting, compliance requirements.
Jurisdictional Frameworks
The EU AI Act establishes a tiered risk framework with specific high-risk categories, mandatory conformity assessment, and documented decision requirements. The framework is prescriptive — it identifies which system types are high-risk and mandates specific assessment approaches for those systems.
The United Kingdom has adopted a broadly similar risk-based approach post-Brexit, but with greater flexibility in implementation. Rather than prescriptively categorising high-risk systems, the UK framework emphasises principles-based assessment, allowing regulators discretion in defining high-risk applications.
The United States has not enacted comprehensive AI regulation at the federal level. Instead, sector-specific regulation (employment, credit, healthcare) incorporates AI-specific language into existing regulatory frameworks. This creates patchwork regulation where compliance requirements vary significantly by application domain and regulated sector.
Asia-Pacific jurisdictions are developing distinct approaches. China emphasises sector-specific regulation with particular focus on content moderation and public opinion influence. Singapore and other Southeast Asian jurisdictions are developing AI governance frameworks aligned with risk-based approaches but adapted to regional priorities.
Risk Classification Divergence
The most fundamental divergence concerns what systems are subject to high-risk governance. The EU AI Act defines specific high-risk categories including systems affecting critical infrastructure safety, employment, education, and law enforcement. An employment decision system is presumptively high-risk under the EU framework.
Under US employment law, the same system is subject to anti-discrimination requirements but not necessarily to pre-deployment AI assessment requirements unless state-level employment regulation (e.g., New York City employment algorithm auditing rules) applies. The system may be subject to Equal Employment Opportunity Commission oversight if discrimination concerns emerge, but there is no mandatory pre-deployment conformity assessment.
The UK framework treats employment systems as potentially high-risk but provides regulatory discretion regarding whether conformity assessment is mandatory or advisory. Singapore treats employment decision systems as requiring responsible AI assessment but has not yet established mandatory pre-deployment assessment as a regulatory requirement.
These divergences mean a multinational organisation cannot develop a single compliance approach. A system compliant with EU mandatory pre-deployment assessment may face no equivalent requirement in the US, creating asymmetric compliance investment. Conversely, a system compliant with US employment discrimination law may be inadequate under the UK or EU frameworks.
Assessment Methodology Divergence
Jurisdictions differ in what assessment methodologies they recognise as adequate. The EU AI Act references specific technical standards and conformity assessment procedures. Organisations can reference harmonised standards and notified bodies to demonstrate compliance.
US employment law does not mandate specific assessment methodologies. Rather, it requires that employment practices not have disparate impact. An organisation can demonstrate compliance through various methodologies (disparate impact analysis, differential validity analysis, fairness metrics) but regulatory guidance is limited regarding which approaches regulators will find credible.
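Of the methodologies named above, disparate impact analysis is the most established: the EEOC's four-fifths rule treats a protected group's selection rate below 80% of the highest group's rate as evidence of adverse impact. A minimal sketch of that check follows; the group labels and outcome counts are hypothetical, and the 0.8 threshold is a regulatory rule of thumb, not a bright-line legal test.

```python
# Sketch of a four-fifths-rule disparate impact check.
# Group names and counts below are illustrative only.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its selection rate is at least
    `threshold` times the highest group's rate (EEOC four-fifths rule)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top >= threshold for g, r in rates.items()}

# Hypothetical hiring outcomes: (candidates selected, candidates total)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# → {'group_a': True, 'group_b': False}  (0.30/0.48 ≈ 0.625 < 0.8)
```

A passing check under this rule does not establish compliance; it is one screening heuristic among the several methodologies noted above, and regulators may weigh other evidence.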
UK regulation encourages assessment but provides limited guidance regarding what methodologies regulators expect. Singapore has published responsible AI frameworks but has not established mandatory assessment methodologies. This flexibility allows organisations to tailor assessment approaches but creates uncertainty regarding what regulators will find adequate.
Documentation Requirements
The EU AI Act mandates comprehensive technical documentation, risk assessment documentation, and documented decision rationale as conditions of deployment. Documentation requirements are explicit and extensive. The EU regime recognises that regulatory oversight requires evidence of pre-deployment reasoning.
The UK framework encourages documentation but does not mandate it at the scope or specificity of the EU requirement. US regulation similarly encourages documentation but does not mandate it as a condition of deployment. An organisation can deploy a system without documented pre-deployment assessment as long as the system meets anti-discrimination and sector-specific requirements.
This creates a compliance puzzle. An organisation can deploy a system in the US without extensive pre-deployment documentation. The same system deployed in the EU must be accompanied by comprehensive documentation. An organisation operating in both jurisdictions faces a choice: maintain distinct documentation practices by jurisdiction, or develop comprehensive documentation sufficient for the most demanding jurisdiction and apply it globally.
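The "most demanding jurisdiction" strategy described above amounts to taking the union of each jurisdiction's required documentation artefacts. A minimal sketch, in which the artefact names and per-jurisdiction sets are illustrative assumptions rather than a statement of the actual legal requirements:

```python
# Illustrative per-jurisdiction documentation requirements;
# artefact names are hypothetical placeholders, not legal terms.
DOC_REQUIREMENTS = {
    "EU": {"technical_docs", "risk_assessment", "decision_rationale"},
    "UK": {"risk_assessment"},
    "US": set(),  # no general pre-deployment documentation mandate
}

def global_documentation_set(jurisdictions):
    """Union of per-jurisdiction documentation requirements:
    one global documentation practice covering all markets."""
    out = set()
    for j in jurisdictions:
        out |= DOC_REQUIREMENTS[j]
    return out

print(sorted(global_documentation_set(["EU", "UK", "US"])))
# → ['decision_rationale', 'risk_assessment', 'technical_docs']
```

The union operation makes the trade-off explicit: the global set is driven entirely by the most demanding jurisdiction, so every deployment bears that jurisdiction's documentation cost regardless of where the system actually operates.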
Enforcement and Accountability Structures
The EU AI Act creates explicit enforcement authority for AI regulators within member states. Regulators can conduct audits, mandate documentation, and impose penalties for non-compliance. This creates direct accountability to regulators for pre-deployment assessment practices.
The US framework distributes enforcement across sector regulators (Federal Trade Commission for consumer protection, Equal Employment Opportunity Commission for employment, Consumer Financial Protection Bureau for credit). These regulators have enforcement authority but often only exercise it in response to complaints rather than proactively. Pre-deployment assessment practices are not subjects of routine enforcement.
The UK framework is still being clarified regarding which regulator will have primary AI governance authority. Singapore has established a responsible AI institute with advisory authority but limited enforcement capacity. These differences mean that even if an organisation commits to high-standard pre-deployment assessment globally, the risk profile for non-compliance differs significantly across jurisdictions.
Implications for Multinational Organisations
Multinational organisations deploying AI systems must navigate four compounding compliance challenges:
First, they must determine whether a given system is subject to high-risk governance under each jurisdiction's framework. An employment system may be high-risk in the EU, moderately regulated in the UK and US, and lightly regulated in Singapore. This requires jurisdiction-specific legal analysis.
Second, they must identify what assessment methodologies each jurisdiction expects. There is no single "sufficient" assessment approach. Methodologies credible in the EU may be insufficient under US anti-discrimination standards, and vice versa.
Third, they must determine documentation standards for each jurisdiction. Maintaining distinct documentation practices across jurisdictions creates compliance complexity and increases the risk of gaps; developing comprehensive global documentation sufficient for the most demanding jurisdiction adds cost and extends deployment timelines.
Fourth, they must assess enforcement risk across jurisdictions. Some jurisdictions have strong enforcement capacity; others have limited enforcement. Risk management depends not only on regulatory requirements but also on realistic assessment of enforcement likelihood in each jurisdiction.
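The four-part analysis above can be sketched as a per-jurisdiction lookup. The profiles below are a deliberately simplified caricature of the frameworks discussed in this paper (the category sets, flags, and enforcement labels are illustrative assumptions, not legal advice):

```python
from dataclasses import dataclass

@dataclass
class JurisdictionProfile:
    high_risk_categories: set    # domains presumptively high-risk
    mandatory_assessment: bool   # pre-deployment conformity assessment
    mandatory_documentation: bool
    enforcement: str             # "proactive" | "complaint-driven" | "advisory"

# Hypothetical, simplified profiles for illustration only.
PROFILES = {
    "EU": JurisdictionProfile({"employment", "credit", "law_enforcement"},
                              True, True, "proactive"),
    "US": JurisdictionProfile(set(), False, False, "complaint-driven"),
    "UK": JurisdictionProfile({"employment"}, False, False, "proactive"),
    "SG": JurisdictionProfile({"employment"}, False, False, "advisory"),
}

def obligations(system_domain, jurisdictions):
    """For one system domain, return per-jurisdiction risk status,
    pre-deployment obligations, and enforcement posture."""
    out = {}
    for j in jurisdictions:
        p = PROFILES[j]
        high_risk = system_domain in p.high_risk_categories
        out[j] = {
            "high_risk": high_risk,
            "pre_deployment_assessment": high_risk and p.mandatory_assessment,
            "documentation_required": high_risk and p.mandatory_documentation,
            "enforcement": p.enforcement,
        }
    return out

print(obligations("employment", ["EU", "US", "UK", "SG"]))
```

Even this toy structure surfaces the paper's core point: the same employment system triggers mandatory assessment and documentation in one jurisdiction and no pre-deployment obligation in another, while enforcement posture varies independently of the risk classification.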
Regulatory Convergence Prospects
Regulatory convergence is possible but not assured. The EU's risk-based framework is influencing regulatory development in other jurisdictions. The UK's alignment with EU principles suggests possible eventual regulatory harmonisation within Europe, though UK regulatory autonomy allows divergence.
The US regulatory approach remains sector-specific rather than AI-specific, making broad convergence unlikely without federal legislation. Asia-Pacific jurisdictions are developing distinct approaches reflecting regional priorities. Rather than global harmonisation, we are more likely to see regional clusters of aligned frameworks (EU and potentially UK and other European jurisdictions; US with potential influence on other Western jurisdictions; China with influence on other Asian jurisdictions).
Multinational organisations must plan for sustained jurisdictional divergence. Short-term convergence is unlikely. Organisations should develop governance structures capable of managing jurisdiction-specific compliance requirements while maintaining core assessment standards globally.