EU AI Act risk classification: implications for pre-deployment obligations

Abstract

The EU AI Act's tiered risk framework introduces a fundamental shift in how organisations must evaluate AI deployment decisions. Rather than focusing solely on technical performance metrics, the Act mandates that deployers of high-risk systems provide comprehensive documentation of decision rationale prior to deployment. This paper examines how the Act's risk classification categories (unacceptable/prohibited, high-risk, limited-risk, minimal-risk) translate into specific pre-deployment obligations that exceed standard AI governance practice.

Regulatory Framework Overview

The EU AI Act establishes a tiered approach to AI system regulation based on assessed risk level. Systems posing unacceptable risk (e.g., certain real-time biometric identification applications) are prohibited outright and cannot be deployed under any circumstances. High-risk systems face the most demanding compliance framework and include applications in critical infrastructure, employment, education, and law enforcement.

The Act defines high-risk systems not only by application domain but also by the nature of potential harms. An AI system that could cause material harm to fundamental rights or safety qualifies as high-risk regardless of sector. This definition captures systems that fall outside traditionally regulated domains yet pose equivalent harm potential.
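The tiered taxonomy described above can be sketched as a triage function. This is an illustrative sketch only, not a legal determination: the tier names, the `HIGH_RISK_DOMAINS` set, and the flag parameters are simplifications assumed for this example, whereas the Act itself enumerates prohibited practices and high-risk use cases in its articles and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers discussed above (labels are illustrative)."""
    PROHIBITED = "unacceptable"  # prohibited practices, cannot be deployed
    HIGH = "high"                # most demanding compliance framework
    LIMITED = "limited"          # transparency obligations
    MINIMAL = "minimal"          # no additional obligations

# Hypothetical shorthand for the domains named in the text; the Act's
# actual high-risk list is longer and more precisely scoped.
HIGH_RISK_DOMAINS = {"critical_infrastructure", "employment",
                     "education", "law_enforcement"}

def classify(domain: str,
             prohibited_practice: bool = False,
             material_harm_potential: bool = False,
             transparency_obligation: bool = False) -> RiskTier:
    """Illustrative first-pass triage, mirroring the text's logic:
    prohibited practices are ruled out first; a system is high-risk
    either by domain or by harm potential, regardless of sector."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS or material_harm_potential:
        return RiskTier.HIGH
    if transparency_obligation:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note how the sector-independent `material_harm_potential` branch encodes the point above: a system outside the listed domains can still be high-risk.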

Conformity Assessment Requirements

For high-risk systems, the Act mandates conformity assessment before deployment. This is not a post-deployment audit but a pre-deployment evaluation that must be completed and documented before the system becomes operational. The conformity assessment must address multiple dimensions simultaneously.

Technical documentation must evidence that the system meets the Act's requirements for accuracy, robustness, cybersecurity, and governance. This documentation cannot rely on theoretical claims about performance. Rather, it must demonstrate through empirical validation that the system meets defined standards under operational conditions.

Risk assessment documentation must identify foreseeable harms, evaluate their likelihood and severity, and demonstrate that mitigation measures are adequate. This moves beyond generic risk frameworks to demand specific identification of harms the deploying organisation could cause through this system.
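The risk assessment structure above (foreseeable harms, likelihood and severity, mitigation adequacy) can be sketched as a risk-register entry. The field names, the 1-5 scales, and the adequacy threshold are assumptions for illustration; the Act does not prescribe a scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class HarmRecord:
    """One foreseeable harm in a pre-deployment risk register
    (schema and scales are illustrative, not prescribed by the Act)."""
    description: str
    likelihood: int           # 1 (rare) .. 5 (near-certain)
    severity: int             # 1 (negligible) .. 5 (severe)
    mitigation: str
    residual_likelihood: int  # likelihood after mitigation
    residual_severity: int    # severity after mitigation

    def inherent_score(self) -> int:
        # Risk before mitigation: likelihood x severity.
        return self.likelihood * self.severity

    def residual_score(self) -> int:
        # Risk remaining after the mitigation is applied.
        return self.residual_likelihood * self.residual_severity

    def mitigation_adequate(self, threshold: int = 6) -> bool:
        # Illustrative adequacy rule: residual risk must fall below
        # a threshold the governance body has agreed in advance.
        return self.residual_score() < threshold
```

The point of the structure is the one the text makes: each harm is named specifically, evaluated on likelihood and severity, and its mitigation is tested for adequacy rather than asserted generically.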

Decision Rationale Documentation

The Act's requirement for documented decision rationale represents a departure from standard AI governance practice. Many current AI initiatives document technical performance and business justification but do not systematically document the reasoning by which decision-makers concluded deployment was defensible.

Under the Act, this gap becomes unacceptable. Organisations must document: (1) what decision-makers understood about the system's capabilities and limitations, (2) what risks were identified and how they were evaluated, (3) what controls or mitigations were deemed necessary, (4) what assumptions about the deployment context underpin the decision to proceed, (5) what conditions trigger reassessment or pause of operations.
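The five documentation items above map naturally onto a structured record. The schema below is a hypothetical sketch of how an organisation might capture them internally; the Act requires the documentation, not this particular shape.

```python
from dataclasses import dataclass

@dataclass
class DeploymentDecisionRecord:
    """One field per documentation item (1)-(5) above; names are illustrative."""
    capabilities_and_limitations: str  # (1) what decision-makers understood
    risks_identified: list[str]        # (2) risks and how they were evaluated
    required_controls: list[str]       # (3) controls or mitigations deemed necessary
    context_assumptions: list[str]     # (4) deployment-context assumptions
    reassessment_triggers: list[str]   # (5) conditions triggering reassessment or pause

    def is_complete(self) -> bool:
        # A record missing any of the five items would not satisfy
        # the documentation requirement described above.
        return bool(self.capabilities_and_limitations) and all(
            [self.risks_identified, self.required_controls,
             self.context_assumptions, self.reassessment_triggers])
```

A completeness check like this supports the internal function discussed next: forcing the reasoning to be written down before deployment, where gaps become visible.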

This documentation serves two functions simultaneously. Internally, it requires deliberate pre-commitment reasoning that can surface flawed assumptions or overlooked risks. Externally, it provides evidence of institutional decision quality that regulators can evaluate if the deployment subsequently becomes subject to investigation or enforcement action.

Evidentiary Burden Shift

The Act effectively inverts the evidentiary burden: rather than problems having to be evidenced after deployment, organisations must affirmatively demonstrate compliance before deployment. This is a material change in how deployment decisions are evaluated.

Under the historical model, an organisation could deploy a system and argue that if problems emerge, they can be addressed through monitoring and correction. Under the Act, an organisation cannot make that argument. It must demonstrate before deployment that defined conformity standards have been met and that risk mitigation is adequate.

This shift affects deployment timelines and decision authority. Deploying organisations cannot relegate compliance assessment to post-deployment phases. Conformity assessment and decision documentation must be completed and approved before operational deployment occurs. In many cases this adds months to deployment timelines and requires approval from governance structures with actual authority to halt deployment based on assessment findings.

Implications for Organisational Governance

The Act's conformity assessment framework creates mandatory governance touchpoints that many organisations lack. Compliance requires that someone with appropriate authority makes an affirmative decision based on documented assessment that deployment should proceed. This decision-maker must be in a position to understand the assessment, question the findings, and halt deployment if assessment reveals unacceptable risks.

In organisational structures where AI deployment is treated as a technical function with limited governance engagement, this requirement forces restructuring. Either existing governance bodies must develop competency in AI risk assessment, or new governance structures must be created to handle this decision authority. Either way, deployment timelines and decision authority change.

Additionally, the requirement for documented decision rationale means organisations must invest in pre-deployment assessment capacity that many currently lack. Data science teams can develop and validate models; they cannot reliably conduct organisational risk assessment and governance decision-making without institutional support structures.

Competitive Implications

The Act creates compliance asymmetries across jurisdictions. Organisations headquartered in or serving EU markets must conduct conformity assessment for high-risk systems, while organisations outside the EU regulatory scope face no equivalent requirement. In the near term, this advantages actors willing to accept regulatory risk and disadvantages risk-averse actors.

However, regulatory convergence is likely. The EU's risk-based framework is being adopted or adapted by regulators in other jurisdictions. Organisations that invest in robust pre-deployment assessment and governance now gain a capability advantage as other regulatory regimes impose similar requirements.