Reasonable foreseeability in AI harm assessment

Abstract

Regulators and courts are converging on a "reasonable foreseeability" standard for evaluating whether deploying organisations adequately anticipated potential harms prior to deployment. This paper examines how the standard is applied across regulatory domains, what it requires of deploying organisations in terms of pre-deployment harm assessment, and how it differs from both speculative harm analysis and hindsight-based harm evaluation.

The Reasonable Foreseeability Standard

The reasonable foreseeability standard originated in tort and administrative law as a mechanism for determining what harms organisations should have anticipated prior to their actions. Applied to AI deployment, the standard asks: what harms should a competent deploying organisation have identified before deploying this system?

This standard is more demanding than asking what harms the deploying organisation actually identified: it asks what harms a reasonable organisation should have identified. It is less demanding than requiring organisations to anticipate every conceivable harm, however unlikely. It occupies a middle ground: harms that are not necessarily obvious, but that a competent assessor would likely surface through deliberate evaluation.

The standard is forward-looking. It does not penalise organisations for failing to anticipate harms that became foreseeable only through events that occurred post-deployment. It does, however, penalise organisations for deploying without adequate pre-deployment assessment, even if no harms ultimately materialised.

Application Across Regulatory Domains

The reasonable foreseeability standard appears consistently across regulatory domains despite differences in sector-specific regulation. In employment discrimination law, regulators ask whether deploying organisations should have foreseen that their hiring models could exhibit disparate impact. In credit regulation, regulators ask whether deploying organisations should have foreseen lending model biases. In healthcare, regulators ask whether deploying organisations should have foreseen diagnostic model failures in underrepresented patient populations.

In each case, the analysis focuses on whether reasonable pre-deployment assessment would have surfaced the harm category. If the harm represents a known risk in the sector (e.g., bias in employment decisions, model underperformance in underrepresented populations), regulators expect deploying organisations to have conducted specific pre-deployment assessment of that risk, regardless of whether they were contractually required to do so.

The Role of Sectoral Knowledge

Application of the reasonable foreseeability standard depends heavily on sectoral knowledge. What is foreseeable depends on what competent practitioners in the sector know about risks. If healthcare practitioners know that AI diagnostic models underperform in certain patient populations, regulators will expect deploying healthcare organisations to assess that risk. If credit practitioners know that AI lending models can embed historical discrimination, regulators will expect deploying credit organisations to assess that risk.

This creates an asymmetry between organisations deploying in sectors with well-developed AI regulation and those deploying in emerging contexts. In regulated sectors, extensive case law and regulatory guidance make foreseeable harms explicit. Deploying organisations can reference specific documented risks they should have assessed. In emerging sectors, the boundary between foreseeable and unforeseeable harms is less clear, creating uncertainty but also potentially lower expectations.

However, regulators increasingly expect organisations to develop internal sectoral knowledge regarding AI risks. An organisation cannot claim that a harm was unforeseeable if that harm is documented in academic literature or disclosed in public case studies by peer organisations. The reasonableness standard requires proportional effort to understand risks documented in the sector.

Scope of Required Assessment

The reasonable foreseeability standard does not require deploying organisations to conduct comprehensive assessment of all conceivable harms. Rather, it requires targeted assessment of harms that are:

(1) Documented in the sector as risks associated with similar systems;
(2) Relevant to the deploying organisation's system based on its application domain and exposed populations;
(3) Identifiable through assessment methodologies that are established and available to the deploying organisation;
(4) Potentially material based on the system's decision-making authority and exposure scale.
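The four criteria operate conjunctively: a harm category falls within the required scope of assessment only when all four hold. As an informal illustration only, the screen can be sketched as a simple predicate; the names (`HarmCategory`, `requires_assessment`) are invented for this sketch and do not come from any regulatory framework.

```python
from dataclasses import dataclass

@dataclass
class HarmCategory:
    """Hypothetical record for screening a harm category against the four criteria."""
    name: str
    documented_in_sector: bool                  # (1) known risk for similar systems
    relevant_to_system: bool                    # (2) matches application domain / exposed populations
    assessable_with_established_methods: bool   # (3) methodologies exist and are available
    potentially_material: bool                  # (4) material given decision authority and exposure scale

def requires_assessment(h: HarmCategory) -> bool:
    """A harm category is in scope only if all four criteria hold."""
    return (h.documented_in_sector
            and h.relevant_to_system
            and h.assessable_with_established_methods
            and h.potentially_material)

# The paper's own contrast: hiring bias is in scope for an employment
# decision system, but not for an internal resource scheduler.
hiring_bias = HarmCategory("disparate impact in hiring", True, True, True, True)
scheduler_harm = HarmCategory("hiring-specific harm from internal scheduler",
                              True, False, True, False)

print(requires_assessment(hiring_bias))   # True
print(requires_assessment(scheduler_harm))  # False
```

The conjunctive structure reflects the proportionality of the standard: a harm that is documented in the sector but irrelevant to the system, or relevant but not assessable with any established methodology, does not trigger the assessment expectation.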

An organisation deploying an employment decision system should assess bias in hiring recommendations because employment discrimination is well-documented as an AI risk. The same organisation deploying an internal resource scheduling system may face lower expectations regarding hiring-specific harms because the system does not directly influence employment decisions.

Documentation and Defensibility

A deployment decision is defensible under the reasonable foreseeability standard if the deploying organisation can document that it:

(1) Identified material harm categories relevant to the system and sector;
(2) Conducted pre-deployment assessment of those harm categories using appropriate methodologies;
(3) Evaluated whether identified harms met thresholds requiring control measures or deployment restrictions;
(4) Implemented controls proportional to identified risks;
(5) Documented this assessment process as evidence of deliberate pre-deployment reasoning.
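The five elements function as a completeness check on the deployment record: a gap in any one of them weakens defensibility. As a rough illustration, with element names invented for this sketch rather than drawn from any prescribed record format, the check might look like:

```python
# Hypothetical defensibility checklist; field names are illustrative only.
REQUIRED_ELEMENTS = [
    "harm_categories_identified",        # (1)
    "assessment_methodologies_applied",  # (2)
    "threshold_evaluation",              # (3)
    "controls_implemented",              # (4)
    "process_documentation",             # (5)
]

def missing_elements(record: dict) -> list:
    """Return the documentation elements that are absent or empty in a record."""
    return [e for e in REQUIRED_ELEMENTS if not record.get(e)]

record = {
    "harm_categories_identified": ["disparate impact in hiring"],
    "assessment_methodologies_applied": ["demographic outcome testing"],
    "threshold_evaluation": "control measures required above agreed impact threshold",
    "controls_implemented": [],  # gap: no controls documented
    "process_documentation": "pre-deployment assessment memo",
}

print(missing_elements(record))  # ['controls_implemented']
```

A record that passes such a check does not by itself establish that the assessment was adequate; it establishes only that each element of deliberate pre-deployment reasoning is evidenced.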

If a harm the organisation did not anticipate emerges post-deployment, regulatory investigation will focus on whether reasonable pre-deployment assessment should have identified that harm category. If the harm is documented in sector literature or disclosed in case studies available at the time of deployment, the burden shifts to the deploying organisation to explain why its assessment did not surface the foreseeable harm.

Implications for Pre-Deployment Practice

The reasonable foreseeability standard creates requirements for deploying organisations to conduct deliberate, documented pre-deployment harm assessment. This assessment must be proportional to the identified risks but also credible — it cannot be pro forma documentation created to satisfy compliance requirements while actual decision-making proceeds independently.

The standard also creates incentives for organisations to develop sectoral knowledge regarding AI risks. An organisation that participates in industry consortia, reviews regulatory guidance, and studies case studies from peer organisations can more credibly demonstrate that it identified and assessed foreseeable harms. An organisation that claims ignorance of documented sectoral risks will find itself in a difficult position if a post-deployment investigation occurs.

Additionally, the standard creates incentives for transparent harm identification. If an organisation identifies a foreseeable harm during pre-deployment assessment and concludes it can manage that harm through controls, it should document that conclusion and the reasoning supporting it. If instead the organisation suppresses harm identification or treats assessment results as internal only, regulators will view that practice unfavourably.