

Regulatory Frameworks for AI Agents: A Reference Guide

CompleteFlow

Deploying AI agents in regulated industries means operating under multiple overlapping regulatory frameworks. This reference maps the key regulations to their practical requirements for AI agent deployments across financial services, legal, healthcare, and insurance.

Data protection: GDPR and UK GDPR

The General Data Protection Regulation remains the foundational constraint for any AI system processing personal data in the UK or EU. For AI agents, several provisions have direct technical implications.

Article 22 restricts automated decision-making that produces legal or similarly significant effects. An AI agent that triages insurance claims, assesses creditworthiness, or routes legal matters is likely caught by this provision. In practice: human review must be available for decisions that materially affect individuals, and data subjects must be informed that automated processing is taking place.

Article 35 requires Data Protection Impact Assessments for high-risk processing. Most AI agent deployments in regulated industries qualify. The DPIA must document the processing purpose, necessity, proportionality, and risks to data subjects before the system goes live.

Data minimisation (Article 5(1)(c)) limits what data an agent can ingest. Feeding entire document repositories into an AI system to “see what it finds” is not compliant. Agents need defined data scopes tied to specific processing purposes.
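A defined data scope can be enforced mechanically rather than by policy alone. The sketch below shows one way to tie an allowlist of fields to a named processing purpose; the purpose label and field names are hypothetical, not drawn from any standard.

```python
# Sketch: data minimisation as an enforced allowlist per processing purpose.
# Purpose labels and field names are illustrative.
ALLOWED_FIELDS = {
    "claims_triage": {"claim_id", "policy_number", "incident_date", "claim_amount"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Return only the fields the stated purpose permits the agent to see."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No data scope defined for purpose: {purpose}")
    return {k: v for k, v in record.items() if k in allowed}
```

Anything not in the scope never reaches the agent, which makes the "defined data scope tied to a specific processing purpose" auditable.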

Cross-border transfer restrictions (Chapter V) affect model hosting decisions directly. If an AI agent processes UK personal data, the model infrastructure must either sit within an adequate jurisdiction or be covered by appropriate safeguards. This is why private deployment within UK data centres has become the default for regulated deployments.

Financial services: FCA and PRA

FCA Consumer Duty

The FCA’s Consumer Duty (PS22/9), effective since July 2023, requires firms to deliver good outcomes for retail customers. For AI agents handling customer-facing processes in financial services, this creates three requirements:

  • Outcome monitoring. Firms must track whether AI-driven processes produce fair outcomes across different customer groups. An agent that systematically undervalues claims from certain demographics creates a Consumer Duty breach regardless of intent.
  • Explainability. If a customer asks why a decision was made, the firm must explain it in terms the customer can understand. “The model output a low confidence score” does not meet this bar.
  • Governance. Under the Senior Managers and Certification Regime (SM&CR), a named individual is accountable for the AI system’s outputs. This requires audit trails that trace from agent decision back to data inputs, model version, and applied rules.
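Outcome monitoring of the kind the first bullet describes can start very simply: compute the outcome rate per customer group and flag material gaps. This is a minimal sketch with a hypothetical threshold; real fairness monitoring needs statistically sound metrics and sample-size checks.

```python
# Sketch: outcome monitoring across customer groups.
# The 10% gap threshold is a hypothetical example, not a regulatory figure.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates: dict, max_gap: float = 0.1) -> bool:
    """Flag when any two groups' approval rates differ by more than max_gap."""
    return max(rates.values()) - min(rates.values()) > max_gap
```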

Operational resilience (SS1/21 and SS2/21)

The PRA and FCA’s operational resilience framework requires firms to identify important business services and set impact tolerances. AI agents that form part of critical business processes (claims handling, trade execution, client onboarding) fall within scope. Firms must demonstrate they can operate within impact tolerances even when the AI system fails. That means fallback procedures and continuity planning.

MiFID II

For investment firms, MiFID II adds further requirements:

  • Best execution. If AI agents contribute to trade execution decisions, the firm must demonstrate best execution was achieved. This requires granular logging of the agent’s decision process.
  • Suitability. Agents involved in investment advice or portfolio management must assess suitability for each client. The assessment and its reasoning must be recorded.
  • Record keeping. Article 16(6) requires retention of records sufficient to demonstrate compliance. For AI agents, this means preserving inputs, outputs, the model version, configuration, and decision logic at the time of each action.
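The record-keeping bullet translates into a concrete schema: each agent action needs its inputs, output, model version, and configuration captured at the moment of the decision. The field names below are illustrative; the actual schema and retention period depend on the firm's compliance requirements.

```python
# Sketch: a minimal per-decision record for MiFID II-style record keeping.
# Field names are illustrative, not a prescribed schema.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    agent_id: str
    model_version: str
    config_hash: str   # hash of the agent configuration in force at the time
    inputs: dict       # the data the agent acted on
    output: dict       # the decision and its stated reasoning
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def persist(record: DecisionRecord) -> dict:
    """Serialise for an append-only store (sketched here as a plain dict)."""
    return asdict(record)
```

Freezing the dataclass and hashing the configuration are small choices that make the record defensible later: the stored entry cannot be mutated in place, and the exact configuration can be matched against the deployment history.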

Insurance: Lloyd’s and London Market

Lloyd’s market participants operate under FCA regulation but face additional oversight through Lloyd’s own requirements.

Lloyd’s Minimum Standards require managing agents to maintain adequate systems and controls. AI agents processing submissions, binding risks, or handling claims must be incorporated into the firm’s control framework with appropriate oversight.

Bordereau reporting to Lloyd’s requires accurate, timely data. AI agents automating bordereaux processing must produce outputs that meet Lloyd’s data standards, with validation checks and error handling that satisfy the managing agent’s obligations.

Delegated authority frameworks add complexity. Where coverholders use AI agents in the binding process, the managing agent remains responsible for oversight. This creates a governance chain that must be documented and auditable.

Legal services: SRA Standards and Regulations

The Solicitors Regulation Authority requires law firms to maintain competence in their use of technology (SRA Competence Statement). For AI agents:

  • Confidentiality. Client data processed by AI agents remains subject to legal professional privilege and the duty of confidentiality. Data must not be transmitted to third-party servers or used for model training.
  • Supervision. The SRA requires that work delegated to others (including AI systems) is properly supervised. A solicitor remains responsible for the accuracy and quality of AI-generated outputs.
  • Client disclosure. Firms should consider whether use of AI in delivering legal services should be disclosed to clients, particularly where it materially affects how work is performed.

Professional indemnity

Law firms using AI agents for substantive legal work should review their professional indemnity insurance terms. Some policies exclude or limit cover for losses arising from automated systems. The risk allocation between firm, AI vendor, and insurer needs explicit attention.

Healthcare: NHS and MHRA

AI agents in healthcare settings face regulation from multiple directions.

The UK Medical Devices Regulations 2002 (as amended) may classify AI software as a medical device if it is intended for diagnosis, prevention, monitoring, or treatment of disease. AI agents that triage patient communications or analyse clinical data likely fall within scope and require MHRA registration.

The NHS Data Security and Protection Toolkit sets baseline requirements for any organisation processing NHS data. AI agents must operate within the certified environment, with access controls, encryption, and audit logging meeting the toolkit standards.

Caldicott Principles govern the use of patient-identifiable data. AI agents processing health data must have a defined purpose, use the minimum necessary data, and operate under appropriate access controls.

The EU AI Act

The EU AI Act, which entered into force in August 2024 and applies in phases, introduces a risk-based classification system. Most AI agent deployments in regulated industries fall into the high-risk category under Annex III, which covers AI systems used in:

  • Credit scoring and creditworthiness assessment
  • Insurance pricing and claims assessment
  • Employment and worker management
  • Access to essential services

High-risk AI systems must meet requirements including:

  • Risk management systems (Article 9): ongoing identification and mitigation of risks throughout the system lifecycle.
  • Data governance (Article 10): training, validation, and testing datasets must meet quality criteria including completeness, statistical properties, and suitability for the intended purpose.
  • Technical documentation (Article 11): detailed documentation of the system’s design, development, and intended use.
  • Record keeping (Article 12): automatic logging of events throughout the system’s lifecycle to enable traceability.
  • Transparency (Article 13): systems must be designed to allow deployers to interpret outputs and use them appropriately.
  • Human oversight (Article 14): systems must allow effective oversight by natural persons, including the ability to override or reverse outputs.
  • Accuracy, robustness, and cybersecurity (Article 15): systems must achieve appropriate levels of accuracy and be resilient to errors and adversarial attacks.

Firms operating across the UK and EU must satisfy both UK and EU requirements. The UK has not adopted equivalent legislation, but the FCA and PRA have signalled that existing regulatory frameworks already capture most AI-specific risks.

What this means for architecture

These overlapping frameworks add up to a concrete set of architectural requirements:

Audit trails must capture every agent action, the data that informed it, the model and configuration that produced it, and any human review. Records must be immutable, timestamped, and retained for the period required by the applicable regulation (typically 5-7 years in financial services).
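Immutability can be made verifiable rather than asserted. One common pattern, sketched below, is hash chaining: each log entry includes the hash of the previous entry, so editing any record invalidates every hash after it. This is illustrative only; a production system would use an append-only store with signed, independently timestamped entries.

```python
# Sketch: a tamper-evident audit trail via hash chaining (illustrative).
import hashlib
import json

def append_entry(log: list, entry: dict) -> list:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev_hash, "hash": digest})
    return log

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev = "0" * 64
    for item in log:
        payload = json.dumps(item["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if item["prev"] != prev or item["hash"] != expected:
            return False
        prev = item["hash"]
    return True
```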

Human-in-the-loop controls must be configurable per process and per risk level. High-stakes decisions require human approval before execution. The system must route escalations and record the reviewer’s decision.
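Per-process, per-risk-level routing reduces to a small configuration table plus a fail-closed default: unknown processes escalate. The process names and thresholds below are hypothetical configuration values.

```python
# Sketch: risk-based human-in-the-loop routing. Process names and
# thresholds are hypothetical configuration, not prescribed values.
REVIEW_THRESHOLDS = {
    "claims_payout": 0.7,      # escalate when risk score exceeds this
    "client_onboarding": 0.0,  # effectively always escalate
}

def route(process: str, risk_score: float) -> str:
    """Fail closed: an unconfigured process always goes to a human."""
    threshold = REVIEW_THRESHOLDS.get(process)
    if threshold is None or risk_score > threshold:
        return "escalate_to_human"
    return "auto_execute"
```

The reviewer's eventual decision would then be written back into the same audit trail as the agent's original output.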

Data residency must be controllable. For most regulated deployments, this means private infrastructure within a specific jurisdiction. Data must not leave the designated boundary, including for model inference.

Model governance must track which model version produced which outputs. When models are updated, the change must be documented and the impact on existing processes assessed.

Access controls must enforce role-based permissions covering who can build agents, deploy them, review their outputs, and modify their configuration.
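The four capabilities named above (build, deploy, review, modify) map directly onto a role-to-permission table. The roles below are illustrative, not a prescribed model.

```python
# Sketch: role-based permission checks for agent lifecycle actions.
# Roles and permission names are illustrative.
ROLE_PERMISSIONS = {
    "builder":  {"build"},
    "deployer": {"build", "deploy"},
    "reviewer": {"review_outputs"},
    "admin":    {"build", "deploy", "review_outputs", "modify_config"},
}

def can(role: str, action: str) -> bool:
    """Deny by default: unknown roles have no permissions."""
    return action in ROLE_PERMISSIONS.get(role, set())
```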

Explainability must be built into the output layer. Every agent decision must include a reasoning trace that can be presented to regulators, clients, or affected individuals in plain language.
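A reasoning trace only satisfies this requirement if it can be rendered in plain language. The sketch below assumes the agent runtime emits ordered (check, observed value) pairs; that step structure is a hypothetical interface, not a given.

```python
# Sketch: rendering a reasoning trace as plain language. The
# (check, observed value) step structure is a hypothetical interface.
def render_trace(decision: str, steps: list[tuple[str, str]]) -> str:
    """steps: (rule_or_check, observed_value) pairs, in the order applied."""
    lines = [f"Decision: {decision}", "Because:"]
    lines += [f"  - {rule}: {value}" for rule, value in steps]
    return "\n".join(lines)
```

The same structured steps can feed both the customer-facing explanation and the regulator-facing audit record, so the two never diverge.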

None of this is optional. These are preconditions for deployment in any regulated environment. Systems designed without them from the start rarely achieve compliance through retrofitting.

