
Industry Insights

Private AI Deployment for Law Firms: SRA Compliance, Privilege and Practical Architecture

CompleteFlow

In February 2026, the SRA authorised LawFairy as a technology-led law firm operating in UK immigration law. The firm uses deterministic, rule-based AI rather than traditional practitioners to deliver eligibility assessments and application preparation. It is the latest signal that AI-driven legal services are no longer a fringe proposition. The regulator is granting authorisation to firms built around technology.

That should concentrate minds. Not because every firm needs to become LawFairy, but because the firms still debating whether to adopt AI are now competing with firms that have regulatory approval to deliver legal services through AI systems. The question is no longer whether to use AI. It is how to use it without compromising professional obligations.

For most law firms, the answer depends on three things: what the SRA actually requires, what happens to privilege when client data leaves the firm’s infrastructure, and what architecture satisfies both.

What the SRA actually requires

The SRA regulates on outcomes, not on technology choices. The SRA Standards and Regulations do not mention AI explicitly. They do not need to. The obligations that govern AI adoption already exist in the Principles and the Code of Conduct.

Principle 2 requires solicitors to act in a way that upholds public trust and confidence in the profession. Principle 7 requires acting in the best interests of each client. Paragraph 3.5 of the Code of Conduct makes supervisors personally accountable for work carried out through others, including through systems. Paragraph 6.3 requires keeping the affairs of current and former clients confidential unless disclosure is required or permitted by law or the client consents.

The SRA’s own Risk Outlook report on AI in the legal market makes the position explicit: “you will remain responsible and accountable for the outputs from AI you are using.” That accountability cannot be delegated to a technology provider, an IT department, or an outsourcing arrangement.

The report identifies three specific confidentiality threats from AI:

  1. Direct exposure from staff uploading client data to online AI systems
  2. Training data leakage where confidential data is transferred to a provider for model training
  3. Cross-case contamination where an AI system replicates confidential details from one matter in its response to another

The SRA further states that firms must provide “clear rules covering” AI use and that staff must be trained to distinguish between informal use of tools like ChatGPT and formally adopted systems. Where AI is involved in a client matter, the firm must be prepared to disclose that involvement.

The regulatory framework is outcomes-focused. That means there is no prescriptive checklist to follow and no safe harbour from ticking boxes. A firm must be able to demonstrate, at any point, that its use of AI satisfies the Principles. If the regulator asks how client confidentiality is maintained when data is processed by a third-party AI system, the answer needs to be specific, documented, and defensible.

The privilege problem

Legal professional privilege is the foundation of the solicitor-client relationship. It is a substantive right, not a procedural convenience. And it is remarkably easy to compromise through technology choices that nobody in the firm’s management thought to scrutinise.

When a law firm sends client data to a SaaS AI provider, that data typically leaves the firm’s infrastructure and is processed on servers controlled by a third party. If that provider is incorporated in the United States or processes data on US-hosted infrastructure, the data falls within the reach of FISA Section 702.

FISA Section 702 authorises US intelligence agencies to compel US-based service providers to disclose data belonging to non-US persons located outside the United States, without a warrant. The compulsion comes with a gag order. The provider cannot notify the data subject, and in most cases cannot notify the customer. A UK law firm using a US-hosted AI tool would have no way of knowing whether privileged client material had been accessed under a FISA order.

This is not a theoretical scenario. It is the structural consequence of processing data on infrastructure subject to US jurisdiction. The Court of Justice of the European Union reached precisely this conclusion in Schrems II (Case C-311/18), which invalidated the EU-US Privacy Shield on the grounds that US surveillance law does not provide adequate protection for personal data transferred from the EU. The court found that Standard Contractual Clauses alone cannot compensate for a legal framework that permits mass surveillance of foreign communications.

The UK operates under its own adequacy regime post-Brexit, but the underlying analysis applies identically. UK GDPR Article 28 requires a data processing agreement with any processor of personal data that specifies the subject matter, duration, nature, purpose, and type of data processed. Most SaaS AI agreements do not meet this standard in any meaningful detail.

For privileged material, the exposure is worse than a data protection violation. Privilege can be waived inadvertently. If a firm transmits privileged material to a third-party system without adequate safeguards, and that material is accessed or disclosed, the privilege may be lost. The principle was examined in Mazur v Triathlon Association [2020] EWHC 2032 (QB), where the court considered whether privilege was waived through technological means. The case reinforced that firms bear the burden of maintaining privilege through their technology choices. A passive assumption that a vendor will protect confidentiality is not sufficient.

The practical implication is straightforward. A firm that processes privileged client material through a US-hosted AI service is accepting a risk that the SRA’s confidentiality obligations, the Data Protection Act 2018, and the common law of privilege all require it to mitigate. The mitigation is not a better contract with the vendor. The mitigation is keeping the data within infrastructure the firm controls.

The supervision gap

The SRA’s accountability framework creates a second problem with hosted AI services: supervision.

Rule 3.5 requires that solicitors who supervise or manage others remain accountable for the work carried out through them. The SRA has confirmed this extends to AI systems. A partner who relies on an AI tool to draft a contract clause, review a disclosure bundle, or summarise case law is accountable for the output. That accountability requires the ability to inspect, verify, and audit the system’s reasoning.

Most SaaS AI tools are opaque. The user submits a prompt and receives an output. There is no audit trail showing what the model considered, what context it retrieved, what confidence level it assigned, or what alternative outputs it discarded. The firm receives a result but cannot reconstruct the reasoning path. When the SRA asks how a piece of work was supervised, “the AI produced it and a solicitor reviewed the output” may not satisfy a regulator that expects documented oversight of the decision-making process.

The SRA’s Risk Outlook report advises firms: “Do not trust an AI system to judge its own accuracy.” That recommendation presupposes the firm has the ability to independently verify accuracy. With a hosted tool that provides no transparency into its reasoning, independent verification is limited to checking whether the output looks right. That is pattern matching, not supervision.

Architecture that satisfies both obligations

A private deployment architecture addresses the privilege and supervision problems simultaneously. The design principles are not complex, but they must be structural rather than bolted on.

Data boundary enforcement

The AI infrastructure sits within the firm’s own cloud tenancy or on-premises environment. Models run in containers within that boundary. Client data never leaves the firm’s infrastructure. No data is transmitted to external APIs, third-party model providers, or cloud services outside the firm’s control.

This eliminates the FISA Section 702 exposure entirely. There is no US-incorporated service provider to compel. There is no cross-border data transfer to assess under Schrems II. Privilege is maintained because the data remains within the same infrastructure boundary as the firm’s other confidential systems.

For UK-based firms, this typically means deployment in a UK cloud region (Azure UK South, AWS eu-west-2) with the firm holding its own encryption keys through Azure Key Vault or AWS KMS. The firm controls access. The firm controls retention. The firm controls deletion.
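Enforcing the boundary also belongs in the application layer, not just the network layer. The sketch below shows one way to do that: an egress guard that refuses any request whose destination is not an allowlisted in-tenancy endpoint. The host names are hypothetical placeholders, not real services.

```python
from urllib.parse import urlparse

# Hosts inside the firm's own tenancy; everything else is refused.
# These domain names are illustrative placeholders.
ALLOWED_HOSTS = {
    "inference.internal.firm.local",  # in-tenancy model endpoint
    "vault.internal.firm.local",      # firm-held key management
}

def enforce_data_boundary(url: str) -> str:
    """Reject any request whose destination lies outside the firm's boundary."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"Blocked egress to untrusted host: {host}")
    return url
```

A guard like this is a backstop, not a substitute for network-level controls: the tenancy's firewall rules should deny external egress regardless, so that a coding mistake cannot route client data outside the boundary.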

Confidence scoring and human review gates

AI agent outputs should carry a confidence score derived from the quality and relevance of retrieved context, the consistency of the model’s reasoning, and the degree of alignment with known precedent or firm knowledge. This score is not decorative. It determines what happens next.

High-confidence outputs on routine tasks (document classification, metadata extraction, standard clause identification) can proceed to a review queue. Low-confidence outputs, or outputs on matters involving judgement, are routed to mandatory human review before any action is taken.

This architecture maps directly to the SRA’s supervision requirements. The firm can demonstrate that every AI-assisted output was subject to a governance process calibrated to risk. The audit trail records the confidence score, the routing decision, the reviewer’s identity, and the reviewer’s determination. When a regulator asks how supervision was maintained, the firm produces the log.
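The routing rule above can be sketched in a few lines. The task names and the 0.85 threshold below are illustrative assumptions, not values the framework prescribes; each firm would calibrate them to its own risk appetite.

```python
from dataclasses import dataclass

@dataclass
class AgentOutput:
    task: str          # e.g. "document_classification"
    confidence: float  # 0.0-1.0, derived from retrieval quality, reasoning consistency, etc.

# Routine tasks that may proceed to a standard review queue when confidence is high.
ROUTINE_TASKS = {"document_classification", "metadata_extraction", "clause_identification"}
CONFIDENCE_THRESHOLD = 0.85  # illustrative; calibrated per firm

def route(output: AgentOutput) -> str:
    """Return the review path an output must take before any action."""
    if output.task in ROUTINE_TASKS and output.confidence >= CONFIDENCE_THRESHOLD:
        return "review_queue"          # solicitor review, standard priority
    return "mandatory_human_review"    # blocked until a named reviewer signs off
```

Note the asymmetry: a high-confidence output on a judgement-laden task still goes to mandatory review, because the gate is keyed to task risk as well as model confidence.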

Immutable audit trails

Every interaction with an AI agent is recorded: the input, the retrieved context, the model version, the reasoning steps, the output, and any human review decision. The log is append-only and tamper-evident. Records cannot be modified or deleted.

This satisfies the SRA’s requirement for documented accountability and provides the evidence base for responding to complaints, regulatory inquiries, or professional negligence claims. It also gives the firm’s management visibility into how AI tools are actually being used, which is the foundation of effective supervision under Rule 3.5.

Role-based access controls

Access to AI agents and the data they process is governed by the same permission model as the firm’s other systems. A trainee working on a corporate transaction cannot query an agent loaded with family law matter data. A paralegal can run document extraction but cannot approve a client-facing output. The controls mirror the firm’s existing information barriers and supervision hierarchies.
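The two-axis check described above, role permissions plus matter-level walls, can be sketched as follows. The roles, matter identifiers, and permission names are hypothetical examples, not a real firm's scheme.

```python
# What each role may do; names are illustrative.
PERMISSIONS = {
    "trainee":   {"query_agent"},
    "paralegal": {"query_agent", "run_extraction"},
    "solicitor": {"query_agent", "run_extraction", "approve_output"},
}

# Matter-level walls mirroring the firm's information barriers.
MATTER_ACCESS = {
    "trainee_a":   {"corp_2024_017"},
    "paralegal_b": {"corp_2024_017", "corp_2024_021"},
}

def can(user: str, role: str, action: str, matter: str) -> bool:
    """Permit an action only if the role grants it AND the user is on the matter."""
    return action in PERMISSIONS.get(role, set()) and matter in MATTER_ACCESS.get(user, set())
```

So a paralegal on a corporate matter can run extraction there but cannot approve a client-facing output, and a trainee cannot query an agent loaded with a matter they are walled off from, matching the examples in the text.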

What this means in practice

A mid-sized law firm deploying AI agents on private infrastructure can:

  • Process privileged client material through AI systems without creating a cross-border data transfer or third-party access risk
  • Demonstrate to the SRA that every AI-assisted output was subject to documented supervision proportionate to the risk
  • Produce a complete audit trail for any matter, any agent interaction, any output, at any time
  • Maintain professional indemnity insurance coverage without the uncertainty of explaining US-hosted data processing to underwriters
  • Train staff with clear, enforceable rules about AI use backed by technical controls rather than policy documents alone

The SRA’s approach to technology is collaborative. The regulator has stated it wants to support innovation and has operated an innovation space for firms testing new approaches. But that support operates within the existing framework. A firm that cannot demonstrate how its AI deployment satisfies Principle 2 and the confidentiality duty in paragraph 6.3 of the Code of Conduct will find the collaborative tone sharpens.

The competitive reality

The SRA’s authorisation of technology-led firms like LawFairy is not an isolated event. It reflects a regulatory position that AI in legal services is acceptable, provided the outcomes for clients are protected. Firms that delay AI adoption on the grounds that the regulatory position is unclear are misreading the situation. The position is clear. The Principles apply. The firm is accountable. The question is whether the firm’s architecture supports that accountability.

Private deployment is not the only way to use AI in a law firm. But for any work involving privileged material, confidential client data, or outputs that require documented supervision, it is the architecture that satisfies the SRA’s outcome-based framework without requiring the firm to accept risks that its professional obligations do not permit.


Ready to deploy AI agents in your organisation?

Book a 30-minute strategy session to explore how CompleteFlow fits your workflow.

Book a Call