As UK law firms embed AI tools into due diligence, a sharper regulatory question has surfaced: who answers when the system misses something material? A new Legal Futures blog post argues this is the Solicitors Regulation Authority’s next blind spot, focusing attention on the balance between innovation and accountability in high-stakes legal work.

Partners now rely on machine-driven document review to speed up mergers, property deals and financing transactions. Yet clients still expect lawyers to spot the outliers, the hidden liabilities and the one clause that changes the risk calculus. The tension sits at the heart of professional standards: firms embrace efficiency, but they cannot outsource responsibility. It now tests the SRA’s outcome-focused regime and the profession’s risk controls in a fast-moving AI market.
The debate broke into view on 27 November 2025, when Legal Futures published the commentary highlighting the risk that AI-assisted due diligence may miss material issues. The post zeroed in on UK practice and the regulatory framework that governs solicitors and law firms in England and Wales.

AI due diligence moves from pilot to practice
Over the past few years, major firms and mid-market practices have rolled AI-driven document analysis into core due diligence tasks. Tools classify contracts, extract key terms, flag anomalies and triage large data rooms. Practice groups use them across M&A, real estate, banking and commercial matters. Firms report faster review cycles and more consistent first-pass outputs. Clients press for time and cost savings, and competitive tenders often assume technology-enabled delivery.
This shift changes workflows. Lawyers now set up models, tune prompts, define clause taxonomies and design sampling plans. Supervisors decide where to rely on machine outputs and where to escalate to human review. While technology improves speed, it also introduces new failure modes. A model might misread a bespoke indemnity or miss an obscure change-of-control trigger hidden in an appendix. When stakes run high, even one miss can have real consequences.
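What those set-up decisions look like in practice can be shown with a minimal sketch. The clause categories, risk tiers, sampling rates and confidence floor below are hypothetical illustrations, not a recommended standard or any particular vendor’s configuration.

```python
# Illustrative sketch: a clause taxonomy with risk-based human-review sampling.
# Categories, risk tiers, rates and the confidence floor are hypothetical examples.

from dataclasses import dataclass
import random

@dataclass
class ClauseCategory:
    name: str
    risk: str            # "high", "medium" or "low"
    sample_rate: float   # share of machine-reviewed documents re-checked by a lawyer

TAXONOMY = [
    ClauseCategory("change_of_control", risk="high", sample_rate=1.00),   # always human-reviewed
    ClauseCategory("indemnity", risk="high", sample_rate=1.00),
    ClauseCategory("assignment", risk="medium", sample_rate=0.25),
    ClauseCategory("boilerplate_notices", risk="low", sample_rate=0.05),
]

def needs_human_review(category: ClauseCategory, model_confidence: float) -> bool:
    """Escalate to a lawyer if the model is unsure or the category is sampled."""
    if model_confidence < 0.80:          # hypothetical confidence floor
        return True
    return random.random() < category.sample_rate
```

The point of the sketch is the design choice it encodes: high-risk categories never rely on the machine alone, and low confidence always triggers escalation.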
The risk that keeps partners awake: material omissions
Material omissions sit at the centre of the concern. AI models perform well on standard clauses and high-volume patterns. They struggle more with atypical drafting, poor scans, foreign language inserts or complex conditionality. Automation bias can creep in: reviewers may trust machine flags too much and spend less time on documents that the system marks as “clean.”
Law firms can mitigate this risk, but they must do it deliberately. Robust sampling plans, second-level review for high-risk categories, and escalation triggers for non-standard documents help. Firms also need validation data, not just vendor demos, to understand how a tool performs on their document mix. The more a firm measures precision and recall against representative samples, the better it can set thresholds and assign work.
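One way to turn that measurement into routine practice is a small validation harness run against a lawyer-labelled sample before a tool goes live. The sketch below is illustrative only; the labels and counts are invented for the example.

```python
# Minimal sketch: measuring a review tool's precision and recall on a
# lawyer-labelled validation sample. The data here is invented for illustration.

def precision_recall(labels, predictions):
    """labels/predictions are booleans: True = document contains a material issue."""
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical run on 8 documents reviewed by a lawyer (labels) and by the tool (predictions).
labels      = [True, True, False, False, True, False, False, True]
predictions = [True, False, False, False, True, True, False, True]

precision, recall = precision_recall(labels, predictions)
print(f"precision={precision:.2f} recall={recall:.2f}")
# For material omissions, recall is the figure to watch: a missed issue is a false negative.
```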
Where the SRA stands today
The SRA regulates outcomes rather than specific technologies. Its Principles and Codes set expectations on competence, service quality, supervision, confidentiality and governance. The framework places responsibility on firms and individuals to deliver a proper standard of service, to supervise work effectively and to protect client information. It does not certify AI tools or approve models. That approach gives firms flexibility, but it also puts the burden on them to show they control the risks.
Under the SRA’s rules for firms, practices must maintain effective systems and controls. In the AI context, that means governance over model selection, testing, deployment, supervision and incident response. If a tool supports a core legal task, the supervising solicitor remains accountable for the work product. The choice to use technology does not dilute that duty.
Accountability and the chain of reliance
Responsibility runs through several links: the client, the firm, the supervising partner and the technology vendor. Engagement letters and scoping documents should make clear what the firm will do, what technology it will use and where limits apply. Many vendor contracts contain disclaimers and allocate risk back to the buyer. Without careful negotiation, those terms can leave the firm carrying the professional risk if the tool misses something.
Internally, partners need to map accountability. Who signs off on model use? Who sets acceptance criteria? Who records sampling rates and error findings? The clearer the accountability map, the easier it becomes to explain decisions if a dispute arises. When everyone knows the threshold for “human eyes only,” teams reduce the chance that reliance on AI turns into a negligence claim.
Data protection and confidentiality pressures
AI-assisted due diligence often involves personal data and sensitive commercial information. UK data protection law, enforced by the Information Commissioner’s Office, requires firms to identify lawful bases, ensure security and manage risk when they process personal data. Cross-border processing, vendor hosting and subcontractors can add complexity. Client confidentiality rules under the SRA framework also apply in full.
Firms can reduce exposure by using privacy-preserving configurations, controlling data flows, and restricting training uses by vendors. They can keep logs of what data goes where and why, and they can conduct data protection impact assessments when risk warrants. Where clients insist on particular security or residency requirements, firms must align tool configuration with those commitments.
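One lightweight way to keep such records is a simple data-flow register. The entry below is a sketch only; the fields are an illustrative minimum, not a compliance template or a statement of what UK data protection law requires.

```python
# Illustrative data-flow register entry for AI-assisted review.
# Field names are example assumptions, not a compliance template or legal advice.

from dataclasses import dataclass, asdict

@dataclass
class DataFlowEntry:
    matter_id: str
    dataset: str                     # e.g. a named batch from the data room
    contains_personal_data: bool
    destination: str                 # vendor or hosting environment
    region: str                      # data residency
    vendor_training_permitted: bool
    retention: str
    dpia_reference: str              # blank if a DPIA was judged unnecessary

entry = DataFlowEntry(
    matter_id="M-2025-041",
    dataset="employment contracts, batch 2",
    contains_personal_data=True,
    destination="review-tool SaaS instance",
    region="UK",
    vendor_training_permitted=False,
    retention="delete 30 days after completion",
    dpia_reference="DPIA-2025-07",
)
print(asdict(entry))
```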
Training, supervision and the audit trail
Competence remains a core duty. Lawyers need training that blends legal judgment with AI literacy. Teams should learn how models behave, where they fail and how to design quality checks. Supervision must adapt too: supervisors should review outputs against risk-based samples and record findings. An audit trail that captures configuration, prompts or instructions, sample results, exceptions and sign-offs creates defensible evidence of care.
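What such an audit-trail record might capture can be sketched very simply. The structure and field names below are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of an audit-trail record for one AI-assisted review run.
# The structure and field names are illustrative assumptions, not a standard.

import json
from datetime import datetime, timezone

def audit_record(matter_id, tool_version, prompt_ref, sampled, errors_found, exceptions, reviewer):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,
        "tool_version": tool_version,          # model and configuration in use
        "prompt_or_instructions": prompt_ref,  # reference to the prompt pack used
        "documents_sampled": sampled,
        "errors_found": errors_found,
        "exceptions_escalated": exceptions,    # documents sent for full human review
        "signed_off_by": reviewer,
    }

record = audit_record(
    matter_id="M-2025-041",
    tool_version="review-tool v4.2, clause-pack 2025-10",
    prompt_ref="prompts/ma-standard-v7",
    sampled=120,
    errors_found=3,
    exceptions=9,
    reviewer="supervising partner",
)
print(json.dumps(record, indent=2))
```

Records like this, kept per run rather than per matter, are what turn a supervision policy into evidence that supervision actually happened.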
Policies matter only if teams use them. Clear playbooks, feedback loops and periodic audits help convert policy into practice. When firms update clause libraries or risk matrices based on real findings, technology use improves. That continuous learning closes gaps and reduces repeat errors.
Insurers watch the claims horizon
Professional indemnity insurers track causes of loss in legal practice. Although AI-driven due diligence promises fewer routine errors, it can concentrate risk in areas where the system underperforms and teams over-rely on it. Brokers and underwriters pay close attention to governance, testing and supervision when they assess firms that lean on AI.
Well-documented controls can support a better underwriting conversation. Firms that demonstrate disciplined testing, defined escalation and strong incident response will find it easier to reassure insurers. Where policies exclude certain technology risks or require notifications on tool changes, firms should review terms and adjust their governance to comply.
What clients need to know—and when
Clients want faster, cheaper and reliable due diligence. Many welcome technology if it preserves quality. Firms can meet those expectations by setting clear scope, explaining review methodologies at a high level and matching pricing to the actual process. Some matters may justify telling clients that the firm will use AI tools, particularly where technology shapes timelines, staffing or cost.
Transparency does not mean technical overload. Plain language about how the firm controls quality, supervises outputs and handles sensitive data can build trust. If a client has specific constraints—sector rules, data residency or deal sensitivities—the firm can tailor its approach and document the choices.
Calls grow for clearer guardrails and shared standards
The Legal Futures warning will likely fuel calls for sector-wide guardrails. Many practitioners favour practical standards over tool-specific rules: minimum testing before live use, baseline sampling rates for high-risk clauses and standardised audit evidence. Vendors, too, face pressure to provide better assurance: model cards, performance benchmarks on common clause types, and configuration guidance for legal use cases.
Shared frameworks would not replace professional judgment, but they would help firms avoid known pitfalls and accelerate safe adoption. They could also ease client concerns and support insurers who need clearer signals in a changing risk landscape.
The Legal Futures blog has sharpened a necessary debate: AI can speed due diligence, but it cannot carry professional responsibility. The SRA’s outcome-focused model gives firms room to innovate, and it also expects clear governance, competent supervision and robust client care. In the months ahead, firms may update engagement terms, strengthen testing and supervision, and build the audit trails that show where human judgment stayed in charge.
