UK National Commission calls for evidence on the regulation of AI in healthcare

AI is set to transform and disrupt the way in which healthcare is delivered.  The Government’s 10-year health plan for England commits the NHS to becoming “the most AI-enabled healthcare system in the world”, supported by the delivery of a new regulatory framework for medical devices including AI.

On 18 December 2025 the “National Commission on the Regulation of AI in Healthcare” published its formal Call for Evidence.[1]

This post considers what the National Commission is, how AI is already being used in healthcare, and the questions posed by its Call for Evidence.

How is AI being used in Healthcare?

The Call for Evidence begins by illustrating a number of early use cases for AI in UK healthcare:

  • Chatbots and Apps: Nearly 1 in 10 people now use AI-powered chatbots to get health advice. Many apps use AI to analyse health data from devices like smartwatches.
  • Admin Support: Some hospitals use automated systems to invite patients to appointments or screenings.
  • Voice Technology: AI can record and summarise doctor-patient conversations, helping doctors spend less time taking notes. Some services even give patients a summary of their visit and advice based on it.
  • Screening and Diagnosis: AI can help spot diseases (like cancer), support doctors in making treatment decisions, and assist with therapies. Some AI tools focus on specific tasks, while others have a broader range of applications.

It is implicit in the MHRA’s Call for Evidence that these are early use cases and that the level of AI disruption is set to accelerate. The time for careful thought about the regulatory framework is therefore now.

What is the National Commission?

In September 2025, the Medicines and Healthcare products Regulatory Agency (MHRA) launched a new National Commission on the Regulation of AI in Healthcare.

The Commission brings together global AI leaders, clinicians and regulators to advise on the development of a new regulatory framework for AI in healthcare. It is chaired by Professor Alastair Denniston.

The Commission’s recommendations will be published in 2026.  The scope of the Commission’s work will include:

  • Reforming the existing regulatory framework to ensure that healthcare AI is “safe, fast and trusted”.
  • How legal liability for adverse outcomes should be managed and distributed.

The Existing Regulatory Framework

The Call for Evidence contains a helpful overview of the existing regulatory framework for AI in healthcare.

The Medical Devices Regulations 2002 ensure that products placed on the market in Great Britain meet performance and safety requirements. AI and other forms of software which have a medical purpose already fall within the definition of “medical devices”.

The Regulations include a requirement for pre-market assessments. For low-risk devices, manufacturers must self-declare conformity with the requirements of the regulations. For medium-risk and high-risk medical devices, approved bodies carry out independent assessments of conformity.

In June 2025 the MHRA introduced new regulations covering post-market surveillance. Once an AI medical device is in use, the manufacturer must keep checking that it is safe and working well. Manufacturers must have a plan for monitoring their devices; report and investigate serious problems; take action to fix issues and inform customers; and regularly review device safety.

The Existing Liability Framework

The Call for Evidence does not summarise the existing law governing liability for medical AI. However, applicable law in Great Britain currently includes:

  • Product Liability Legislation: The Consumer Protection Act 1987 makes manufacturers liable, subject to a number of statutory defences, where products are “defective” within the meaning of the Act.
  • Clinical Negligence Liability: Hospitals, doctors and other healthcare providers may be liable in tort, at common law, if negligent care is provided.

The Questions in the Call for Evidence: Regulatory Standards

Concerning regulatory standards, the key questions asked by the Call for Evidence include the following:

  • Is the current regulatory framework sufficient in the following domains: safety and performance standards, data privacy/governance, transparency, requirement for clinical evidence, post-market surveillance?
  • Is the current framework’s impact on innovation too restrictive, or is it too permissive?
  • How might the UK’s regulatory framework be improved to ensure fast access to safe and effective AI?
  • How should the regulatory framework manage post-market surveillance for AI health technologies?
  • How could manufacturers of AI health technologies, healthcare provider organisations, healthcare professionals, and other parties best share responsibility for ensuring AI is used safely and responsibly?

The Questions in the Call for Evidence: Liability

Of particular interest to lawyers working in healthcare liability is the focus on liability issues.

At its meeting on 20 November 2025 the Commission noted:

“Commissioners agreed that the lack of clarity around defining AI liability was a significant blocker of its adoption in healthcare”.

The liability questions in the Call for Evidence therefore include the following:

  • Is the existing legal framework for establishing liability for harm caused by healthcare AI sufficient?
  • Where an AI tool causes an adverse patient outcome, where should liability lie:

a) When the AI tool gives the correct answer, but is incorrectly overridden by the healthcare professional?

b) When the AI tool gives the incorrect answer and the healthcare professional follows it (i.e. they incorrectly choose to trust the AI)?

Where should liability lie?

There will undoubtedly be a range of views on the questions posed by the Call for Evidence. The issue of liability for healthcare AI has been discussed by the author elsewhere.[2] Some brief introductory thoughts are set out below.

My overarching view is that there is no perfect solution. The fair and just determination of who should be liable in any particular case is highly fact sensitive. At a high level, and from a policy perspective, there are arguments for and against all of the options.

In practice, AI will be deployed and supervised by individual healthcare professionals. AI is already being deployed by radiologists in breast imaging and by dermatologists to review skin lesions. Under the EU’s comprehensive new AI Act, “high risk” systems, which include medical AI, must be designed and developed with human oversight. A human in the loop is considered necessary to constrain and override harmful decisions. There are strong arguments that a similar provision should apply in UK law. The logic of that position suggests that human supervisors should also be liable where that function is discharged negligently.

On the other hand, it may not be fair or appropriate to treat healthcare professionals as ‘liability sinks’ for all patient harm arising from the use of medical AI. First, there is the “black box” problem. The speed of processing and the fact that AI models do not process information in natural human language may make it difficult for practitioners to understand why a system has produced a particular answer. Second, AI systems might well act with increasing degrees of autonomy in future. “Agentic AI” refers to AI systems that can take in goals, decide for themselves what steps to take, and then act to carry out those steps without needing constant human instruction. The more autonomously AI behaves, the less ability human supervisors have to predict or prevent adverse outcomes.

What of healthcare institutions? The imposition of liability on NHS Trusts and private hospitals would incentivise responsible choice, testing and surveillance of new AI systems. At the level of legal theory, such liability might be justified in the same way as vicarious liability for doctors and nurses. According to “enterprise risk” theory, it is fair and just for institutions to be liable for harms that are inherent to, and externalised by, their business models. That includes AI-enabled business models. Large organisations are in a position to insure against such risks. They can also seek a contractual indemnity from the manufacturers of the AI systems that they deploy.

On the other hand, we risk discouraging the adoption of AI if open-ended liability is imposed on healthcare institutions. This would not be in the public interest. In the words of the MHRA’s consultation:

“AI could transform patient care to be safer, faster and more personalised; it could improve productivity in the NHS and wider health and care sector”.

Finally, we may wish to supplement and reform the existing law on product liability. The EU has already led the way with a new Product Liability Directive which expressly encompasses AI systems.[3] One feature of the EU’s liability regime is that claimants do not need to prove negligence or fault. Instead, a product is “defective” if it fails to meet the safety expectations of the general public or the requirements of EU law.  The requirements of EU law include the detailed AI-specific regulations contained in the EU’s new AI Act. Harmonising regulatory and liability standards in this way provides clarity for manufacturers and redress for patients.

On the other hand, some commentators have suggested that the requirements of the EU AI Act are overly burdensome and thereby anti-competitive. The MHRA’s paper envisages that AI adoption could “accelerate a thriving health technology sector that supports UK growth”. These ambitions are unlikely to be realised if the liability regime is too onerous and complex.

Achieving the right balance between accountability and innovation will require diverse perspectives. The Commission will benefit from contributions by healthcare professionals, institutions, regulators, technologists, and lawyers.

The Call for Evidence is open until Monday 2 February 2026.

Robert Kellar KC is a barrister at 1 Crown Office Row.


[1] https://www.gov.uk/government/calls-for-evidence/regulation-of-ai-in-healthcare

[2] AI in Healthcare: Redefining Liability for Doctors and Hospitals (British Journal of Hospital Medicine, Vol 86, No 9) https://www.magonlinelibrary.com/doi/full/10.12968/hmed.2025.0212

[3] Product Liability Directive (EU) 2024/2853
