Dr Richard Dune

21-02-2025

Why should AI in healthcare be regulated?

Image by DC_Studio via Envato Elements

Exploring the need for strong AI regulation to ensure patient safety, uphold ethical practice, and foster innovation

Artificial Intelligence (AI) is reshaping health and social care at an unprecedented pace, bringing transformative advances in predictive analytics, automated diagnostics, and clinical decision support systems. AI promises significant benefits: efficiency, cost savings, and improved patient outcomes. However, as AI continues to permeate these sectors, the absence of a structured regulatory framework raises serious concerns about patient safety, bias, and accountability.

To ensure AI tools serve patients, providers, and health systems ethically and safely, they must undergo the same regulatory rigour as pharmaceuticals and medical devices.

In this blog, Dr Richard Dune outlines the necessity for a gold-standard AI regulation framework and explains the key components that must be implemented to create a safe, transparent, and accountable AI ecosystem in healthcare.

AI in health and social care - Why regulation is crucial

The development of AI in healthcare is progressing rapidly, yet it remains largely unregulated. AI-driven tools therefore operate in uncharted territory, often deployed before they are fully scrutinised. Unlike new drugs, medical devices, and treatments, which undergo rigorous clinical trials and regulatory assessment, AI tools that influence clinical decisions often lack the same level of oversight. This regulatory gap poses a fundamental risk to health and social care systems worldwide: without appropriate governance, unregulated AI could result in unsafe medical decisions, entrenched bias, and loss of public trust.

We must ask ourselves: ‘Why do AI systems that directly impact patient care not go through the same rigorous process as pharmaceuticals or medical devices?’

Some AI advocates argue that regulation stifles innovation, but this is a flawed premise. For AI to truly support diagnoses, suggest treatments, and optimise clinical workflows, it must meet the same standards as other medical innovations. Just as we would not allow an untested drug to be given to patients, we cannot afford to deploy unvalidated AI systems in clinical environments.

The risks of unregulated AI in health and social care

AI has the potential to transform healthcare by enhancing patient care, but without the proper regulatory measures in place, it poses significant risks:

Algorithmic bias and health inequality

AI models are trained on historical data, which may reflect biases in past healthcare decisions. These biases could lead to disparities in care, reinforcing existing inequalities. For example, AI-powered diagnostics trained predominantly on data from white male patients may produce inaccurate results for women or ethnic minorities.

Regulation solution - AI regulation must ensure diverse, representative datasets and ongoing monitoring to detect and mitigate bias in AI models, ensuring equitable and accurate healthcare delivery for all patients.
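To make this concrete, here is a minimal sketch of one piece of such an audit in Python: comparing a model's sensitivity (recall) across demographic subgroups on a labelled evaluation set. The column names and figures are hypothetical, and a real audit would cover many more metrics and intersectional subgroups.

```python
# Minimal sketch of a subgroup bias audit: compare a model's sensitivity
# (recall) across demographic groups on a labelled evaluation set.
# Column names are hypothetical; a real audit would also examine
# specificity, calibration, and intersectional subgroups.
import pandas as pd
from sklearn.metrics import recall_score

def audit_subgroup_recall(df, group_col, y_true_col, y_pred_col):
    """Return recall per demographic subgroup; large gaps flag potential bias."""
    results = {}
    for group, g in df.groupby(group_col):
        results[group] = recall_score(g[y_true_col], g[y_pred_col])
    return pd.Series(results, name="recall")

# Hypothetical usage:
# per_group = audit_subgroup_recall(eval_df, "ethnicity",
#                                   "condition_present", "ai_flagged")
# print(per_group)  # e.g. 0.92 for one group vs 0.71 for another
#                   # would warrant investigation before deployment
```

A regulator could require per-group reports of this kind both before approval and as part of ongoing monitoring.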

Lack of transparency (black-box decision-making)

A key concern with AI in healthcare is the "black-box" nature of many algorithms. These systems often produce recommendations without clear explanations, making it difficult for clinicians to understand or challenge them. In a sector where clinical accountability is paramount, this lack of transparency can undermine trust in AI systems.

Regulation solution - Regulatory frameworks should require AI developers to implement explainable AI (XAI) systems, allowing clinicians to investigate and challenge decisions before taking action, ensuring a higher level of accountability and trust.
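As a simple illustration of what "explainable" can mean in practice, the sketch below decomposes a linear model's risk score into per-feature contributions a clinician could inspect and question. It assumes a fitted binary logistic regression on standardised features; the feature names and data are invented, and genuine XAI tooling for complex models goes well beyond this.

```python
# Minimal sketch of explainable output for a linear clinical model.
# For a fitted binary logistic regression on standardised features, each
# feature's contribution to the decision score is coefficient * value,
# giving a ranked, inspectable breakdown. Names and values are invented.
import numpy as np

def explain_prediction(coefficients, patient_features, feature_names):
    """Rank each feature's signed contribution to a linear risk score."""
    contributions = np.asarray(coefficients) * np.asarray(patient_features)
    return sorted(zip(feature_names, contributions),
                  key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical usage with model.coef_[0] from a fitted scikit-learn
# LogisticRegression:
# explain_prediction(model.coef_[0], patient_features,
#                    ["hba1c", "systolic_bp", "age"])
# The top entries show which inputs drove the recommendation, so a
# clinician can challenge a result driven by an implausible factor.
```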

Regulatory gaps and ethical concerns

Currently, many policymakers and health leaders lack the necessary expertise to assess AI technology properly. This absence of clear governance allows AI adoption to be driven primarily by tech companies and commercial interests rather than patient safety and clinical needs.

Regulation solution - Comprehensive AI regulation should include independent validation of AI tools before deployment in clinical settings, clear accountability frameworks for AI-driven decisions, and ethical guidelines prioritising patient autonomy and consent in AI-driven care.

Workforce readiness and training

AI should augment human expertise, not replace it. However, healthcare professionals must be adequately trained to work alongside AI tools. If AI adoption outpaces workforce development, there is a risk of overreliance on automated decisions without the necessary human oversight.

Regulation solution - AI regulation should require structured AI competency training for clinicians and healthcare professionals, ensuring responsible and effective use of AI in clinical settings.

Global divergence in AI governance - The need for a unified approach

Despite AI's undeniable potential, global regulation remains fragmented. Recent discussions at the Paris AI Summit (February 2025) and the Munich Security Conference (15 February 2025) highlighted the divide between regional approaches to AI policy: the European Union and China are pushing for stricter controls, while the United States and the United Kingdom advocate a more flexible, innovation-driven approach. This divergence creates risk, as AI developers could deploy systems in less-regulated regions, exacerbating disparities in patient safety and care quality.

We must ask: ‘Would we accept a healthcare system where drugs and treatments were regulated in some countries but not in others?’

To ensure patient safety globally, we must move towards multilateral cooperation. The UK, US, EU, and other major AI stakeholders must align on baseline safety and ethical standards, creating a unified framework that protects patients while promoting innovation.

Bridging the gap - A call for a gold-standard AI regulation framework

To ensure AI serves health systems safely and ethically, we must develop a comprehensive regulatory framework prioritising patient safety, transparency, accountability, and global cooperation. Here’s how a gold-standard regulation framework can be achieved:

AI clinical trials and regulatory oversight

AI systems influencing clinical decisions should undergo the same rigorous validation processes as medicines and medical devices. This includes:

  • Pre-market validation - AI systems must be tested through controlled clinical trials to prove their safety, efficacy, and cost-effectiveness, ensuring they meet the necessary clinical standards before deployment.
  • MHRA & NICE oversight - AI-driven diagnostic tools and treatment recommendations should undergo scrutiny from regulatory bodies such as the Medicines and Healthcare products Regulatory Agency (MHRA) and the National Institute for Health and Care Excellence (NICE) to ensure they meet clinical efficacy and cost-effectiveness standards.
  • Post-market surveillance - AI tools must be subject to continuous monitoring post-deployment to ensure that their real-world performance matches pre-market testing and remains effective and safe throughout their use (a simple monitoring sketch follows this list).
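As a rough sketch of what the post-market check in the last bullet could look like, the snippet below compares a periodic batch of labelled real-world outcomes against a performance floor agreed at approval. The baseline AUC and alert margin are illustrative placeholders, not regulatory values.

```python
# Rough sketch of post-market performance surveillance: compare live
# batches of labelled outcomes against a floor agreed at approval.
# PRE_MARKET_AUC and ALERT_MARGIN are illustrative placeholders only.
from sklearn.metrics import roc_auc_score

PRE_MARKET_AUC = 0.90   # hypothetical figure from pre-market validation
ALERT_MARGIN = 0.05     # hypothetical tolerated degradation before review

def check_live_batch(y_true, y_score):
    """Return live AUC and flag the model for review if it drifts below the floor."""
    live_auc = roc_auc_score(y_true, y_score)
    floor = PRE_MARKET_AUC - ALERT_MARGIN
    if live_auc < floor:
        print(f"ALERT: live AUC {live_auc:.3f} below floor {floor:.3f}; "
              "escalate for clinical review")
    return live_auc
```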

Algorithmic transparency and accountability

For AI to be trusted and used effectively in healthcare, developers must ensure transparency in how AI systems make decisions:

  • Explainability requirements - AI systems must provide clinicians with clear, interpretable outputs to facilitate informed decision-making.
  • Bias detection and mitigation - Regular audits must be conducted to detect and correct any biases in AI models, ensuring they provide accurate outcomes for all patient demographics.
  • Clear accountability structures - Responsibility for AI-driven errors must be clearly defined, whether at the developer, provider, or clinician level.

Workforce training and AI competency standards

Healthcare professionals must be equipped to use AI responsibly:

  • AI literacy for healthcare professionals - Comprehensive training in AI best practices, ethical considerations, and risk mitigation should be standard for all clinicians.
  • Clinical AI certification - AI-driven decision-support tools must undergo certification processes, similar to medical equipment approvals, to ensure they meet safety and efficacy standards before being used in clinical settings.

International AI regulation and global standards

To prevent discrepancies in AI deployment across borders, a coordinated global effort is necessary:

  • Multilateral cooperation - Major AI players, such as the UK, US, EU, and China, must align on baseline safety and ethical standards to prevent fragmented regulation and ensure patient safety worldwide.
  • Public-private collaboration - Governments, healthcare providers, and tech companies must collaborate to create and implement AI regulatory frameworks that balance innovation and safety.

Conclusion - AI regulation is a necessity, not an obstacle

AI has immense potential to improve healthcare, but its deployment cannot be left to the market alone. Without regulation, AI poses significant risks, including exacerbating health inequalities, reducing transparency in clinical decision-making, and introducing patient safety risks.

Just as no new drug is introduced to the market without rigorous trials and regulatory approval, no AI system impacting clinical care should be used without equivalent safeguards. The UK’s innovation-first approach must be balanced with structured oversight to ensure AI benefits patients, not just tech companies.

Now is the time to act before we face the irreversible consequences of unregulated AI, which could lead to misdiagnoses, biased treatments, and patient safety failures that may take decades to resolve.

Call to action - Driving compliance and innovation with ComplyPlus™

As the development and integration of AI in healthcare continue to evolve, the need for robust regulatory frameworks becomes increasingly clear. The development of ComplyPlus™ has been driven by a commitment to ensuring healthcare providers meet the highest compliance standards while embracing innovation.

ComplyPlus™ empowers organisations to manage regulatory compliance, workforce training, and AI competency, ensuring they can safely integrate AI solutions into their practices. Fill in this form or contact us at +44 24 7610 0090 to learn how ComplyPlus™ can help your organisation navigate the complex regulatory and compliance landscape.

About the author

Dr Richard Dune

With over 25 years of experience, Dr Richard Dune has a rich background in the NHS, the private sector, academia, and research settings. His forte lies in clinical R&D, advancing healthcare tech, workforce development, and governance. His leadership ensures that regulatory compliance and innovation align seamlessly.
