Dr Richard Dune
21-02-2025
Why AI in healthcare should be regulated
Exploring the need for strong AI regulation to ensure patient safety and ethical practice while fostering innovation
Artificial Intelligence (AI) is reshaping health and social care at an unprecedented pace, bringing transformative advancements in predictive analytics, automated diagnostics, and clinical decision support systems. AI promises significant benefits: efficiency, cost savings, and improved patient outcomes. However, as AI continues to permeate these sectors, the absence of a structured regulatory framework raises serious concerns about patient safety, bias, and accountability.
To ensure AI tools serve patients, providers, and health systems ethically and safely, they must undergo the same regulatory rigour as pharmaceuticals and medical devices.
In this blog, Dr Richard Dune outlines the necessity for a gold-standard AI regulation framework and explains the key components that must be implemented to create a safe, transparent, and accountable AI ecosystem in healthcare.
AI in health and social care - Why regulation is crucial
The development of AI in healthcare is progressing rapidly, but it remains largely unregulated. This leaves AI-driven tools operating in an uncharted space, often deployed before they are fully scrutinised. Unlike new drugs, medical devices, and treatments, which undergo rigorous clinical trials and regulatory assessments, AI tools that influence clinical decisions often lack the same level of oversight. This regulatory gap is a fundamental risk to health and social care systems worldwide. Without appropriate governance, the deployment of unregulated AI could result in unsafe medical decisions, bias, and loss of public trust.
We must ask ourselves: ‘Why do AI systems that directly impact patient care not go through the same rigorous process as pharmaceuticals or medical devices?’
While AI advocates argue that regulation may stifle innovation, this is a flawed premise. For AI to truly support diagnoses, suggest treatments, and optimise clinical workflows, it must meet the same standards as other medical innovations. Just as we would not allow an untested drug to be given to patients, we cannot afford to deploy unvalidated AI systems into clinical environments.
The risks of unregulated AI in health and social care
AI has the potential to transform healthcare by enhancing patient care, but without proper regulatory measures in place, it poses significant risks.
Global disunity in AI governance - The need for a unified approach
Despite AI's undeniable potential, global regulation remains fragmented. Recent discussions at the Paris AI Summit (February 2025) and the Munich Security Conference (15 February 2025) highlighted the division between regional AI policy approaches. The European Union and China push for stricter controls, while the United States and the United Kingdom advocate a more flexible, innovation-driven approach. This divide creates risks, as AI developers could deploy systems in less-regulated regions, exacerbating disparities in patient safety and care quality.
We must ask: ‘Would we accept a healthcare system where drugs and treatments were regulated in some countries but not in others?’
To ensure patient safety globally, we must move towards multilateral cooperation. The UK, US, EU, and other major AI stakeholders must align on baseline safety and ethical standards, creating a unified framework that protects patients while promoting innovation.
Bridging the gap - A call for a gold-standard AI regulation framework
To ensure AI serves health systems safely and ethically, we must develop a comprehensive, gold-standard regulatory framework that prioritises patient safety, transparency, accountability, and global cooperation.
Conclusion - AI Regulation is a necessity, not an obstacle
AI has immense potential to improve healthcare, but its deployment cannot be left to the market alone. Without regulation, AI poses significant risks, including exacerbating health inequalities, reducing transparency in clinical decision-making, and introducing patient safety risks.
Just as no new drug is introduced to the market without rigorous trials and regulatory approval, no AI system impacting clinical care should be used without equivalent safeguards. The UK’s innovation-first approach must be balanced with structured oversight to ensure AI benefits patients, not just tech companies.
Now is the time to act before we face the irreversible consequences of unregulated AI, which could lead to misdiagnoses, biased treatments, and patient safety failures that may take decades to resolve.
Call to action - Driving compliance and innovation with ComplyPlus™
As the development and integration of AI in healthcare continue to evolve, the need for robust regulatory frameworks becomes increasingly clear. The development of ComplyPlus™ has been driven by a commitment to ensuring healthcare providers meet the highest compliance standards while embracing innovation.
About the author
Dr Richard Dune
With over 25 years of experience, Dr Richard Dune has a rich background in the NHS, the private sector, academia, and research settings. His forte lies in clinical R&D, advancing healthcare tech, workforce development, and governance. His leadership ensures that regulatory compliance and innovation align seamlessly.

Contact us
Complete the form below to start your ComplyPlus™ trial and transform your regulatory compliance solutions.