Artificial Intelligence (AI) in healthcare is no longer a distant promise; it is already here. From documentation support to diagnostic assistance, AI tools are being rolled out across clinics, hospitals, and health systems. AI is being hailed as a solution to some of healthcare’s most persistent challenges, such as clinician burnout and administrative overload. But while the technology is advancing rapidly, the systems around it are not keeping pace: governance, training, and trust all lag behind.
The Promise and the Pressure
In my day-to-day clinical practice, I’ve seen how AI can reduce administrative burden and help doctors reclaim time for patient care. But I’ve also seen how uncertainty and mixed perceptions around its use can lead to hesitation, inconsistency, and missed opportunities. If we want AI to truly support clinicians and improve patient care, we must focus not just on what the technology can do, but on how we train and educate the people who use it. The first step is to create a robust framework that clinicians can trust.
The Readiness Gap
Elsevier’s Clinician of the Future 2025 report reveals a clear gap between interest in AI and readiness to use it effectively. In the UK, only 34% of clinicians currently use AI in practice, and of those, 96% rely on generalist tools like ChatGPT. Just 17% have received formal training, and only 27% trust their institution’s governance frameworks.
This is a critical gap and a call to action for healthcare leaders. Pulling generalist tools into clinical workflows creates avoidable risk: unclear data provenance, variable output quality, limited explainability, and uncertain handling of sensitive data. These tools often lack transparency around how their outputs are generated, and without clinical validation they can introduce bias that is difficult to detect but easy to act on. In environments where decisions directly affect patient outcomes, such gaps in transparency and validation are more than technical oversights; they represent a systemic shortcoming. In the absence of clear rules and governance, clinicians face uncertainty and risk when applying AI in practice, often without the support or guidance they need to do so safely.
To build trust and safeguard patients, clinical leaders should champion clinician-led governance, publish an approved set of tools that meet standards for transparency, privacy, and role-based access, and provide short, scenario-based training on safe use and verification. Clinical AI tools must be seen as enablers of care, not as replacements for human judgement, empathy, or clinical expertise.
Patients Are Changing Too
Clinicians aren’t the only ones adapting to the digital shift; patients are evolving rapidly too. According to the Clinician of the Future 2025 report, 35% of UK clinicians believe most patients will self-diagnose with AI in the next 2–3 years, and 54% say misinformation is already undermining treatment adherence. We now operate in an “always-on” information environment, where the gap between concern and self-diagnosis has narrowed dramatically. Increasingly, a portion of the consultation is spent disentangling online narratives before clinical reasoning can begin.
This shift is reshaping the clinician-patient dynamic. The role of the clinician is no longer just to diagnose and treat, but to interpret, clarify, and guide. It’s about helping patients distinguish between credible insights and misleading content, without dismissing their effort to be informed. When done well, this can foster stronger engagement and shared decision-making. But it requires time, empathy, and a new kind of digital fluency, one that must be supported through structured training and responsible implementation of AI tools.
What Responsible, Scalable AI Looks Like
To be truly responsible and scalable, AI in healthcare must be built on three pillars:
- Clinical validation: Tools must be grounded in peer-reviewed evidence and nationally accepted guidelines. Transparency matters: the 2025 Clinician of the Future data shows that 64% of UK clinicians say their trust in an AI tool would increase if it automatically cited references, and 81% say confidentiality of input data is essential. Clinically validated AI tools must therefore provide transparent citations and draw on trusted, evidence-based content to build clinician confidence in the technology. Access to cited, evidence-based responses at the point of care enables clinicians to make informed decisions that help improve patient care; ClinicalKey AI is one example of this today.
- Transparent AI governance: Institutions must provide clear standards for safe use, including audit trails, role-based access, and oversight mechanisms. Governance should be clinician-led and grounded in real-world practice.
- Continuous training and digital literacy: Training must be embedded into clinical education and tailored to real-world workflows. Without structured support, many clinicians rely on intuition, which can lead to inconsistent and inappropriate use.
Responsible AI isn’t just about the technology; it’s about the systems that support it. Sector-wide collaboration is essential: progress can only be made if it is shaped together.
Turning Momentum into Meaningful Change
AI’s potential in healthcare is real. But translating that potential into safe, effective clinical practice requires more than algorithms; it demands investment in people, policy, and trust. The UK healthcare system is at a tipping point. Insights from the Clinician of the Future report show that 71% of clinicians are seeing more patients than two years ago, and 76% say they’re overburdened with admin. AI offers a path to relief, but only if implemented responsibly.
The government’s £10 billion investment in digital transformation, including the Federated Data Platform and NHS App expansion, aligns with ambitions set out in the NHS 10 Year Health Plan: delivering faster diagnosis and treatment, and building a digital-first NHS that works for both patients and staff. Yet the path to realising these ambitions is not without obstacles. Ironically, while fragmented technology and poorly implemented electronic medical record (EMR) systems have often taken clinicians away from patients, AI now has the potential to restore the very human touch that was lost. Without clinician-led governance, validated tools, and structured AI training, however, these investments risk falling short. Digital tools must free clinicians to focus on patients and bring much-needed human connection back to consultations, rather than adding further complexity. By working together, the healthcare community can shape progress that goes further, happens faster, and benefits all.
By Dr Rahul Goyal, General Practitioner and Clinical Executive at Elsevier
