In the world of healthcare, AI is said to be outperforming humans in many areas: outshining radiologists at early detection in breast cancer screening, beating therapists at online counselling, and even outsmarting molecular biologists at identifying drug targets. Whilst it is hard to predict exactly what our healthcare system will look like in the near future, one thing I am certain of is that AI must not, and will not, replace human clinicians.
It is easy to see why so many are captivated by the idea. Healthcare systems are buckling under pressure. Waiting lists are growing. Workforce shortages are acute. In the face of these challenges, AI is being presented as a kind of miracle cure. It offers speed, efficiency, and scale. It never tires or needs a break. To an overstretched system, the promise of machines stepping in to fill the gaps is understandably appealing.
But I believe that an overreliance on AI in healthcare would be a profound mistake. The risk is that in our enthusiasm to embrace these technologies, we forget that healthcare is not a system of processes, but a system of people.
AI systems, no matter how advanced, remain limited by the data they are trained on. And healthcare data is rarely as clean, comprehensive, or unbiased as we would like to believe. Training a model on millions of data points might sound impressive, but if that data encodes historic biases, contains gaps, or underrepresents certain populations, the model simply reinforces those problems at scale.
In a widely cited US study, an algorithm used to allocate healthcare resources underestimated the level of care Black patients needed, because it was trained to use past healthcare spending as a proxy for patient need. Since Black patients historically received less care, the AI learned to perpetuate that inequity, exacerbating existing disparities rather than fixing them. Simply put, an algorithm that works beautifully in theory can fail catastrophically in the messy complexity of real-life medicine.
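To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch. It is not the study's actual model or data; every variable, name and number is illustrative. It shows how a model trained to predict spending can systematically under-select patients from a group whose historical access to care was lower, even when their underlying clinical need is identical:

```python
# Toy illustration of proxy-label bias (assumed setup, not the real study).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with identical underlying clinical need.
need = rng.normal(5.0, 1.0, n)
group_b = rng.integers(0, 2, n).astype(bool)

# Group B historically accesses roughly 30% less care for the same need,
# so both its prior utilisation and its spending are lower.
access = np.where(group_b, 0.7, 1.0)
prior_visits = need * access + rng.normal(0, 0.3, n)
spending = 1000 * need * access + rng.normal(0, 200, n)

# The "risk" model predicts spending from a utilisation feature alone.
model = LinearRegression().fit(prior_visits.reshape(-1, 1), spending)
risk_score = model.predict(prior_visits.reshape(-1, 1))

# Offer an extra-care programme to the top 10% of risk scores, then compare
# group B's share of the truly neediest 10% with its share of those selected.
top_decile_by_need = np.argsort(-need)[: n // 10]
selected_by_score = risk_score >= np.quantile(risk_score, 0.9)
print("Group B share of the neediest 10%:     ", round(group_b[top_decile_by_need].mean(), 2))
print("Group B share selected by the AI score:", round(group_b[selected_by_score].mean(), 2))
# Group B makes up about half of the neediest patients but a far smaller
# share of those the spending-based score flags for extra care.
```

In this toy setup the two groups are equally sick, yet ranking patients by a spending-based score allocates far less extra care to the group with poorer historical access, which is the pattern the study reported.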
In healthcare, the consequences of error are not simply inconvenient. They are dangerous, sometimes fatal. When a social media app makes an error, it leads to a poor user experience. When an AI model makes a clinical error, it risks a patient’s health, safety, or even life. In medicine there is no such thing as an acceptable failure rate when real lives are involved; you need to be 100% sure, 100% of the time.
The real danger is not that AI will fail entirely, but that it will succeed just enough to lull us into complacency. One of the most concerning phenomena I see emerging is automation bias. As clinicians become more accustomed to AI outputs, there is a growing risk that they may defer too readily to the machine’s suggestion, even when their clinical instincts suggest otherwise. Once a clinician begins to doubt their own judgement in favour of machine recommendations, we risk undermining the critical thinking and professional responsibility that are the bedrock of safe care.
AI Training is Critical
To mitigate this risk, it is essential that AI training becomes an integral part of clinical education. Just as medical professionals are trained to interpret lab tests and imaging results, they must also be trained to critically evaluate AI-generated outputs. This includes understanding how algorithms work, recognising potential biases, and knowing when to question or override AI recommendations. Equipping clinicians with AI literacy will empower them to use these tools safely, enhancing rather than eroding their own clinical judgement and responsibility.
I also worry that patients will increasingly feel shut out of their own care. Already we hear stories of patients unable to reach a human being, left to navigate layers of automated triage systems. This risks turning healthcare into a series of transactions, stripping out the human touch points that get to the heart of what people actually need when they are afraid or in pain. If we are not careful, the very technology meant to increase access could end up undermining the patient experience entirely.
This concern is especially acute when we think about vulnerable patient groups such as children. Young patients often experience fear or anxiety when entering healthcare settings and meeting unfamiliar professionals. No robot or algorithm can comfort a frightened child or establish the trust necessary to facilitate care. Human empathy is not just preferable in these moments; it is essential for both effective treatment and the child’s emotional well-being. This applies equally in fields such as obstetrics, psychiatry and palliative care, where nuanced human communication and reassurance are integral to patient outcomes.
That said, in more routine or administrative interactions, well-designed AI systems are already showing promise. Not every question requires a human, and queries that need a quick, reliable process or ‘how to’ answer are well suited to chatbots. Here, AI could help alleviate administrative burdens, streamline access and even reduce waiting lists for simpler cases, provided the systems are carefully designed to mirror human-like interaction and understanding.
AI should augment humans, not replace them
I believe the best future for healthcare is one where AI and humans work in true partnership, albeit with humans leading. AI can support faster diagnosis, surface clinical insights and bring greater consistency to care. It can handle repetitive tasks, giving clinicians more time with patients. These are all powerful contributions. But none of them replace the need for human insight, empathy, and accountability.
The key to achieving this future is to design AI systems that are built to serve clinicians and patients, not replace them. That means ensuring that human oversight remains at the core of every decision. It means training AI on carefully curated, diverse, and clinically validated data rather than relying on raw volume alone. And it means training clinicians not to defer to AI, but to engage with it critically, understanding its limitations and retaining ultimate responsibility for patient care.
AI is not a threat to healthcare; it is a tool. But like any tool, its value depends entirely on how it is used. If we allow ourselves to believe that AI can substitute for humans, we will lose something essential. If instead we build systems where AI enhances human care without replacing it, we can deliver better, safer, more compassionate care than ever before. With humans leading healthcare, and AI providing support, anything is possible.
By Dr. Sonia Szamocki, CEO and Founder of 32Co