Dr ChatGPT: Why the Future of Care Depends on Clinician-AI Collaboration


“The doctor will see you now.” Except this ‘doctor’ doesn’t wear a white coat or require patients to book an appointment. It’s available 24/7, ready to immediately answer any concern the patient has. It’s also prone to ‘hallucinations’. This is Dr ChatGPT, and it could soon be the world’s most called-upon doctor. But it remains far from the answer to healthcare’s biggest challenges.

A recent survey of 2,000 UK patients revealed almost one in four (24%) already use AI for health guidance, with 30% saying they would consider using AI or social media instead of waiting to be seen by a clinician. It marks a formative moment for healthcare. Patients are becoming active participants in their care and seeking reassurance, even before they reach a clinical setting. But this self-diagnosis comes with its own inherent risks.

Rather than resisting the shift to AI, healthcare has the chance to empower people to use it responsibly and confidently. There is a real opportunity for clinicians to elevate their role as trusted experts, guiding patients through a new era of information and access.

The future lies in an active collaboration between clinicians, AI and patients working together to improve care.

Why patients are quickly turning to ChatGPT

We’ve seen trends like this before: Dr Google was the previous iteration of Dr ChatGPT. But one important shift lies in how these tools are designed to be used. A search on Google typically points users towards a range of web pages, leaving them to make sense of often complex information. By comparison, ChatGPT is built to interpret and explain information conversationally, tailoring its response to the individual’s question. This move from simply searching for answers to having them explained underpins the growing curiosity and shows why so many patients are turning to AI for health advice.

Convenience and availability are another major part of the appeal. Being able to simply type your concerns from the comfort and privacy of your home is a far easier way to receive healthcare advice than booking and attending an appointment. Younger generations – especially Gen Z, who have lived a far greater portion of their lives online than previous cohorts – are instinctively turning to AI. According to the survey, 34% of 16-25-year-olds use ChatGPT for healthcare advice, with 30% tuning into TikTok, illustrating these platforms’ firm place in the cultural mainstream.

Above all, the rising use of AI is a natural response to a system under pressure. Patient waiting times remain a critical problem and, as digital transformation lags behind their expectations, people are seeking out ways to gain more control of their care. Yet this trend also carries risks and, if not handled correctly, could worsen health outcomes.

The risks of AI reliance and misinformation

AI models like ChatGPT are prone to hallucinating and generating false answers. A recent Guardian investigation found that some of Google’s AI summaries, which pull information in much the same way as ChatGPT, “served up inaccurate health information, putting users at risk of harm”. A patient with liver failure, for example, could be misled into thinking their recent liver function test was normal – illustrating the serious risks of AI hallucinations in healthcare.

Another pitfall of AI models is that they can reinforce a patient’s existing beliefs. Given a third of UK citizens already use AI for emotional support, this poses a growing risk. A patient whose assumptions are simply affirmed might delay critical care or misinterpret symptoms, causing unnecessary anxiety.

With all this in mind, it would be entirely understandable for the healthcare sector to push back against patients using AI in place of a doctor. But attempting to prevent this behaviour is unrealistic and, arguably, shouldn’t be the goal.

AI is an opportunity, not a problem (with the right mindset)

Even if patients are receiving ongoing medical support, curiosity, anxiety or convenience will attract many to AI platforms, and their use is only set to grow. As we negotiate a future where digital and human care intersect, healthcare professionals need to learn to coexist with AI tools. It’s about navigating this new reality safely and responsibly.

Clinicians should see AI as a chance to enhance their role as trusted experts and nurture an open dialogue with the patients they treat. The use of AI and social media doesn’t mean people won’t also go to a doctor. What they might do, however, is come to appointments with AI-generated advice. Rather than dismissing these digital findings out of hand, clinicians can use them to engage patients in a focused discussion about their questions and the information they received, turning curiosity into meaningful guidance.

Clinicians are well positioned to explain how generative AI models produce information and how these answers can lack clinical judgement and contextual awareness. By reviewing suggestions collaboratively, for instance, they can help patients understand their AI-generated information in the right clinical context – showing patients what is reliable, what is potentially harmful and the reasoning behind it. In creating this dialogue, the clinician can tackle misinformation and gain a deeper knowledge of their patient’s true worries.

Studies show that patients who feel listened to and respected are more likely to share openly, follow medical advice and stay actively engaged in their care. Attentive and empathetic responses from clinicians reinforce trust and help patients navigate their care safely and effectively.

A collaborative future of care

Ultimately, clinicians need to see AI as a key component of their dialogue with patients in the room, not as a competitor. Their work is already grounded in context and evidence. With the rapid growth in AI and social media advice, they now need to translate their patients’ digital findings into responsible, professionally guided information.

By Christoph Lippuner, co-founder and CEO, Semble