AI-driven technology is transforming how healthcare is delivered, opening up new possibilities for clinicians – from improving the accuracy and speed of diagnosis, to enabling faster and more effective treatments, to bringing life-saving drugs to market sooner.
As the frontline operators of AI, clinicians play a significant role in its acceptance and implementation. It is critical that they are engaged and on board if technology adoption is to be successful.
Clinicians’ attitudes towards AI are improving – whereas previously almost half were anxious about AI, they now have a greater understanding of the opportunities presented by the technology, and positivity is growing. However, reservations persist, and it is important that both healthcare leaders and AI vendors take heed of these concerns – and take steps to resolve them.
Time constraints are a barrier to AI take-up
The growing shortfall of healthcare workers coupled with a post-pandemic excess of patients seeking care means that clinicians are facing more demands on their time than ever before.
It’s almost a Catch-22 situation – clinicians are in desperate need of new technologies to ease their overwhelming day-to-day workloads, but find themselves so pressed for time that engaging with and implementing these technologies often feels like an impossible task. When it comes to technologies as powerful and potentially daunting as AI, it’s not surprising that many clinicians are resistant to this change.
This resistance has been further exacerbated by US healthcare’s lengthy history of tech tools being created and implemented to serve administrative objectives over and above clinical practicalities and needs.
Successful tech implementation is impossible without the buy-in of its users – this is true in any industry, and even more so in a highly pressured environment like healthcare. For AI technologies to succeed, they need to put frontline clinical staff front and center. These technologies must be designed with an accurate and considered understanding of how clinicians work and the challenges they face, while also giving clinicians the opportunity to shape how the technology is improved in future.
Machines and health equity
There are also understandable concerns amongst clinicians that AI will perpetuate – or even amplify – the unconscious bias in healthcare, which many believe is driving poorer healthcare access and outcomes for disadvantaged groups.
Bias can certainly emerge through the data sets used to train an AI system’s underlying model: if the data is skewed, the model will be skewed too. For example, if older Caucasian populations make up the majority of collected data sets, the resulting algorithm may not generalize as well to younger people or to people from minority ethnic groups – and the AI system will therefore not work as effectively for them. It’s crucial, therefore, that training data reflects the diversity of the populations the system will serve. Only then can AI improve health equity rather than widen existing inequalities.
It’s worth acknowledging that purely human-run clinical practice is also biased. The difference, however, is that in AI systems, bias can be detected and corrected much more easily.
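The point about detectability can be made concrete. A common approach is a subgroup audit: computing a model’s performance separately for each demographic group and flagging any gap for review. The sketch below is purely illustrative – the group names and data are hypothetical, not from any real system – but it shows how mechanically such a check can be run once predictions are logged against demographically labelled test data.

```python
# Illustrative sketch: auditing a diagnostic model's sensitivity
# (true-positive rate) per demographic subgroup. All data here is
# hypothetical; in practice `records` would come from model
# predictions on a held-out, demographically labelled test set.
from collections import defaultdict

def sensitivity_by_group(records):
    """Return per-group sensitivity from (group, actual, predicted) triples."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positive cases per group
    for group, actual, predicted in records:
        if actual:  # only actual positives count toward sensitivity
            pos[group] += 1
            if predicted:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Hypothetical audit data: (subgroup, has_disease, model_flagged)
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", True, False), ("group_b", True, False),
]

rates = sensitivity_by_group(records)
print(rates)  # group_a: 0.75 vs group_b: 0.25 -- a gap worth investigating
```

Because the audit is just arithmetic over logged predictions, it can be rerun on every retrained model – a kind of systematic, repeatable check that is far harder to apply to individual human decision making.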
AI as a collaborator, not a replacement
Although a consensus is emerging that AI is not designed to replace humans, but rather to assist them in the clinical process, there remain significant numbers of clinicians who are intimidated by the possibility that AI’s growing abilities could render their jobs redundant.
While understandable, this fear is largely unfounded.
Although it’s accurate to say that humans can’t compete with machines in terms of scale or speed of analysis, there are nevertheless limitations to AI’s capabilities. For example, AI can’t deal with information that falls outside of a recognizable pattern, and therefore always needs human input. Face-to-face patient interaction is another area that requires a living, breathing human – and ethically, this should always be the case.
In screening processes – for example in areas such as radiology and histopathology – robust and mature AI technology can act as the primary observer thanks to deep learning models, built on extensive data sets, that enable the technology to pick up on signs that the clinician may otherwise have missed. The tool can then direct clinicians to where they should focus closer attention.
A recent piece of ground-breaking research by Moorfields Eye Hospital in the UK, DeepMind Health and UCL, published in Nature Medicine, showed that an AI system performed as well as world-leading experts in detecting serious eye conditions. The system made the correct referral decision across more than 50 eye diseases with 94% accuracy, showing promise for the technology to help clinicians detect potentially sight-threatening eye disease earlier in the disease process and to prioritize patients who urgently require treatment.
By processing large amounts of data at scale, AI provides clinicians with more information, supporting more accurate decision making while also increasing productivity. This frees clinicians to focus on higher-value work and to spend more time in direct patient care.
Closing the care gap
Health systems are facing historic staffing shortages and rising operational costs, while also contending with an increasing influx of patients. These unprecedented pressures are pushing health systems to their limits, challenging their ability to provide care access to the communities they serve.
AI holds significant potential to close this gap, by increasing clinician productivity and opening up greater access to care. But the full potential of AI will be impossible to realize without the industry and AI developers alike working to better understand and address the concerns of clinicians – the frontline implementers of this technology.
By Chris Tackaberry, co-founder and CEO of Clinithink