Site icon

The Cybersecurity Imperative: Protecting Healthcare Data in the Age of GenAI

The Cybersecurity Imperative - Protecting Healthcare Data in the Age of GenAI

Image | AdobeStock.com

Generative AI (GenAI) is transforming multiple areas of healthcare, including patient care, virtual health assistants, and internal decision-making processes. This shift is largely welcomed, with 54% of the UK public and 76% of NHS staff supporting the use of AI in patient care. An even higher number of public and NHS staff approved the usage of GenAI for administrative applications.

However, this enthusiasm comes with significant security challenges. Immersive Labs’ recent study found that GenAI is highly vulnerable to manipulation and exploitation, even from non-technical people. In fact, 88% of participants in the research were able to manipulate GenAI bots to release sensitive information.

If healthcare organisations want to implement GenAI, they must enhance their understanding of the data these systems possess and the tactics that could be used to manipulate AI bots and implement the right safeguards. Addressing these vulnerabilities is crucial to protect sensitive patient data and maintain trust in these emerging technologies.

Tricks and tactics to outsmart GenAI bots

As GenAI becomes more integrated into healthcare, it’s essential to understand the risks associated with its use. The biggest issue that our study highlighted was that even those without a security background could easily manipulate GenAI chatbots into revealing sensitive information using straightforward but effective techniques, which significantly lowers the entry-bar for cybercriminals. In healthcare the risk is the potential exposure of patient data and other confidential information.

A technique such as role-playing is extremely effective when trying to manipulate GenAI bots into revealing sensitive information. By encouraging the bot to take on personas less concerned with confidentiality, manipulators can craft scenarios where sharing sensitive information appears justifiable.

For instance, consider a local GP with long appointment wait times and increasing patient complaints. The organisation might implement GenAI chatbots to reduce wait times and improve triage. The chatbot would ask patients about their symptoms and try to prioritise callbacks or appointments. Such chatbots would generally be trained on a vector database with transcripts of calls and online messages to base their responses on.

If this data set is not properly filtered for any PII or sensitive data before being provided to GenAI bot, attackers could exploit the inherent vulnerabilities of such systems to extract sensitive information.

An attacker could potentially ask the GenAI, “Has anyone reported symptoms of headache and loss of appetite?” If the chatbot replies, “Yes, on August 19th, Mrs. Jane Doe called with similar symptoms,” it has inadvertently disclosed private information.

This may seem like a relatively benign example, but it’s very plausible and the implications are far-reaching. Any data accessible to the GenAI system must be treated as “publicly accessible.”

Most developers will rely on prompt engineering to prevent this type of behaviour, providing explicit instructions not to reveal sensitive data, ignore specific instructions, or behave in specific ways. However, our research shows that no matter what instructions are provided, human ingenuity always wins in the end, and sensitive data can be revealed.

In addition to role-playing, manipulators often employ indirect tactics, such as dropping hints or asking leading questions, to persuade the information out of the bot. They might pose as event organisers or authoritative figures, exploiting perceived obligations or authority to elicit responses. By constructing these scenarios, manipulators effectively lower the bot’s programmed defences, making it more susceptible to revealing sensitive details.

These manipulative tactics are often subtle initially, with adversaries maintaining a neutral tone to avoid raising suspicion. However, if the bot resists, manipulators may escalate their approach, employing emotional appeals ranging from friendly persistence to more aggressive or demanding tones.

However, attackers can only exploit these weaknesses when GenAI systems are poorly designed or integrated. The manipulative techniques outlined above largely depend on prompt injection vulnerabilities and poor development practices.

Manipulating AI to extract sensitive information is a serious issue, particularly in healthcare, where it jeopardises confidential patient records. Such breaches can compromise patient privacy, disrupt business continuity, and endanger individuals’ well-being and safety.

Enhancing security for GenAI bots

Given the sophisticated methods used to exploit GenAI systems, adopting a comprehensive “defence in depth” strategy is vital for the healthcare sector. This approach involves layering multiple security measures to ensure that no single point of failure can be easily targeted.

Additional key protective steps include implementing data loss prevention (DLP) systems, enforcing strict input validation, and using context-aware filtering to detect and block manipulation attempts.

Such measures help to prevent the GenAI from being coerced into disclosing sensitive information.

Healthcare organisations should also develop clear policies governing the use of AI, crafted by a multidisciplinary team of legal, technical, information security, and compliance experts. These policies should address data privacy, security, and regulatory compliance, aligning with frameworks like GDPR or CCPA to protect sensitive data.

Additionally, establishing fail-safe mechanisms and automated shutdown procedures can limit the impact of any GenAI system anomalies. Regular data and system configuration backups are crucial for swift recovery in case of malfunctions or breaches.

Finally, a “secure by design” philosophy should be embedded throughout the GenAI development lifecycle. Adhering to security guidelines from bodies like the National Cyber Security Centre (NCSC) ensures that robust defences are built into systems from the outset rather than being added as an afterthought.

Future-proofing healthcare with GenAI in mind

As GenAI continues to transform healthcare, addressing its security vulnerabilities, particularly those related to manipulation, is crucial. Healthcare organisations must understand and guard against tactics that exploit GenAI, risking sensitive patient data and system integrity.

Implementing a “defence in depth” strategy, developing strong policies, and embedding security from the design phase are vital measures. By doing so, the healthcare sector can protect against sophisticated manipulations, ensuring that GenAI remains a secure and trusted tool for improving patient care and operational efficiency. Proactive security measures will help secure the future of digital health.

By Kev Breen, Senior Director Cyber Threat Research at Immersive Labs

Exit mobile version