Health care is a tightly regulated industry, so IT professionals hoping to implement artificial intelligence must do so carefully, prioritizing the protection of patient privacy and data. Deployed responsibly, AI can even strengthen regulatory compliance. What strategies should health care IT professionals use to implement it successfully?
Select Commercial Tools Aligned With Privacy Laws
Using an off-the-shelf product to enhance regulatory compliance is often the easiest initial approach, especially if entities lack the resources for customized builds. When researching possible solutions, IT teams should look for offerings that comply with the Health Insurance Portability and Accountability Act (HIPAA), the federal law restricting the release of medical information.
AI tools featuring relevant built-in compliance functionality are good choices because they show developers have thought carefully about broadening their products' applicability to regulated industries. Such was the case for a popular meeting notes app that recently became compliant with the health care industry's privacy laws. That enhancement means the product uses advanced encryption protocols and secure data storage to safeguard sensitive medical information. It also includes rigorous access controls, ensuring only authorized individuals can retrieve confidential content.
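As a simple illustration of the access-control principle such tools enforce, here is a minimal role-based check. All usernames, roles and record types below are hypothetical, not drawn from any specific product:

```python
# Minimal sketch of role-based access control for sensitive notes.
# Roles and users are invented for illustration only.

AUTHORIZED_ROLES = {"clinician", "compliance_officer"}

USERS = {
    "dr_lee": "clinician",   # authorized role
    "j_smith": "billing",    # unauthorized role
}

def can_view_note(username: str) -> bool:
    """Return True only when the user's role is authorized to see notes."""
    role = USERS.get(username)  # unknown users get None, which fails the check
    return role in AUTHORIZED_ROLES

print(can_view_note("dr_lee"))   # True
print(can_view_note("j_smith"))  # False
```

Real products layer encryption, authentication and audit logging on top of a check like this, but the core idea is the same: every retrieval passes through an explicit authorization gate.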
IT teams should also monitor the tool's performance regularly and conduct periodic audits to identify compliance and other improvement opportunities. These commitments help decision-makers in health care organizations use AI confidently without sacrificing privacy.
Maintain a Goal-Oriented Mindset
The wide availability of AI tools can tempt some people to integrate them before developing concrete use cases. That approach poses regulatory and compliance risks because parties may experiment with AI in various ways without considering potentially adverse outcomes.
Assessing existing challenges is a good starting point. Quality management leaders working in hospitals must understand thousands of performance elements within hundreds of standards and determine whether their facilities meet the minimum requirements. AI can improve traditionally manual processes by offering real-time analytics that show progress and areas for improvement. Some platforms provide actionable insights, giving authorities the information needed to drive lasting results while improving safety, productivity and other desirable outcomes.
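The kind of real-time analytics described above can be sketched as a simple per-standard compliance report. The standard names, scores and threshold here are invented for illustration, not taken from any real accreditation scheme:

```python
# Hedged sketch: computing compliance rates per standard from audit checks.
# Each list records whether individual performance elements were met (1) or not (0).

audit_results = {
    "hand_hygiene": [1, 1, 0, 1, 1],
    "medication_storage": [1, 0, 0, 1],
    "record_retention": [1, 1, 1, 1],
}

THRESHOLD = 0.8  # hypothetical minimum compliance rate

def compliance_report(results, threshold):
    """Return each standard's compliance rate and whether it meets the threshold."""
    report = {}
    for standard, checks in results.items():
        rate = sum(checks) / len(checks)
        report[standard] = (rate, rate >= threshold)
    return report

for standard, (rate, passing) in compliance_report(audit_results, THRESHOLD).items():
    status = "OK" if passing else "NEEDS IMPROVEMENT"
    print(f"{standard}: {rate:.0%} {status}")
```

A dashboard built on this logic would surface the "needs improvement" standards continuously instead of waiting for a periodic manual review.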
IT teams should speak directly to the parties who will use AI, asking for feedback about their biggest challenges, most time-consuming processes and other shortcomings for the technology to target. The responses can uncover feasible options to address those obstacles, meet the stated goals and maintain compliance. Training staff in data management practices also enhances implementation by setting expectations and equipping workers to achieve them.
Stay Abreast of New Tools That Meet Regional Regulations
AI’s popularity and promise have spurred development teams to create products for highly regulated industries, including health care. Remaining aware of relevant progress helps IT professionals know which companies may be worth exploring soon, even if the innovations are not yet widely available.
The European Union categorizes medical devices into four risk classes, including some items with AI capabilities. Its regulators recently approved a decision-support tool that uses large language models to help clinicians reach appropriate patient care decisions. It is the first product of its kind to receive risk class IIb certification, which designates items carrying medium-high risk.
The product processes questions and other naturally phrased input, returning user-friendly responses with corresponding references. Its developers built it in accordance with all regulatory requirements, and it draws on validated specialist medical sources rather than freely available content, increasing its trustworthiness.
Follow the Regulator’s Approach to Tech Adoption
Good examples of how to use AI responsibly can come from the regulators themselves. The United States Food and Drug Administration (FDA) recently announced an innovation that many of its employees use in daily tasks. It is a large language model AI tool that assists users with reading, writing and summarizing information. Leaders and technology experts across the agency collaborated to build the resource and demonstrate its capabilities. They also incorporated numerous safeguards to protect data.
Developers built it within a secure, government-specific cloud. They ensured it would keep information inside the agency while allowing users to access it as needed. The tool’s training data does not include content submitted by parties working in regulated industries. That approach protects sensitive data, ensuring only trained FDA workers handle it.
Hospitals’ needs differ from those of regulatory agencies, but this example offers practical best practices that can be adapted for new applications. Restricting where data is stored, who can access it and which content trains the AI algorithms significantly reduces the likelihood of unintended consequences by setting clear operating parameters for the tools.
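Those restrictions can be treated as an explicit policy that every deployment configuration is validated against before launch. The field names, allowed values and forbidden sources below are hypothetical placeholders:

```python
# Sketch: validating an AI deployment configuration against restriction
# policies on storage, access and training data. All names are invented.

deployment = {
    "storage_region": "internal-cloud",
    "access_roles": ["clinician", "it_admin"],
    "training_sources": ["validated_internal_corpus"],
}

ALLOWED_STORAGE = {"internal-cloud"}
ALLOWED_ROLES = {"clinician", "it_admin", "compliance_officer"}
FORBIDDEN_SOURCES = {"public_web", "regulated_submissions"}

def validate(config):
    """Return a list of policy violations; an empty list means compliant."""
    issues = []
    if config["storage_region"] not in ALLOWED_STORAGE:
        issues.append("data stored outside the approved environment")
    for role in config["access_roles"]:
        if role not in ALLOWED_ROLES:
            issues.append(f"unapproved access role: {role}")
    for source in config["training_sources"]:
        if source in FORBIDDEN_SOURCES:
            issues.append(f"forbidden training source: {source}")
    return issues

print(validate(deployment))  # [] when the configuration is compliant
```

Running a check like this in review pipelines makes the operating parameters enforceable rather than aspirational.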
Pursue Continuous Improvement
Even well-intentioned IT professionals bringing AI into hospitals to enhance regulatory compliance may encounter pitfalls. Learning from those experiences keeps teams motivated by showing what works and which characteristics need improvement.
IT teams should use data-driven approaches while following the above suggestions. Choosing metrics to monitor before, during and after AI rollouts can help decision-makers determine if they are on track to meet necessary compliance requirements and desired outcomes or need to make targeted adjustments. Feedback from administrators, clinicians, patients and other stakeholders also reveals what people like and which features or processes frustrate or confuse them.
By Zac Amos, ReHack

