One doesn’t have to look far to find dubiously ethical applications of artificial intelligence (AI). Recently, documentarians have used AI to generate artificial soundbites in the voice of a deceased biopic subject. Social media apps have applied AI-powered beauty filters to users’ faces without their consent.
The stakes of AI ethics are especially high in the healthcare sector. Medical professionals, who are bound by codes of ethics and federal regulations, must exercise due diligence when processing patient data; applying that data carelessly risks causing real harm.
A better understanding of AI's potential pitfalls, along with new standards for its ethical use in healthcare, is helping professionals work out how best to apply the technology. These emerging guidelines can help practitioners leverage AI while minimizing potential risk.
The Ethical Challenges of AI and Big Data in Healthcare
Medical researchers and healthcare providers face two major challenges related to AI: the replication of bias in algorithms and the ethical use of patient medical data in training.
Bias is one of the most significant challenges AI researchers face. Without the proper precautions, well-trained algorithms can reproduce bias that already exists in the world. For example, algorithms trained on gender-imbalanced chest X-ray data tend to be worse at identifying the signs of disease in the underrepresented sex. A skin cancer-detecting algorithm that relies primarily on data from light-skinned individuals may be ineffective at detecting anomalies in those with deeper pigment.
Healthcare algorithms are becoming increasingly popular due to the real-world benefits they can offer. AI-powered CT scan analysis can tell radiologists which scans to review next based on the likelihood that a scan contains evidence of diseased tissue. This helps slash wait times and improve diagnostic accuracy.
Preventive algorithms like these can reduce the overall cost of healthcare while improving the quality of a provider's medical services. This is especially important as expenses continue to rise and employers with fully funded health plans face premiums that grow steadily every year.
As they become more commonplace, algorithms may be likely to encode bias from existing datasets into healthcare tools that appear objective. More diverse information and improved cleaning processes can help researchers manage these tendencies. If datasets are inclusive, the algorithms trained on them are less likely to reproduce existing healthcare biases. Effective cleaning and testing processes can help researchers catch biased datasets or algorithmic discrimination.
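One concrete form this testing can take is a per-subgroup accuracy audit before a model is deployed. The sketch below is illustrative only: the group labels, records and predictions are hypothetical, not drawn from any real study, and a production audit would look at many more metrics than accuracy.

```python
# Hypothetical sketch: auditing a diagnostic model's accuracy per demographic
# subgroup, so imbalances surface before deployment. All data is illustrative.

from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of (group, true_label, predicted_label) tuples.
    Returns the accuracy for each group separately."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth == pred:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative result: the model performs worse on the underrepresented group.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0),
]
print(subgroup_accuracy(records))  # {'group_a': 1.0, 'group_b': 0.5}
```

A large gap between groups, as in this toy output, is exactly the kind of signal that should prompt researchers to rebalance or expand the training data before the tool reaches patients.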
The data that enables these AI algorithms can also create challenges for researchers. Privacy and data protection expert Emerald de Leeuw has argued that companies will need to be ethical with their data to survive.
Informed consent and the effective de-identification of patient data will likely be necessary if researchers want to ethically gather and analyze this information.
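In its simplest form, de-identification means stripping direct identifiers from a record before it enters a research dataset. The sketch below is a minimal illustration with hypothetical field names; real de-identification (for example, under the HIPAA Safe Harbor standard) covers many more identifier types and typically also generalizes quasi-identifiers such as exact dates and locations.

```python
# Hypothetical sketch of basic de-identification: removing direct identifiers
# from a patient record and generalizing the date of birth to a birth year.
# Field names are illustrative, not a real schema or a complete HIPAA list.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def deidentify(record):
    """Return a copy of the record with direct identifiers dropped and the
    date of birth coarsened to a year."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "date_of_birth" in clean:
        clean["birth_year"] = clean.pop("date_of_birth")[:4]
    return clean

record = {"name": "Jane Doe", "ssn": "000-00-0000",
          "date_of_birth": "1980-05-17", "diagnosis": "asthma"}
print(deidentify(record))  # {'diagnosis': 'asthma', 'birth_year': '1980'}
```

Even a simple pass like this shows the trade-off researchers face: the more identifying detail is removed or coarsened, the safer the data, but also the less precise the analysis it can support.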
Potential Regulations and Standards for Ethical Artificial Intelligence
Specific ethical and regulatory frameworks can help researchers translate these goals into real-world practice. For example, some researchers have argued that medical professionals who use AI should also become artificial intelligence experts. Knowledge of important AI drawbacks, such as the black-box algorithm problem, algorithmic bias and automation bias, can help practitioners understand when they should use their own judgment to second-guess results.
Others have suggested that informed consent around AI should extend beyond data collection. On this view, patients should be told when AI will play a role in their care before being subject to a healthcare algorithm or intelligent medical equipment, such as an AI-enabled surgical robot.
At the same time, major organizations are beginning to create formal guidelines that may help researchers develop ethical artificial intelligence in healthcare. In June 2021, the World Health Organization published a report on AI in healthcare that included six guiding principles for designing and implementing medical artificial intelligence.
These principles, which include “protecting human autonomy,” “promoting human well-being and safety and the public interest,” and “ensuring inclusiveness and equity,” could help provide a rough foundation for more specific future guidelines for ethical healthcare AI.
New Ethical Guidelines May Structure the Use of Healthcare AI
Healthcare AI algorithms offer potential for medical providers and researchers. However, ethical challenges may make these algorithms much riskier in practice. Without the proper precautions, algorithms could encode existing biases into apparently objective tools or allow practitioners to leverage patients’ medical data without their explicit consent.
Researchers and AI experts are beginning to lay the groundwork for future ethical guidelines. These will help practitioners apply AI effectively in healthcare while minimizing its risk of harm.
They may also help practitioners better understand the potential challenges that the use of AI tools may pose — and how to communicate those risks to patients. That way, artificial intelligence in healthcare will better benefit everyone involved.