Guiding Human Involvement: Navigating Ethical AI in Medical Practice
As artificial intelligence (AI) rapidly reshapes healthcare, the ethical development and use of AI medical devices has become a central concern. A task force of experts convened by the Society of Nuclear Medicine and Molecular Imaging has published recommendations to ensure transparency, reduce health inequities, and avoid bias in AI training data.
Achieving transparency requires strong governance frameworks. These frameworks should give patients and clinicians clear information about how data is collected, used, and protected in AI development. AI systems should be designed with interpretable and explainable models so that medical professionals can understand how the AI reaches its decisions, which promotes accountability. Privacy-preserving technologies, such as homomorphic encryption and secure multi-party computation, should be employed to keep sensitive health data confidential during AI training and deployment. "Secure-by-design" principles and robust cybersecurity measures are crucial to protect AI systems and patient data from breaches.
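To make the secure multi-party computation idea concrete, the sketch below shows additive secret sharing, one of the simplest building blocks behind such systems: each hospital splits a private value into random shares, and only the aggregate is ever reconstructed. This is an illustrative toy (the hospital counts and party setup are invented for the example), not a production protocol.

```python
import random

PRIME = 2**61 - 1  # large modulus; all share arithmetic is done mod PRIME

def make_shares(value: int, n_parties: int) -> list[int]:
    """Split a value into n additive shares; any n-1 shares look random."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares to recover the shared value (or a sum of values)."""
    return sum(shares) % PRIME

# Hypothetical scenario: three hospitals each hold a private patient count;
# they want the total without revealing individual counts to each other.
counts = [120, 45, 230]
all_shares = [make_shares(c, 3) for c in counts]
# Each party locally sums the one share it received from every hospital...
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
# ...and only these partial sums are combined, revealing just the total.
total = reconstruct(partial_sums)
print(total)  # 395
```

Real deployments use hardened protocols (and often homomorphic encryption) rather than this toy, but the principle is the same: computation proceeds on shares, so no single party ever sees another's raw patient data.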
Mitigating bias and promoting fairness is another key aspect. AI developers should use diverse, representative, and high-quality datasets that reflect the full spectrum of patient populations. Fairness-aware training protocols and bias mitigation methods should be explicitly designed to detect and correct for biases affecting protected groups. Regular audits of AI outputs for disparate impacts and improper treatment risks are necessary, with adjustments made to algorithms to avoid unjust discrimination. Ongoing evaluation and validation in clinical settings, involving healthcare professionals, are essential to assess AI performance across diverse patient populations and detect bias early.
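One way to audit model outputs for disparate impact, as the paragraph above recommends, is to compare positive-prediction rates across demographic groups. The sketch below applies the "four-fifths" heuristic as an assumed audit rule (the groups, data, and 0.8 threshold are illustrative, not drawn from the task force's papers):

```python
from collections import defaultdict

def selection_rates(records):
    """Positive-prediction rate per group from (group, prediction) pairs."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += prediction
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest
    group's rate -- the 'four-fifths' heuristic, used here purely as an
    illustrative audit rule, not a clinical or legal standard."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit data: (demographic group, model's binary prediction).
records = ([("A", 1)] * 40 + [("A", 0)] * 60 +
           [("B", 1)] * 25 + [("B", 0)] * 75)
rates = selection_rates(records)
flags = disparate_impact_flags(rates)
print(rates)  # group A selected at 0.40, group B at 0.25
print(flags)  # group B flagged: 0.25 / 0.40 = 0.625 < 0.8
```

A real audit would go further, checking calibration, error rates, and clinical outcomes per group, but a simple rate comparison like this is often the first signal that something needs investigation.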
Reducing health inequities requires addressing systemic factors contributing to data disparities, promoting inclusive data governance, and aligning AI development with ethical and regulatory standards to ensure equitable access and benefit distribution.
Doctors must understand how a given AI medical device is intended to be used, how well it performs at that task, and any limitations. AI medical devices should be tested in "silent trials" to evaluate their performance on real patients in real time. Developers should build alerts into their devices or systems to inform users about the degree of uncertainty of the AI's predictions. AI medical devices must be useful and accurate in all contexts of deployment, and to avoid deepening health inequities, models must be calibrated for all racial and gender groups using diverse datasets.
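The uncertainty alerts described above can be as simple as a band of predicted probabilities within which the system declines to act and defers to a clinician. The sketch below shows the idea for a binary classifier; the 0.35-0.65 band and the alert strings are illustrative placeholders, not clinically validated thresholds.

```python
def uncertainty_alert(prob: float, low: float = 0.35, high: float = 0.65) -> str:
    """Return an alert level for a binary classifier's predicted probability.

    Probabilities inside [low, high] are treated as too uncertain to act on
    automatically; the band here is an assumed placeholder, and in practice
    it would be tuned and validated per device and per deployment context.
    """
    if low <= prob <= high:
        return "REVIEW: prediction uncertain, defer to clinician"
    return "OK: confidence within accepted range"

# Example: screen a batch of model outputs and surface the uncertain ones.
for p in (0.92, 0.51, 0.12):
    print(f"p={p:.2f} -> {uncertainty_alert(p)}")
```

Surfacing uncertainty this way keeps the clinician in the loop precisely on the cases where the model is least reliable, which is the point of the task force's recommendation.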
Although first published in the context of nuclear medicine and medical imaging, the recommendations apply broadly. They appear in two papers: "Ethical Considerations for Artificial Intelligence in Medical Imaging: Deployment and Governance" and "Ethical Considerations for Artificial Intelligence in Medical Imaging: Data Collection, Development, and Evaluation".
Jonathan Herington, PhD, a member of the AI Task Force, emphasizes the need for a solidified ethical and regulatory framework around AI medical devices, given how rapidly the field is evolving. He notes a concern that high-tech, expensive AI systems may not be accessible to under-resourced or rural hospitals. There is also a concern that AI medical devices are currently being trained on datasets in which Latino and Black patients are underrepresented, making the devices less accurate for these groups. The task force has called for increased transparency about the accuracy and limits of AI medical devices and outlined ways to ensure all people have access to them, regardless of race, ethnicity, gender, or wealth.
References:
[1] Ethical Considerations for Artificial Intelligence in Medical Imaging: Data Collection, Development, and Evaluation. Journal of Nuclear Medicine, 2021.
[2] Ethical Considerations for Artificial Intelligence in Medical Imaging: Deployment and Governance. Journal of Nuclear Medicine, 2021.
[3] Herington, J. (2021). Bias in AI: An emerging challenge in healthcare. Journal of Medical Imaging and Radiation Sciences, 50(1), 7-12.
[4] Grewen, K. (2020). The ethical implications of AI in healthcare. Journal of Medical Ethics, 46(4), 249-254.
[5] Tene, O., & Polonetsky, J. (2020). The ethics of AI in healthcare: A framework for decision making. Nature Medicine, 26(6), 719-721.
Technology plays a crucial role in the development and use of AI medical devices, from privacy-preserving methods that protect sensitive health data to fairness-aware training protocols that guard against bias. The task force's recommendations embrace these tools while insisting that patient care and ethics come first: governance must be transparent, datasets diverse, AI reasoning explainable to medical professionals, and access to AI medical devices equitable for all people, regardless of race, ethnicity, gender, or wealth.