Artificial intelligence (AI) is revolutionizing the way clinicians make decisions about patient care. But the algorithms that power health care AI may encode bias against underrepresented communities and thus amplify existing racial inequality in medicine, according to a growing body of evidence.
To address this rapidly growing problem, the Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) recently convened a diverse panel of experts, co-chaired by Lucila Ohno-Machado, MD, PhD, MBA, Waldemar von Zedtwitz Professor of Medicine and deputy dean for biomedical informatics at Yale School of Medicine.
The panel identified core guiding principles for eliminating algorithmic bias. Its work is aligned with President Biden’s “Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,” issued in February 2023. The panel published its conceptual framework in JAMA Network Open on December 15.
“Many health care algorithms are data-driven, but if the data aren’t representative of the full population, it can create biases against those who are less represented,” says Ohno-Machado. The biases arise from incorrect assumptions about particular patient populations and can result in inappropriate care. “As the use of new AI techniques grows and grows, it will be important to watch out for these biases to make sure we do no harm to specific groups while advancing health for others. We need to develop strategies for AI to advance health for all.”
Health care algorithms are mathematical models that support clinicians and administrators in making decisions about patient care. But biased AI is already harming minoritized communities. Experts have identified numerous biased algorithms that require racial or ethnic minorities to be considerably more ill than their white counterparts to receive the same diagnosis, treatment, or resources. These include models across a wide range of specialties, from cardiac surgery to kidney transplantation.
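One widely cited example of this pattern, in the kidney care domain the article mentions, is the 2009 CKD-EPI equation for estimated kidney function (eGFR), which multiplied the result for patients recorded as Black by a fixed race coefficient; a race-free revision replaced it in 2021. The sketch below, an illustrative implementation not drawn from the panel's report, shows how identical lab values can land on opposite sides of a clinical threshold depending only on the race entered:

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """2009 CKD-EPI estimated GFR in mL/min/1.73 m^2 (superseded in 2021)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient, removed in the 2021 revision
    return egfr

# Same patient, same serum creatinine -- only the recorded race differs.
# An eGFR below 60 indicates stage 3 chronic kidney disease; the race
# coefficient can push the estimate above that threshold, delaying care.
scr, age = 1.3, 60
without_coeff = egfr_ckd_epi_2009(scr, age, female=False, black=False)
with_coeff = egfr_ckd_epi_2009(scr, age, female=False, black=True)
print(f"eGFR without coefficient: {without_coeff:.1f}, with: {with_coeff:.1f}")
```

For this hypothetical patient the race-adjusted estimate exceeds 60 while the unadjusted one does not, so the same person could be classified as having or not having stage 3 kidney disease based solely on a race input.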
Panel identifies five key principles for mitigating algorithmic bias
The panel, which included nine stakeholders from diverse backgrounds, analyzed existing evidence and also received feedback from thought leaders and community representatives. Mitigating algorithmic bias, the panel determined, must take place across all stages of an algorithm’s life cycle. The experts defined this life cycle in five stages:
- Identification of the problem that the algorithm will address
- Selection and management of data to be used by the algorithm
- Development, training, and validation of the algorithm
- Deployment of the algorithm
- Ongoing evaluation of performance and outcomes of the algorithm
The panel then distilled its findings into five guiding principles for preventing algorithmic bias:
- Promote health and health care equity during all phases of the health care algorithm life cycle
- Ensure that health care algorithms and their use are transparent and explainable
- Authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trust
- Explicitly identify health care algorithmic fairness issues and tradeoffs
- Ensure accountability for equity and fairness in outcomes from health care algorithms
As clinicians adopt more AI techniques, they have an obligation to take steps to eliminate biases that harm their underrepresented patients, says Ohno-Machado. “Health care workers should be confident that the algorithms they use are fair and promote health equity.”
Ohno-Machado and her colleagues are now planning to host workshops and training sessions on the ethical use of artificial intelligence to help shape the future of the technology.
Marshall Chin, MD, Richard Parrillo Family Distinguished Service Professor of Health Care Ethics at the University of Chicago, was co-chair of the panel.