AS OPENAI’S CHATGPT, GOOGLE’S BARD, and other artificial intelligence (AI) platforms race to dominate the marketplace, industries from finance and banking to auto manufacturing and media are assessing the impact of what is arguably the most transformative development of the 21st century. When it comes to the field of medicine, the stakes are as high as they come.
Will medical AI usher in a utopia of more precise, life-saving treatments with fewer medical errors—or a dystopia of algorithm-driven medicine that sidelines doctors, undermines quality of care, and defies common sense?
To address such questions, Yale School of Medicine (YSM) experts attended a medical AI forum at Connecticut’s Capitol this summer. Representatives from the U.S. Department of Health and Human Services, the Massachusetts Institute of Technology, and Harvard’s T.H. Chan School of Public Health also participated in the event.
Speaking to Yale Medicine Magazine after the event, the YSM experts called for hard guardrails to ensure that AI medicine doesn’t veer off into dangerous directions. These cautions might include strong regulatory controls; careful review and testing of models before they are widely used; and a thorough understanding of what algorithms can and can’t do.
These experts, along with others at Yale who are on the cutting edge of medical AI, believe the promise of medical AI is enormous—but so are the potential pitfalls. These include leakage and misuse of sensitive medical data; bias inadvertently built into AI models; medical insurance discrimination; algorithms gone haywire; and black-box treatment models whose underlying reasoning no one fully understands.
“Innovation in technology is always a good thing for us to be experiencing,” said Manisha Juthani, MD, who is on leave from her YSM professorship of medicine (infectious diseases) while serving as Connecticut’s commissioner of public health. “Physicians could potentially benefit from this, and if we work together, we could potentially leverage it to be something useful. Where I have my radar up, and what I want to be aware of, is that it is used for good—and that it is shown to be better than what we do now.”
Juthani’s final thought is perhaps the biggest question mark hanging over the nascent AI revolution: Will it actually improve care?
For example, AI can already do a better job of predicting patient outcomes than most physicians, said YSM Associate Professor F. Perry Wilson, MD, MSCE, who has studied informatics and AI in medicine for nearly a decade. “Our ability to prognosticate is way better than it’s ever been before,” he said. “AI can already out-prognosticate doctors like myself.” But whether that information will actually lead to improved care is an open question. “Just because I know a patient is more likely to develop a certain type of cancer does not mean that I can prevent it,” Wilson said.
That is one of many reasons why AI models should be subject to the same level of regulatory scrutiny and approval processes that we require for new drugs and medical devices, Wilson and other experts say.
The regulation hurdle
That’s music to the ears of U.S. Sen. Richard Blumenthal, D-Conn., JD ’73, who also participated in the forum and has made regulation of fast-emerging AI technology a signature issue. He agreed that medical AI models and treatments should undergo the same level of scrutiny as pharmaceuticals and medical devices. While some have suggested that the Food and Drug Administration could take on that additional role, Blumenthal leans toward creating a new agency entirely devoted to AI regulation.
“I think there ought to be some sort of entity, some governing body, a government agency perhaps modeled on the FDA,” he said. “We need an entirely separate expertise.”
Speaking from his perspective as an attorney, Blumenthal said that AI could produce a pretty good legal brief, but he'd want to read it carefully and make any needed corrections before submitting it to a court. Medical AI needs to be far better than that, he said—another reason why regulation and thorough testing are needed before AI models are put into widespread use.
“In life-or-death situations, you don’t want it right nine out of 10 times,” he said. “You want it right 10 out of 10 times.”
But getting it "right" in medical AI is not always as straightforward as it sounds, said Mark Gerstein, PhD, YSM's Albert L. Williams Professor of Biomedical Informatics. That's because—in contrast to less risky uses of AI such as managing inventory or mining ad data—medical AI models are often "black boxes"; we don't fully understand how they reach their conclusions and make their recommendations, Gerstein said.
“If you have a medical issue, and it [the AI model] says, ‘Cut your arm off,’ you want to be able to understand how it came to its conclusion,” he said.
Opening those black boxes and determining what’s inside and how it works will be a vital job for regulators, he said. To address the problem, builders of medical AI also must incorporate scientific principles into their treatment and diagnosis models.
“One thing that comes up in medicine that makes it special compared to, for example, supply-chain mining, is that medicine is grounded in biochemistry, physiology, and natural laws,” Gerstein said. “We want our models to be understanding underlying biomedical theory.”
That also means that doctors cannot and should not be sidelined, experts say. While some specialties like radiology—Wilson says AI is already on the cusp of reading medical images better than any human—may soon be in less demand, physician training, observations, and judgment must remain at the center of medicine, they said. Instead of supplanting doctors, AI should assist them, becoming yet another tool in their toolbox.
“Your doctor has more information at their disposal than AI ever will,” Wilson said. “It can’t look at a patient sitting in a clinic room and pick up on the set of the eyes, or the dynamics of their facial expressions, or the cadence of their conversation. There’s all this data at their fingertips that doctors are exquisitely tuned to. Doctors can use that along with AI.”
An eye on privacy
Patient privacy is another major concern, said Wilson and others. The huge amount of data collected and fed into AI programs creates myriad opportunities for leakage and misuse, he noted. In addition to regulation, Wilson called on Congress to pass legislation prohibiting discrimination in insurance and other areas based on the predictions and conclusions of medical AI. An existing law banning discrimination based on a person's genome provides an excellent model, Wilson said.
“I personally think we should go beyond that and provide some protections to prevent the broad-scale harvesting of personal data without explicit consent,” he said. “Consumers need to know what information an insurance company, government, or others who might seek to profit from the data are using and what their data sources are.”