New Center to Harness Big Data in Health Care

Medicine@Yale, February-March 2019


Artificial intelligence and machine learning are already shaping research and patient care at Yale School of Medicine. Some of the doctors who rely on these technologies explain how they benefit both patients and science.

For centuries, machines and medical implements have been used by physicians as teaching tools—ways to image the body and practice procedures to treat it. But now physicians are teaching the machines to practice medicine themselves.

Artificial intelligence (AI) and machine learning make precise, personalized medicine possible. According to Yale physicians and researchers, the new technologies allow for earlier and more effective diagnoses, treatments, and preventive measures. Patients get better, cheaper, and faster care, while doctors are freed from certain mundane tasks and able to tap vast troves of data and experience to assist with treatment and diagnosis.

“I think this is one of the most exciting moments in the history of medicine,” said Harlan Krumholz, MD, SM, the Harold H. Hines, Jr. Professor of Medicine (Cardiology), professor in the Institute for Social and Policy Studies, and director of Yale New Haven Hospital’s Center for Outcomes Research and Evaluation (CORE), which is deeply involved in AI. “We’re about to have an immense increase of capability that we’ve never had before.” But, added Krumholz and others, that major increase in capability will come with immense responsibility.

“You’ve got to go slowly with this,” said Lawrence Staib, PhD ’90, professor of radiology and biomedical imaging, of biomedical engineering and of electrical engineering, who is working on AI in medical imaging. “It’s true of all medicine. It’s like you’re testing out a new vaccine. You’ve got to be sure it’s safe and effective.”

AI and Machine Learning: A Partnership Between Clinicians and Machines

The aim isn’t to replace human physicians; it’s a partnership between them and AI that provides for better outcomes and eases workflows. “We’re not trying to develop super-smart AI to replace human cognition,” said Nicholas Christakis, MD, MPH, PhD, the Sterling Professor of Social and Natural Science, Internal Medicine & Biomedical Engineering, who is studying how humans interact with AI. “We are developing dumb AI to supplement human interactions.”

Two factors are driving the boom in medical AI and machine learning. One is the emergence in the last decade of so-called big data—huge sets of patients’ medical information and case histories that can be analyzed for patterns and insights and harnessed to teach a computer a skill. The second is a series of seemingly mundane but consequential advances in machine learning. A foundational development is the ability of an AI algorithm to recognize ever more complex objects—distinguishing a dog from a cat, for example, Staib said. That sounds trivial, but it constitutes a major breakthrough.

“Dogs can be all sorts of different shapes and sizes,” he said. “Certain things are super-easy for us and difficult for computers and vice versa.”

Balancing Optimism with Caution in Medical Imaging

This new recognition capability opens the door to programs trained to read medical scans—X-rays and CT scans—which is one of the areas where medical AI shows the greatest initial promise. The goal is to create programs that learn to read patients’ scans by analyzing thousands of them and their accompanying case histories.

“The idea is to build algorithms that have the entire Yale experience,” said Sanjay Aneja, MD ’13, assistant professor of therapeutic radiology. “The idea is to predict outcomes better.”

Other potential applications range from mapping and classifying tumors, to lightning-fast ways to pinpoint damage in a stroke patient’s brain, to analyzing brain scans of individuals suffering from various neurological conditions in order to better understand and treat their disorders.

One promising prospective application, for example, is employing AI to immediately determine breast density in mammograms. If additional scans are needed, they can be done right away instead of having the patient return after a doctor has read the images.

Challenges remain before scan-reading algorithms are ready for general use. One significant issue is that researchers don’t always fully understand how an AI program reads a scan—that is, what features the algorithm focuses on to make its determination. Safeguards are needed, such as checking which part of the image the computer relies on to reach its conclusion, to ensure that the program isn’t misinterpreting the scan.

Other problems: scans produced by imaging machines from different manufacturers can confuse the algorithm, as can a patient who moves during the scan or is not positioned exactly right. The key to resolving these issues is feeding ever more data into the program so that it keeps learning.

Advancing Health Care Through Predictive Models

In fields like cancer prediction and prevention, plastic surgery, and mental health, AI is already in use or showing great potential. Jun Deng, PhD, professor of therapeutic radiology, has applied machine learning to a data set of 155,000 participants’ records to predict the likelihood of individuals developing cancer. His work has created highly accurate predictive models for 16 types of cancer in women and 15 types in men. The models’ accuracy averages 94%, varying by cancer type and reaching as high as 99% for colorectal cancer.

That information makes more precise and individualized treatment possible. “What we mean by precise medicine is that we treat the right patient with the right treatment at the right time,” Deng said. “I want to move the battlefield to the early stage, not the treatment stage.” Based on these predictive models, doctors will be able to order such additional tests as blood or urine analyses and take preventive actions before the disease manifests itself outwardly.

In some areas, like plastic and corrective surgery, basic AI is already well established. Derek Steinbacher, DMD, MD, the Yale School of Medicine’s chief of oral and maxillofacial surgery and dentistry, uses AI to better plan and carry out surgeries. Machine learning technology allows plastic surgeons and their patients to map proposed changes on a computer screen and preview how they will look.

“It gives you an opportunity to see things you otherwise would not see—almost like X-ray vision where you can see through statues,” said Steinbacher. Surgeons who until relatively recently estimated the point at which to rehang a jaw or make other adjustments can now turn to an AI program for a precise location.

Mental health is also an early adopter of AI. Adam Chekroud, PhD ’18, assistant professor adjunct of psychiatry at the medical school, runs a New York City-based mental health startup called Spring Health that has put AI to use in diagnosing and determining treatment options for mental illness.

The firm has leveraged massive amounts of data to create algorithms that recommend treatment for patients based on their answers to a series of survey questions. The firm has a wide range of clients who deploy its algorithm in the form of an app that their employees use to find out whether they are depressed—and if so, the severity of their condition and what type of care they need.

While many Yale AI researchers are working within the confines of certain specialties, Krumholz is looking at the big picture. His vision, which is in the early developmental stages, is “a pipeline” incorporating AI and machine learning into every aspect and stage of medicine. “We’re trying to create a system that gets smarter with every patient,” he said.

Originally published 2021 (issue 166); updated May 11, 2022.
