The evolution of artificial intelligence (AI) is set to transform patient care.
Today, when you get a rash, you might call a dermatologist, only to find that the next available appointment is weeks away. When you finally get in, the doctor relies on their own eyes and brain to look for patterns and arrive at a diagnosis. They write you a prescription and send you on your way.
AI is about to transform this process, providing patients with better care, faster. Soon, when you get a rash, you might simply upload a picture of it. From there, AI-assisted technology will use pattern recognition that is more likely to yield an accurate diagnosis than human senses alone, and you could immediately be prescribed the appropriate treatment that previously would have taken weeks to receive.
As AI continues to evolve at a rapid rate, doctors at Yale School of Medicine (YSM) are already implementing some of these clinical applications, while others are just over the horizon. Both in the clinic and behind the scenes, AI is about to permanently transform the way medicine operates, YSM experts say.
“We’re on the cusp of a moment that will shift everything forever,” says Harlan Krumholz, MD, SM, Harold H. Hines, Jr. Professor of Medicine (Cardiology). “It’s as if we’re just discovering microbes and the underlying cause of disease for the first time. AI is just as much a transformative milestone in medicine, and maybe more so.”
AI behind the scenes
Many of the clinical AI applications both in use today and soon to be introduced are ones that patients will never see. Radiologists were among the earliest adopters of AI. The technology helps biomedical imaging systems become smarter and faster in acquiring, reconstructing, and interpreting images. “The images interpreted by radiologists make a really nice dataset for AI to leverage,” says Lee H. Schwamm, MD, associate dean for digital strategy and transformation and chief digital health officer, Yale New Haven Health System. “This is an opportunity for the machine to recognize patterns or signatures that the human eye can’t recognize.”
Radiologists have in fact been using AI for decades, as far back as the 1990s and 2000s, says Melissa Davis, MD, MBA, associate professor of radiology and biomedical imaging. Back then, they used computer-aided detection (CAD) to help detect abnormal features such as lesions. “This was basically the early stages of AI,” she says.
Over the last five years, there has been a push within radiology to incorporate AI technologies even more widely, especially for triaging, that is, prioritizing radiology cases on the basis of urgency. A patient may come into the emergency room, for example, in need of a CT scan after hitting their head. Before a doctor even looks at the scan, AI can estimate and flag the risk of complications such as internal bleeding. “It’s not always right, but it allows us to have some sort of triage capacity for patients on our list who have acute issues, so we know to prioritize your scan,” Davis says.
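To make the triage idea concrete, the sketch below shows in Python how a radiology worklist might be reordered by model-predicted risk. The study identifiers, probabilities, and the 0.5 flagging threshold are invented for illustration; deployed triage tools are considerably more elaborate.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Study:
    sort_key: float = field(init=False)       # negated probability: highest risk pops first
    accession: str = field(compare=False)
    description: str = field(compare=False)
    p_critical: float = field(compare=False)  # model-predicted probability of a critical finding

    def __post_init__(self):
        self.sort_key = -self.p_critical

# Hypothetical model outputs for three pending scans.
worklist = [
    Study("CT-1042", "head CT", 0.08),
    Study("CT-1043", "head CT", 0.91),   # flagged: possible intracranial bleed
    Study("CT-1044", "chest CT", 0.22),
]

heapq.heapify(worklist)                  # min-heap on sort_key, so the riskiest scan is read first
while worklist:
    study = heapq.heappop(worklist)
    urgency = "STAT" if study.p_critical >= 0.5 else "routine"
    print(f"{study.accession} ({study.description}): p={study.p_critical:.2f} -> {urgency}")
```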
Pathology is another field undergoing an exciting transformation. The specialty has traditionally relied on manual (human) analysis of tissue samples. Now, AI can analyze a digital slide of abnormal tissue and search for patterns, potentially generating earlier and more accurate diagnoses. For example, one Stanford Medicine study published in Nature Communications found that AI can predict, from the appearance of a tumor biopsy, the presence of genetic mutations that are likely fueling the cancer.
Chen Liu, MD, PhD, Anthony N. Brady Professor of Pathology, is leading the effort to bring YSM into the digital pathology revolution. He is especially focused on using AI to identify digital biomarkers, that is, markers of disease mined from digitized tissue. “Human tissue contains a lot of information,” he says. “AI has the potential to mine huge amounts of data from human tissue to look for different diseases.”
Researchers at YSM are also learning how to use AI to help identify patients who might be eligible for clinical trials—a task that is currently complicated and labor intensive. Pamela Kunz, MD, professor of internal medicine (medical oncology), is part of a team that is developing a tool to help identify patients for clinical trials accurately and efficiently. “This is going to revolutionize how we find patients,” says Kunz. “The work of doing clinical trials can be very time-consuming, and if there’s any way we can speed up that work, that’s going to help us get important drugs to patients sooner.”
Kunz also hopes this work will help ensure that clinical trials are more representative of the populations her team studies by flagging patients who may have been missed by conventional methods. “The more diverse our study population is and the more it matches our communities, the more we’ll know about how the drug will perform in the real world,” Kunz says.
Hospitals across the country are deploying AI models that identify which patients are at high risk of clinical deterioration—in which they experience a sudden or serious decline in their condition. For instance, a patient admitted to the hospital for gallstone surgery could later suffer a cardiac arrest or develop a severe infection. “For most patients, there are markers of that impending deterioration that can be very subtle at first,” Schwamm says. “So by the time you recognize that there’s a problem, it’s too late for drugs and interventions to be helpful.”
Last year, Yale New Haven Health System introduced eCART, an AI deterioration index developed by researchers at the University of Chicago. While these models are still new, and it will take time to demonstrate whether they will improve clinical outcomes, such AI capabilities will become more readily available as the technology continues to advance.
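Although eCART’s actual inputs and coefficients are proprietary, the general shape of a deterioration index is easy to sketch: score each patient from recent vitals and labs, and alert the care team when the score crosses a threshold. Every weight, variable, and cutoff below is invented for illustration.

```python
import math

ALERT_THRESHOLD = 0.6  # hypothetical cutoff; eCART's real parameters are not public
WEIGHTS = {
    "resp_rate_above_baseline": 0.04,   # breaths/min
    "heart_rate_above_baseline": 0.01,  # beats/min
    "systolic_bp_drop": 0.02,           # mmHg
    "lactate": 0.15,                    # mmol/L
}

def deterioration_score(vitals: dict) -> float:
    """Weighted sum of deviations from baseline, squashed into (0, 1)."""
    raw = sum(WEIGHTS[k] * vitals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-(raw - 1.0)))  # shifted logistic

def check_patient(patient_id: str, vitals: dict) -> None:
    score = deterioration_score(vitals)
    if score >= ALERT_THRESHOLD:
        print(f"ALERT {patient_id}: score {score:.2f}, notify the rapid response team")
    else:
        print(f"{patient_id}: score {score:.2f}, continue routine monitoring")

# A post-surgical patient whose breathing and lactate are quietly worsening:
check_patient("bed-12", {
    "resp_rate_above_baseline": 10,
    "heart_rate_above_baseline": 25,
    "systolic_bp_drop": 15,
    "lactate": 3.5,
})
```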
AI in the clinic
AI is radically improving patient experiences in the clinic. Abridge, for example, is an AI-powered ambient clinical documentation tool that Yale Medicine and Yale New Haven Health rolled out in April. It works as an AI scribe that records the interaction between doctor and patient, analyzes the key components of that conversation and physical examination, and structures them into the format required for a clinical note, which, once reviewed, edited, and approved by the clinician, goes into the patient’s medical record. Abridge allows providers to be more present with their patients and to have more meaningful encounters because they don’t need to take extensive notes during the appointment.
“It’s less critical for me to type into the computer every last detail while I’m seeing the patient,” says Michael Karellas, MD, assistant professor of urology. “I don’t have to spend as much time facing the computer and am able to face the person I’m treating.” Patient data collected by Abridge are kept safe and confidential, and Karellas says all patients have the opportunity to opt out of being recorded.
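The human-in-the-loop workflow described above can be sketched in miniature. Every function below is a hypothetical placeholder rather than Abridge’s actual API; the point is the gating step, in which nothing reaches the chart until the clinician signs off.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    subjective: str
    objective: str
    assessment: str
    plan: str
    approved: bool = False

def transcribe(audio: bytes) -> str:
    """Stand-in for speech-to-text over the recorded visit."""
    return "Patient reports an itchy rash on the left forearm for two weeks..."

def summarize_to_note(transcript: str) -> DraftNote:
    """Stand-in for the model that structures the conversation into a draft note."""
    return DraftNote(
        subjective="Itchy rash on left forearm, two weeks' duration.",
        objective="Erythematous, scaly plaques on the left forearm.",
        assessment="Likely contact dermatitis.",
        plan="Topical corticosteroid; follow up in two weeks.",
    )

def clinician_sign_off(note: DraftNote) -> DraftNote:
    """The clinician reviews and edits the draft; only then is it approved."""
    note.approved = True
    return note

note = clinician_sign_off(summarize_to_note(transcribe(b"...")))
assert note.approved  # only approved notes are written to the medical record
```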
Yale New Haven Hospital is also one of a handful of health systems investing in a pilot initiative to wire 450 beds with ambient AI video technology. This platform will “allow our nursing leaders to reimagine the nursing care model,” says Schwamm. The “smart beds” will also bring the right care to the right patient at the right time by removing geographic constraints, allowing nurses, physicians, pharmacists, interpreters, or anyone else whom the patient needs to be right at the bedside through telehealth.
The beds also will help protect patients. For example, high-risk patients—such as individuals prone to falling—often require a patient observer to sit in their room to ensure safety. Using advanced AI algorithms, the beds can detect various risks and promptly alert staff. This feature allows one observer to keep an eye on multiple patients at a time.
Over time, Schwamm says, the ambient video technology has the potential to recognize visual patterns that are precursors of concerning behavior. Delirium, for example, is a significant problem for hospitalized patients, one that can increase their length of stay and risk of death. “If you could train an algorithm to recognize the signs of delirium before they happen, we have the opportunity to intervene and prevent harm,” Schwamm says.
These ambient vision algorithms could also be trained to recognize various activities, like emptying the urine catheter container or turning the patient in bed, and document them in the medical record for a nurse or provider to edit and sign off on. “Think about it as sort of a virtual assistant who is always there and always processing,” Schwamm says. “It’s like a nanny cam on steroids.”
Out in the real world
Over the last decade, wearable biosensors such as smartwatches have risen in popularity. These devices have become increasingly sophisticated, with some able to track motion, heart rate, sleep, and more. Researchers are interested in accessing the massive amounts of data these devices collect and using AI to recognize patterns associated with various conditions, including strokes and mood disorders.
Among them is Mark Gerstein, PhD, Albert L. Williams Professor of Biomedical Informatics and professor of molecular biophysics and biochemistry, of computer science, and of statistics and data science. In one recent study, his team used smartwatch data from over 2,000 adolescents to train AI models to predict whether an individual has a psychiatric condition such as attention deficit hyperactivity disorder (ADHD) or anxiety. The measurements taken by the watches included heart rate, calorie expenditure, activity intensity, number of steps taken, and sleep quality.
The team found that heart rate was the most useful measure for predicting ADHD. Youth with the disorder often experience episodes of heightened arousal; in other words, they may have more intense emotional responses, such as excitement or anger, than their neurotypical peers. These episodes could be reflected in the individuals’ heart rates. Meanwhile, sleep quality was the most significant predictor of anxiety; individuals with anxiety disorders tend to suffer from disrupted sleep. The study points to how wearable sensors could help reshape psychiatry by providing new diagnostic tools.
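The modeling setup behind such a study can be sketched with standard tools. The scikit-learn example below trains a classifier on synthetic stand-ins for the five wearable features the study describes; the data, the random-forest choice, and the injected heart-rate signal are illustrative assumptions, not a reconstruction of the team’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["heart_rate", "calories", "activity_intensity", "steps", "sleep_quality"]

# Synthetic stand-in for per-participant wearable summaries (n = 2,000).
X = rng.normal(size=(2000, len(features)))
# Make the label depend weakly on heart rate, echoing the study's finding.
y = (0.8 * X[:, 0] + rng.normal(size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
for name, importance in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")  # heart_rate should rank highest by construction
```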
Brain disorders such as ADHD and anxiety are heritable; in other words, a person’s genetic makeup influences their likelihood of developing the condition. So Gerstein’s team also studied whether smartwatch data could help identify genetic factors linked to psychiatric illness. Using smartwatch and genetic data from a subset of individuals with ADHD and healthy controls, they identified 26 genes associated with ADHD in the cohort. For example, they found an association between heart rate patterns in the ADHD group and a variant of the MYH6 gene, which encodes a key protein in cardiac muscle.
This finding highlights how wearable sensors could help clinicians better understand the underlying mechanisms of neuropsychiatric conditions. “Brain diseases like Alzheimer’s disease, Parkinson’s disease, schizophrenia, and so on are major issues,” Gerstein says. “This research is a promising direction to help us manage brain and behavioral disorders.”
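A toy version of such an association test compares a wearable-derived trait between carriers and non-carriers of a variant. In the sketch below, the heart-rate distributions and the carrier effect are synthetic, and a simple t-test stands in for the covariate-adjusted regression models that real genetic analyses use; the Bonferroni step shows why genome-wide testing demands stringent significance thresholds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic resting heart rates (bpm) for non-carriers vs. carriers of a
# hypothetical variant; the 3 bpm shift is invented for illustration.
non_carriers = rng.normal(loc=72, scale=8, size=400)
carriers = rng.normal(loc=75, scale=8, size=100)

t_stat, p_value = stats.ttest_ind(non_carriers, carriers)

n_variants_tested = 10_000                   # pretend genome-wide scan
bonferroni_alpha = 0.05 / n_variants_tested  # correct for testing many variants

print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
print("significant after Bonferroni correction:", p_value < bonferroni_alpha)
```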
As AI becomes increasingly prevalent in the clinic and beyond, concerns are also rising about patient privacy and compliance. “Keeping patient data safe and private should be everyone’s concern, and health system leaders and providers need to take extra steps to ensure that data is either fully de-identified or that it never leaves the health system data ecosystem,” Schwamm says. These tools should always have human oversight, and providers should be careful about the ways that data are accessed, stored, and moved. By taking proper precautions, such as using de-identified datasets, clinicians can engage with AI in ways that keep patients safe. At YSM, Schwamm says, “We have very rigorous processes to review and establish the security, the privacy, and the appropriateness of use of AI.”
The future of AI
Further advances in AI could have exciting applications in the future, YSM experts say. The ability to talk to a machine that can understand a question and quickly respond by summarizing information gathered from thousands or millions of documents is unprecedented, says Lucila Ohno-Machado, MD, PhD, MBA, Waldemar von Zedtwitz Professor of Medicine and Biomedical Informatics and Data Science (BIDS), and chair of BIDS. Asked what excites her most about AI research, she points to the long-term potential of robotic caretakers that can assist vulnerable patients when human help is not available. “The ability of an elderly patient to stay at home and tell a robot what they need or to have a robot to sense that they need help—this type of AI is what we are all working toward,” she says.
The future of wearables is also ripe for innovation, says Krumholz. Scientists could potentially combine wearable biosensors with ambient listening technology to create AI assistants that work alongside a doctor in real time as they see patients. “You may have a plug-in in your ear that can give you guidance and clinical decision support,” he says. “Or maybe something similar to Meta’s smart glasses that helps you more easily spot an indicator of a disease that you wouldn’t normally be able to with a patient.” All of these technologies could further augment clinicians’ ability to diagnose and treat various conditions.
The evolution of these tools could also help health care workers provide care that would ordinarily require higher levels of training. This development could help bring medicine to remote areas that currently lack access to adequate specialized clinical care. Many village doctors in rural areas of China, for example, have only a high school level of education. With AI assistance, “these individuals will likely be able to work at the levels of nurse practitioners and PAs,” Krumholz says. “These assists are going to be like an expert on your shoulder that elevates your performance beyond what might have been expected if you were just working by yourself.”
He is looking forward to the transformation AI will bring to health care. “It’s my hope that AI will usher in a time where we will be able to meet people where they live, to be able to provide care that’s precise for their needs, and to be able to ensure that all we know in medicine is available to them,” he says. “And I think that’s what this new era is going to be about.”