How Can Artificial Intelligence Advance Medical Education and Research to Transform Patient Care?

June 11, 2024

Over the past few years, there has been tremendous advancement in artificial intelligence (AI). Faculty from across the Yale Department of Internal Medicine are using AI as a tool to help improve the way they learn, teach, conduct research, and advance the field of medicine.

“Yale has always been a leader in research and in educating the next generation of physicians and scientists,” said Gary Désir, MD, Paul B. Beeson Professor of Medicine, chair of the Department of Internal Medicine, and vice provost for Faculty Development and Diversity. “By using the power of AI, we are unlocking new opportunities to improve the way we conduct research and teach students—all with the ultimate goal of making health care better for our patients.”

These are just a few of the ways in which Yale faculty are leveraging the power of AI to transform their work.

GutGPT: Learning from a New GI Team Member

A new study in the Yale Center for Healthcare Simulation is exploring how emergency medicine residents, internal medicine residents, and medical students interact with a new generative AI tool, GutGPT. GutGPT uses a technique called retrieval-augmented generation, which grounds the generative AI tool in trusted information, such as clinical guidelines, pathways, and journal articles, that can be cited.
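To picture what retrieval-augmented generation means in practice, the minimal sketch below retrieves the guideline passages most relevant to a clinician’s question and then asks a language model to answer using only those passages and cite them. It is an illustration of the general pattern, not GutGPT’s code: the tiny passage corpus, the embedding model name, and the call_llm helper are all assumptions.

```python
# Minimal retrieval-augmented generation (RAG) sketch -- illustrative only,
# not GutGPT's implementation. The corpus, embedding model, and call_llm()
# helper are placeholders/assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny stand-in for a curated corpus of trusted guideline passages.
GUIDELINE_PASSAGES = [
    "Glasgow-Blatchford score of 0-1: consider outpatient management of upper GI bleeding.",
    "For suspected variceal bleeding, begin vasoactive therapy and antibiotics before endoscopy.",
    "Use a restrictive transfusion threshold (hemoglobin 7 g/dL) for most patients with upper GI bleeding.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works
passage_vecs = encoder.encode(GUIDELINE_PASSAGES, normalize_embeddings=True)

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a language model; not a real API."""
    raise NotImplementedError("Swap in your provider's LLM call here.")

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k guideline passages most similar to the question."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = passage_vecs @ q_vec  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [GUIDELINE_PASSAGES[i] for i in top]

def answer(question: str) -> str:
    """Prompt the model to answer using only the retrieved, citable passages."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieve(question)))
    prompt = (
        "Answer the clinical question using ONLY the numbered passages below, "
        "and cite them by number.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)
```

Grounding the model in retrievable, citable sources is what allows a tool like this to point back to the guideline text behind each recommendation.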

During the simulation, participants are asked to treat hypothetical patients with a gastrointestinal (GI) bleeding disorder using synthetic patient data. Each participant is randomly assigned either a dashboard or a chatbot interface that allows them to ask GutGPT questions and get the information they need quickly. The study aims to measure the tool’s usability and the trust clinicians place in it compared with other interfaces.

“Asking ChatGPT a question at home is not the same as in a medical setting,” said Dennis Shung, MD, MHS, PhD, assistant professor of medicine (digestive diseases) and director of Applied Artificial Intelligence at the Yale Center for Healthcare Simulation, who is leading the research. “At home, you have low stakes and unlimited time. So, we need to test how clinicians will interact with this type of generative AI in a more realistic setting to understand how AI might be most useful in a clinical setting.”

The research study is bidirectional: students and residents learn both how to care for patients with upper GI disorders and the capabilities and limitations of AI in a clinical setting, while Shung and his team learn how different team members interact with different types of AI tools.

While the study is still in its early stages, Shung says that the simulation center is helping them quickly learn that people use AI differently depending on their roles and experience levels. This information is critical to helping them refine GutGPT so that it can be more useful to each person.

“We think the biggest value is for residents, who know so much but may need reminders to have confidence in the decisions. We also see value for interdisciplinary teams, since advanced practice providers such as NPs or PAs with a lot of clinical experience are an essential part of health care systems. Having access to a tested, reliable, generative AI team member could help inform their practice,” said Shung.

While some have fears that AI may replace the role of the physician, Shung thinks a well-trained AI tool like GutGPT will only help improve care.

“Modern health care has already become a team sport because we recognize that not one person can have all the expertise. AI is just another member of the team with perfect memory who has spent a lot of time looking up a bunch of papers and is now here saying, ‘Hey! I’ve read the guidelines; on page 137, it says to do this,’” said Shung. “You are still leading the team. You have to take the expertise from across the team and decide how to use it to make a clinical judgment.”

Shung and his team published early results of the study in Machine Learning for Health in 2023 and expect additional publications to follow soon.

Improving Health Equity in the Medical School Curriculum with Help from AI

Kelsey Martin, MD, assistant professor of medicine (hematology) and associate director for medical education at Women’s Health Research at Yale; Margaret Pisani, MD, MPH, professor of medicine (pulmonary, critical care, and sleep medicine); and Carolyn M. Mazure, PhD, Norma Weinberg Spungen and Joan Lebson Bildner Professor (women’s health research), are co-leading a new effort to integrate information about sex, gender, and women’s health into the medical school curriculum so that medical students better understand how sex and gender affect health and disease.

“Historically, women were largely excluded as research participants beyond studies related to reproduction, but that is improving and changing. It’s critical that we bring updated data on the impact of sex and gender on health into medical education and bridge that gap for the next generation of doctors,” said Martin.

Martin and Pisani have started working with faculty members to analyze their existing curriculum and highlight areas for updates. The challenge is that the Yale medical school curriculum is vast, and combing through the syllabi, PowerPoint lectures, clinical case workshops, and other content is time-consuming.

“Time is the biggest constraint to this type of work,” said Martin. “It takes a lot of time to analyze all of this content.”

At the suggestion of one of their team members, Haleigh Larson, a third-year medical student at Yale, they began using Humata.ai to help comb through the medical curriculum. The research team, which also includes Aeka Guru, an undergraduate fellow, developed a question list for input into Humata.ai so it could assess sex and gender content in the curriculum and generate a report for each course. The questions included: ‘Is there any discussion of sex and/or gender in the document?’ ‘What areas could benefit from discussion of sex and gender topics?’ and ‘What journals have papers with information on these topics?’ among others. The tool then performed an initial content analysis and developed suggestions for incorporating new information.
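The workflow itself is simple to picture: run each screening question against each course document and collect the answers into a per-course report for human review. The sketch below illustrates only that pattern; it does not use Humata.ai’s actual API, and the ask_llm helper and file layout are assumptions.

```python
# Illustrative sketch of running a screening-question list over course documents.
# This is NOT Humata.ai's API; ask_llm() and the directory layout are assumptions.
from pathlib import Path

QUESTIONS = [
    "Is there any discussion of sex and/or gender in the document?",
    "What areas could benefit from discussion of sex and gender topics?",
    "What journals have papers with information on these topics?",
]

def ask_llm(document_text: str, question: str) -> str:
    """Placeholder for a document question-answering call to an LLM service."""
    raise NotImplementedError("Swap in your provider's document Q&A call here.")

def build_course_report(course_dir: str) -> dict[str, dict[str, str]]:
    """Ask every screening question of every document in a course folder."""
    report: dict[str, dict[str, str]] = {}
    for doc in sorted(Path(course_dir).glob("*.txt")):  # e.g., exported syllabi and lecture notes
        text = doc.read_text(encoding="utf-8")
        report[doc.name] = {q: ask_llm(text, q) for q in QUESTIONS}
    return report
```

The AI-generated report is only a starting point; as described next, the project team reviews and refines each analysis before anything reaches a course director.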

Once the AI tool completes its initial analysis, Martin, Pisani, and their project team review and refine it, flagging, for example, case studies that focus only on male patients or omit patient sex and/or gender, and noting topics that could enrich the educational content, before sharing the findings with the relevant faculty member.

“Using AI has made this process faster and more efficient. The feedback we’ve received so far has been very positive,” said Martin. “Many of our faculty have been teaching for a long time, and this allows them to see information in their course in a new light.” The team also plans to gather feedback from students on the existing curriculum and the proposed updates. The curriculum is a key component of the medical school’s Health Equity Thread.

After the team finishes combing through the existing curriculum, they plan to develop a tool or a process so that faculty can iteratively update their course content to reflect the latest scientific information.

“Science and medicine are changing so rapidly. As educators, we want to teach our students the most up-to-date knowledge, like how different diseases might manifest differently depending on a person’s sex or which treatment option might work best for a patient,” said Pisani. “We know that updating case studies and reframing lectures takes time. By making this process more efficient, we hope to break down some of the barriers to curriculum change.”

Martin agrees and says she’s confident that using AI to assess and update curriculum will help get insights to faculty faster.

“We’re lifelong learners in medicine. AI offers another tool we can learn from,” Martin said. “We’re just beginning to scratch the surface of how AI can be used in other aspects of medical education. There are so many more opportunities for creativity and collaboration.”

Screening the Heart with AI

Most cardiologists use technology such as electrocardiograms (ECGs), ultrasounds, or cardiac magnetic resonance imaging (MRI) to detect and diagnose heart conditions. While these imaging tools are critical to diagnosing and treating cardiovascular disease, there is a limit to what the human eye can detect.

“We have a moving, beating heart, and while we can measure and visualize what’s happening, we can only summarize what we see to a level where we can comprehend,” said Rohan Khera, MD, MS, assistant professor of medicine (cardiovascular medicine) and director of the Cardiovascular Data Science (CarDS) Lab. “AI can look at the same video and learn what’s happening from frame to frame and beat to beat to help us understand the small and long-term changes that happen over the full cardiac cycle.”

Khera and his colleagues in the lab are working to harness the power of AI to develop state-of-the-art models that can detect cardiovascular diseases from common imaging modalities. They recently published a paper identifying an AI-based biomarker to diagnose aortic stenosis, a common valve disorder affecting more than 2.5 million people in the United States alone. They have also developed an AI-based approach that would allow automated screening for left ventricular systolic dysfunction (LVSD) from ECG images; currently, screening for LVSD is often unavailable or reserved for people with symptomatic disease. The work extends beyond diagnostics to defining precise, personalized care for patients with cardiovascular disease through AI deployed in clinical trials and on wearable devices.
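At a high level, screening from ECG images is an image-classification task: a deep network takes a standard 12-lead ECG printout or photo and outputs a probability of disease. The sketch below shows that general pattern with an off-the-shelf convolutional network; the architecture, input size, and decision threshold are illustrative assumptions, not the CarDS Lab’s published model.

```python
# Illustrative ECG-image screening sketch (not the CarDS Lab's published model).
# The architecture, preprocessing, and threshold are assumptions for demonstration.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Adapt a generic image classifier to emit a single probability (e.g., LVSD vs. not).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()  # in practice, weights would come from training on labeled ECG images

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def screen_ecg_image(path: str, threshold: float = 0.5) -> tuple[float, bool]:
    """Return (probability, flag) for a scanned or photographed 12-lead ECG."""
    img = Image.open(path).convert("RGB")
    x = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)
    with torch.no_grad():
        prob = torch.sigmoid(model(x)).item()  # single-logit output -> probability
    return prob, prob >= threshold
```

Because the input is an image of the tracing rather than the raw waveform, a tool like this could in principle run on ECGs that exist only as printouts or photos, which is part of what makes image-based screening attractive in lower-resource settings.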

“There is so much possibility in automating the interpretation of imaging tests to detect and diagnose some of the most common heart conditions,” said Khera. “We see the opportunity for AI to help us diagnose these diseases earlier so that patients can start treatment sooner and reduce their risk of severe complications from their disease.”

While these tools are not yet integrated into clinical practice, Khera is optimistic that they will one day expand access to the highest-level cardiology care for patients, especially those in low-resource settings.

“Not every clinician has access to hundreds of expert cardiologists that they can consult to help read a complicated ECG,” said Khera. “And yet, they must still treat the patient in front of them. AI can help give clinicians a better view of the heart and help expand our capacity to deliver expert-level care in settings where it was not physically possible before.”

Using AI to Annotate a Vast Hematology Tissue Bank

The Yale Hematology Tissue Bank gives researchers access to critical patient samples for the study of hematologic diseases. The repository continues to grow rapidly, with more than 3,300 patients enrolled and close to 9,000 individual samples, including blood, bone marrow biopsies, lymph nodes, saliva, pericardial fluid, and various other tissue biopsies.

Each sample has been painstakingly annotated with rich clinical data. However, manual abstraction of the clinical record has been inefficient, limiting the value of the samples to researchers. Applying traditional natural language processing methods for automated information extraction proved only partially useful, as the technology could not fully contextualize clinical language or generate human-like text.

“Clinical research, clinical trials, and predictive algorithms all depend on digitized and structured data so that researchers can perform the analysis required to better understand and treat blood disorders,” said Stephanie Halene, MD, Dr Med, Arthur H. and Isabel Bunker Professor of Hematology, and chief of Yale Hematology. “We have all of this incredibly rich clinical data in the Hematology Tissue Bank, but we had not found a way to extract the information in a way that was efficient and accurate.”

Halene and her team reached out to Wade Schulz, MD, PhD, assistant professor (laboratory medicine) and director of the CORE Center for Computational Health, who worked with them to build a hematology-specific large language model, a form of generative AI, behind a secure firewall to safeguard all data. Their goal is to ultimately annotate the entire tissue bank and build a hematology research database that researchers can search using natural language questions.
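The underlying pattern is to prompt the model to pull structured fields out of free-text clinical notes so they can be stored in a searchable research database. The sketch below shows that pattern in generic form; the field list, the query_local_llm helper, and the JSON output format are assumptions, not the team’s actual system, and any real deployment would run entirely behind a secure firewall, as described above.

```python
# Generic sketch of LLM-based structured extraction from a clinical note.
# Not the Yale team's system: the field list and query_local_llm() are assumptions.
# A real deployment would run against a locally hosted model behind a secure firewall.
import json

FIELDS = ["diagnosis", "date_of_diagnosis", "karyotype", "treatments", "sample_types"]

def query_local_llm(prompt: str) -> str:
    """Placeholder for an inference call to a locally hosted, firewalled language model."""
    raise NotImplementedError("Swap in the local model's inference call here.")

def extract_fields(note_text: str) -> dict:
    """Ask the model to return the requested fields as JSON, then parse the result."""
    prompt = (
        "Extract the following fields from the clinical note and return valid JSON "
        f"with exactly these keys: {', '.join(FIELDS)}. "
        "Use null for any field that is not documented.\n\n"
        f"Note:\n{note_text}"
    )
    return json.loads(query_local_llm(prompt))
```

Structured records produced this way are what would ultimately let researchers query the tissue bank with natural language questions, as the team intends.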

While they’re still in the early stages of the project, Halene is encouraged by the potential for this tool to more accurately and efficiently extract the data that researchers need for their work.

“We see this AI tool as an opportunity to enhance the value of our repository for researchers so that they have quicker access to the detailed information they need to determine if a particular sample is appropriate to their research,” Halene said. “Ultimately, we hope it will help us meet our mission to spur biomedical research to improve the care for patients with blood disorders.”

The Department of Internal Medicine at Yale is among the nation's premier departments, bringing together an elite cadre of clinicians, investigators, educators, and staff in one of the world's top medical schools. To learn more, visit Internal Medicine.