Advancing Clinical Decision Support with Reliable, Transparent Large Language Models

September 23, 2024
by Sooyoun Tan

Qianqian Xie, PhD, awarded NIH grant to develop LLM-based framework for clinical decision support.

Qianqian Xie, PhD, a postdoctoral associate at BIDS, has been awarded an NIH grant to develop a cutting-edge Large Language Model (LLM)-based framework for clinical decision support. The project, titled "Reliable Question-Answering Frameworks for Clinical Decision Support using Domain-specific Large Language Models," is funded by the National Library of Medicine/NIH (Grant Number: 1K99LM014614-01), with a total budget of $88,034 for the period of 09/01/2024 – 08/31/2025.

Project Overview

The goal of this research is to create a reliable, evidence-based question-answering (QA) system powered by LLMs, specifically designed to assist clinicians in high-pressure environments such as the emergency department (ED). This initiative aims to build a clinical chatbot that delivers precise, real-time information to support healthcare providers in making critical decisions. The chatbot will provide clinicians with trustworthy and transparent responses, helping them navigate complex medical cases with improved confidence and accuracy.

Dr. Xie's primary mentor is Dr. Hua Xu, a prominent figure in biomedical informatics at Yale, and her co-mentor is Dr. Richard Taylor, an emergency medicine expert specializing in clinical informatics. The project also benefits from advisors and collaborators from leading institutions, including Harvard Medical School, University of Florida, and Weill Cornell Medicine.

Addressing Key Challenges in Clinical AI

Two significant barriers currently prevent the widespread adoption of LLMs in clinical settings: reliability and transparency. Traditional LLMs may generate inaccurate medical information, leading to potentially dangerous misdiagnoses or therapeutic errors. Additionally, their responses often lack transparency, providing answers without adequate reasoning or source references.

This project tackles these critical issues by developing domain-specific medical LLMs, dubbed "CliniGPT." Through pre-training and fine-tuning on large-scale clinical data, including electronic health records (EHRs), and integrating high-quality external knowledge sources like clinical guidelines, CliniGPT will improve its comprehension of clinical nuances, reducing errors and increasing trustworthiness.

Novel Approaches and Outcomes

The innovation behind this project lies in its retrieval-augmented generation (RAG) approach, which ensures the integration of the latest medical knowledge into the LLM responses. The system will also provide transparency by citing sources and allowing clinicians to verify information in real-time. When deployed as a clinical chatbot in the ED, this system is designed to assist in diagnosing and treating common emergency complaints like chest pain, headaches, and abdominal pain.
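To make the RAG idea concrete, the sketch below shows the basic pattern: retrieve the most relevant knowledge snippets for a question, then compose an answer that cites its sources so a clinician can verify the evidence. All snippet text, document IDs, and function names here are hypothetical illustrations, and the toy word-overlap retriever stands in for the dense retrieval and domain-specific LLM a real system like CliniGPT would use.

```python
# Minimal, hypothetical sketch of a retrieval-augmented generation (RAG) loop
# for clinical QA. The corpus, IDs, and scoring are illustrative only; a real
# system would use dense retrieval over clinical guidelines and an LLM to
# generate the final answer.

GUIDELINES = {
    "chest-pain-001": "For acute chest pain obtain an ECG within 10 minutes of arrival.",
    "headache-001": "Sudden severe headache warrants evaluation for subarachnoid hemorrhage.",
    "abdominal-001": "Right lower quadrant pain with fever suggests possible appendicitis.",
}

def retrieve(question: str, k: int = 1):
    """Rank guideline snippets by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        GUIDELINES.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(question: str) -> str:
    """Compose an answer that cites the retrieved guideline IDs, so the
    clinician can trace each statement back to its evidence."""
    hits = retrieve(question)
    evidence = " ".join(text for _, text in hits)
    sources = ", ".join(doc_id for doc_id, _ in hits)
    return f"{evidence} [sources: {sources}]"

print(answer_with_sources("What should I do first for a patient with chest pain?"))
```

The key design point illustrated is the source-citation step: by returning document IDs alongside the generated text, the system stays transparent and verifiable rather than producing unreferenced answers.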

Dr. Xie's team aims to deliver several impactful outcomes, including:

  • An open-source clinical LLM tailored for real-time clinical decision-making.
  • A transparent QA framework for reliable evidence-based responses.
  • A user-centered chatbot for emergency care environments.
  • A framework for integrating multimodal clinical datasets to enhance system knowledge.

Broader Impact

This groundbreaking research has the potential to revolutionize clinical decision-making, particularly in fast-paced, high-stakes environments. By addressing reliability and transparency issues in clinical AI tools, Dr. Xie’s work will empower clinicians to make more informed decisions, ultimately leading to better patient outcomes.

"We are excited to develop advanced medical large language models and question-answering methods that will provide clinicians with reliable, real-time, and evidence-based information. This work has the potential to transform clinical decision-making and significantly improve patient outcomes."

Qianqian Xie, PhD