NIH Awards $1.5 Million Grant to Improve Factual Correctness in Large Language Models in Health Care

October 02, 2024
by Sooyoun Tan

The National Institutes of Health (NIH) has awarded $1.5 million to a project that addresses the critical issues of factual inaccuracy and unfaithful reasoning in Large Language Models (LLMs) for biomedicine and health care. The project, led by principal investigator Qingyu Chen, PhD, is funded by the National Library of Medicine under grant number 1R01LM014604-01, with a project period extending from August 2024 to July 2028. As AI continues to revolutionize health care, ensuring the reliability of its outputs is essential.

"Improving factual accuracy and reasoning in domain-specific LLMs is crucial, as it will benefit a range of downstream applications in the biomedical and health domains," said Dr. Chen.

LLMs hold transformative potential across numerous fields, including health care. However, their susceptibility to generating inaccurate or misleading information poses significant risks, especially in medical contexts, where misinformation could result in misdiagnosis or improper treatment. Dr. Chen’s project aims to address these concerns by developing a robust framework that improves the factual accuracy and reasoning of LLM-generated responses in biomedical applications.

Project Overview

The research initiative systematically tackles the challenges LLMs face in generating reliable outputs for biomedicine. Key strategies include:

  • Establishing a Biomedical Digital Resource Framework: This framework will provide LLMs with up-to-date biomedical knowledge, enhancing their ability to deliver trustworthy information.
  • Developing New Natural Language Processing Techniques: These techniques will facilitate automatic fact-checking to improve the correctness of LLM responses.
  • Implementing a Feedback-Guided Paradigm: This system will enable LLMs to identify and correct errors, refining their outputs over time (a schematic sketch combining fact-checking with this feedback loop follows the list).
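
To make these strategies concrete, the sketch below shows in schematic Python how the three components might fit together: a knowledge store supplies up-to-date facts, an automatic fact-checker labels the claims in a draft answer, and a feedback loop asks the model to revise until its claims are supported. This is an illustrative sketch only, not the project's actual code; the interfaces (KnowledgeStore, draft, extract_claims, revise) are hypothetical placeholders.

```python
# Hypothetical sketch of a feedback-guided fact-checking loop.
# KnowledgeStore, draft, extract_claims, and revise are placeholder
# names standing in for the grant's three components; none of this
# reflects the project's published implementation.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Checked:
    claim: str
    supported: bool
    evidence: Optional[str] = None


class KnowledgeStore:
    """Stand-in for a curated biomedical digital resource."""

    def __init__(self, facts: list[str]) -> None:
        self.facts = facts

    def lookup(self, claim: str) -> Optional[str]:
        # Toy substring match; a real system would use retrieval over
        # up-to-date biomedical literature and databases.
        return next((f for f in self.facts if claim.lower() in f.lower()), None)


def fact_check(claims: list[str], store: KnowledgeStore) -> list[Checked]:
    """Automatic fact-checking: label each claim supported or unsupported."""
    results = []
    for claim in claims:
        evidence = store.lookup(claim)
        results.append(Checked(claim, evidence is not None, evidence))
    return results


def feedback_guided_answer(
    draft: Callable[[str], str],
    extract_claims: Callable[[str], list[str]],
    revise: Callable[[str, str, list[Checked]], str],
    store: KnowledgeStore,
    question: str,
    max_rounds: int = 3,
) -> str:
    """Regenerate the answer until every extracted claim is supported."""
    answer = draft(question)
    for _ in range(max_rounds):
        unsupported = [
            c for c in fact_check(extract_claims(answer), store) if not c.supported
        ]
        if not unsupported:
            break  # all claims are grounded in the knowledge store
        answer = revise(question, answer, unsupported)  # feedback step
    return answer
```

Capping the loop at max_rounds is one plausible design choice: it keeps the system from revising indefinitely when the knowledge store simply lacks coverage for a claim, at which point a real deployment would flag the answer for human review rather than keep iterating.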

The project involves an interdisciplinary team from Yale, including Hua Xu, PhD; Andrew Taylor, MD, MHS; Cynthia Brandt, MD, MPH; Arman Cohan, PhD; and Ron Adelman, MD, MPH, MBA, FACS, all of whom bring expertise in AI and biomedical informatics.

Broader Impact

By enhancing the reliability and accuracy of LLMs in health care, this project aims to equip health care professionals with more trustworthy diagnostic and treatment recommendations, thereby minimizing misinformation risks. Beyond its impact on clinical care, the research will refine the development and evaluation of LLMs in biomedicine, providing a roadmap for future AI innovations that prioritize accuracy and trustworthiness.