Researchers at the Yale Cardiovascular Data Science (CarDS) Lab have developed an artificial intelligence (AI)-based model that can use electrocardiogram (ECG) images, regardless of format or layout, to diagnose multiple heart rhythm and conduction disorders.
The team, led by Rohan Khera, MD, MS, assistant professor in cardiovascular medicine, developed a novel multilabel automated diagnosis model from ECG images. ECG Dx© is the latest tool from the CarDS Lab designed to make AI-based ECG interpretation accessible in remote settings. The researchers hope the new technology provides an improved method for diagnosing key cardiac disorders. The findings were published in Nature Communications on March 24.
The first author of the study is Veer Sangha, a computer science major at Yale College. “Our study suggests that image and signal models performed comparably for clinical labels on multiple datasets,” said Sangha. “Our approach could expand the applications of artificial intelligence to clinical care targeting increasingly complex challenges.”
As mobile technology improves, patients increasingly have access to ECG images, raising new questions about how to incorporate these devices into patient care. Under Khera’s mentorship, Sangha’s research at the CarDS Lab analyzes multi-modal inputs from electronic health records to design potential solutions.
The model is based on data from more than 2 million ECGs from 1,506,112 patients who received care in Brazil between 2010 and 2017. One in six patients was diagnosed with a rhythm disorder. The tool was independently validated on multiple international data sources and showed high accuracy for clinical diagnosis from ECGs.
Machine learning (ML) approaches, specifically those using deep learning, have transformed automated diagnostic decision-making. For ECGs, they have led to tools that allow clinicians to find hidden or complex patterns. However, existing deep learning tools use signal-based models, which, according to Khera, have not been optimized for remote health care settings. Image-based models may improve automated diagnosis from ECGs.
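In broad terms, an image-based multilabel approach treats each plotted ECG as a picture and predicts several diagnoses at once rather than a single class. The sketch below is only a generic illustration of that idea in PyTorch, not the published ECG Dx architecture; the backbone, the number of labels, and the image size are placeholder assumptions.

```python
# Illustrative sketch of a multilabel image classifier for plotted ECGs.
# This is NOT the CarDS Lab's ECG Dx model; the backbone and label count
# are assumptions chosen to keep the example small and self-contained.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 6  # hypothetical number of rhythm/conduction labels


class ECGImageClassifier(nn.Module):
    def __init__(self, num_labels: int = NUM_LABELS):
        super().__init__()
        # Standard convolutional backbone (weights=None keeps this self-contained).
        self.backbone = models.resnet18(weights=None)
        in_features = self.backbone.fc.in_features
        # One output per diagnosis; labels are not mutually exclusive.
        self.backbone.fc = nn.Linear(in_features, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logits, one per label


model = ECGImageClassifier()
# Multilabel training uses a per-label sigmoid with binary cross-entropy,
# so several disorders can be flagged on the same ECG image.
criterion = nn.BCEWithLogitsLoss()

images = torch.randn(2, 3, 224, 224)                     # batch of plotted ECG images
targets = torch.randint(0, 2, (2, NUM_LABELS)).float()   # co-occurring labels
loss = criterion(model(images), targets)
```

Because each label gets its own sigmoid output, the same image can be flagged for more than one abnormality, which matches the multilabel framing described above.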
Using AI-based applications in clinical care poses a number of clinical and technical challenges.
“Current AI tools rely on raw electrocardiographic signals instead of stored images, which are far more common as ECGs are often printed and scanned as images. Also, many AI-based diagnostic tools are designed for individual clinical disorders, and therefore, may have limited utility in a clinical setting where multiple ECG abnormalities co-occur,” said Khera. “A key advance is that the technology is designed to be smart: it is not dependent on specific ECG layouts and can adapt to existing variations and new layouts. In that respect, it can perform like expert human readers, identifying multiple clinical diagnoses across different formats of printed ECGs that vary across hospitals and countries.”
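The layout independence Khera describes is commonly pursued by exposing a model to the same recording rendered in many different printed formats. The sketch below shows one hypothetical way to plot a 12-lead signal in a randomly chosen layout; the set of layouts, figure dimensions, and file naming are illustrative assumptions, not the study’s actual pipeline.

```python
# Illustrative sketch: render the same 12-lead signal in different printed
# layouts (3x4, 6x2, 12x1) so an image model sees varied formats during
# training. Details are assumptions, not the published ECG Dx pipeline.
import random
import numpy as np
import matplotlib.pyplot as plt

LEADS = ["I", "II", "III", "aVR", "aVL", "aVF",
         "V1", "V2", "V3", "V4", "V5", "V6"]
LAYOUTS = [(3, 4), (6, 2), (12, 1)]  # rows x columns of leads on the page


def render_ecg_image(signal: np.ndarray, path: str) -> None:
    """Plot a (12, n_samples) signal in a randomly chosen printed layout."""
    rows, cols = random.choice(LAYOUTS)
    fig, axes = plt.subplots(rows, cols, figsize=(cols * 3, rows * 1.2))
    for ax, lead_name, lead_signal in zip(np.ravel(axes), LEADS, signal):
        ax.plot(lead_signal, linewidth=0.6, color="black")
        ax.set_title(lead_name, fontsize=6, loc="left")
        ax.axis("off")
    fig.savefig(path, dpi=150, bbox_inches="tight")
    plt.close(fig)


# Example: a synthetic 12-lead, 10-second trace sampled at 500 Hz.
render_ecg_image(np.random.randn(12, 5000), "ecg_example.png")
```

Training on such varied renderings is one plausible way a model could learn to read ECG images that differ in format across hospitals and countries.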
This study was supported by research funding from the National Heart, Lung, and Blood Institute of the National Institutes of Health (K23HL153775).