Members of the Image Processing and Analysis Group in the Department of Radiology & Biomedical Imaging have received a Best Paper award for the second year in a row during the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020).
Graduate student and first author Junlin Yang said he had been working on the problem of domain generalization with a group of Yale collaborators for more than a year. “We had this idea initially when we were heading to the MICCAI 2019 conference in Shenzhen, China,” Yang said.
The paper, “Cross-Modality Segmentation by Self-Supervised Semantic Alignment in Disentangled Content Space,” describes a new data-driven, deep learning approach that automatically and accurately segments an anatomical object, such as the liver, in a given type of medical image, such as magnetic resonance imaging (MRI) or computed tomography (CT).
The paper’s contributors are graduate students Xiaoxiao Li and Daniel Pak, Assistant Professor Nicha C. Dvornek, PhD, Julius Chapiro, MD, MingDe Lin, PhD, and Professor James S. Duncan, PhD. The group received the award during the International Workshop on Domain Adaptation and Representation Transfer (DART), a satellite event held in conjunction with MICCAI 2020, which was held virtually in Lima, Peru in early October.
Professor Duncan, who is Yang’s mentor, said the system described in the paper is especially useful when integrating large datasets of subjects from multiple hospitals and timeframes for quantitative analysis of cancer and other diseases.
“When trying to automatically localize or segment the liver (and ultimately lesions) in patients when only one data type or another is available (i.e. either CT or MR but not both), this system can accurately address the task in a robust manner that performs as if both types of images were available,” Duncan said.
Last year’s award was for a paper that described a unique approach for classifying individuals with autism spectrum disorder (ASD), while also providing robust representations of brain activity that can help interpret which regions of the brain most relate to autism. Titled “Jointly Discriminative and Generative Recurrent Neural Networks for Learning from fMRI,” it was named Best Paper at the 10th International Workshop on Machine Learning in Medical Imaging (MLMI) during the 2019 MICCAI conference. Assistant Professor Dvornek was the paper’s first author.