CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning
Du Y, Chang B, Dvornek N. CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning. Lecture Notes in Computer Science 2024, 15012: 465-475. DOI: 10.1007/978-3-031-72390-2_44.
Peer-Reviewed Original Research
Keywords: Contrastive Language-Image Pre-training, language model, state-of-the-art performance, self-supervised representation learning, contrastive learning method, fine-tuning, prolonged training time, BERT encoder, contrastive learning, representation learning, class labels, GPU resources, training samples, training time, mammography dataset, model size, pre-training, learning methods, efficient framework, visual model, richness of information, dataset, clinical diagnostic data, learning, medical applications

Mine yOur owN Anatomy: Revisiting Medical Image Segmentation With Extremely Limited Labels
You C, Dai W, Liu F, Min Y, Dvornek N, Li X, Clifton D, Staib L, Duncan J. Mine yOur owN Anatomy: Revisiting Medical Image Segmentation With Extremely Limited Labels. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024, 46: 11136-11151. PMID: 39269798, DOI: 10.1109/tpami.2024.3461321.
Peer-Reviewed Original Research
Keywords: medical image segmentation, image segmentation, medical image segmentation framework, context of medical image segmentation, long-tailed class distribution, strong data augmentations, intra-class variations, semi-supervised setting, data imbalance issue, image segmentation framework, medical image analysis, medical image data, supervision signals, contrastive learning, benchmark datasets, unsupervised manner, label sets, data augmentation, segmentation framework, domain expertise, pseudo-code, imbalance issue, model training, medical images, segmentation model