Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations
You C, Min Y, Dai W, Sekhon J, Staib L, Duncan J. Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024: 26140-26150. PMID: 39640960, PMCID: PMC11620289, DOI: 10.1109/cvpr52733.2024.02470. Peer-Reviewed Original Research
Concepts: Diverse downstream tasks, Vision-language models, Pre-trained models, Representation of samples, Contrastive learning, Downstream tasks, Feature reweighting, Training data, Feature patterns, Model generalization, Group annotations, Pain points, Group labels, Annotation, Robustness, Classifier, CLIP, Features, Deep, Deployment, Benchmarks, Time-intensive, Code, Task, Learning
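For context on the "feature reweighting" and "group labels" concepts tagged above, the following is a minimal PyTorch sketch of the general idea of training a lightweight classifier on frozen vision-language embeddings with per-sample weights. It is an illustration only, not the paper's algorithm; `features`, `labels`, and `sample_weights` are placeholder tensors.

```python
import torch
import torch.nn as nn

# Illustrative only: a linear probe on frozen pre-trained embeddings, trained with
# per-sample weights (e.g., up-weighting samples presumed to belong to minority groups).
# All tensors below are random placeholders, not data or code from the paper.
features = torch.randn(256, 512)          # frozen embeddings (e.g., from a CLIP image encoder)
labels = torch.randint(0, 2, (256,))      # task labels
sample_weights = torch.ones(256)          # reweighting coefficients (uniform here)

probe = nn.Linear(512, 2)
optimizer = torch.optim.SGD(probe.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses for weighting

for _ in range(100):
    optimizer.zero_grad()
    per_sample_loss = loss_fn(probe(features), labels)
    (sample_weights * per_sample_loss).mean().backward()
    optimizer.step()
```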
Monte-Carlo Frequency Dropout for Predictive Uncertainty Estimation in Deep Learning
Zeevi T, Venkataraman R, Staib L, Onofrey J. Monte-Carlo Frequency Dropout for Predictive Uncertainty Estimation in Deep Learning. 2024 IEEE International Symposium on Biomedical Imaging (ISBI) 2024: 1-5. DOI: 10.1109/isbi56570.2024.10635511. Peer-Reviewed Original Research
Concepts: Artificial neural network, State-of-the-art, Medical image data, Predictive uncertainty estimation, Biomedical image data, Image data, Optimal artificial neural network, MC dropout, Dropout approach, Source-code, Drop-connect, Deep learning, Neural network, Signal space, Monte-Carlo, Prediction uncertainty, Uncertainty estimation, Diverse set, Comprehensive comparison, Prediction scenarios, Deep, Posterior predictive distribution, Repository, Decision-making, Network
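For context on the "MC dropout" and "posterior predictive distribution" concepts tagged above, here is a minimal sketch of standard Monte-Carlo dropout for predictive uncertainty estimation; the paper's frequency-domain variant is not reproduced here, and `TinyRegressor` with its dimensions is an illustrative placeholder.

```python
import torch
import torch.nn as nn

# Illustrative only: standard MC dropout, where dropout stays active at inference and
# repeated stochastic forward passes approximate the posterior predictive distribution.
class TinyRegressor(nn.Module):
    def __init__(self, in_dim=16, hidden=32, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),                 # remains stochastic during MC sampling
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Return predictive mean and standard deviation over stochastic forward passes."""
    model.train()                          # keep dropout layers active at test time
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = TinyRegressor()
x = torch.randn(8, 16)
mean, uncertainty = mc_dropout_predict(model, x)
```

The per-input standard deviation serves as a simple uncertainty score; higher values flag predictions that warrant caution in downstream decision-making.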