2025
Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT.
Tran A, Karam G, Zeevi D, Qureshi A, Malhotra A, Majidi S, Murthy S, Park S, Kontos D, Falcone G, Sheth K, Payabvash S. Improving the Robustness of Deep-Learning Models in Predicting Hematoma Expansion from Admission Head CT. American Journal of Neuroradiology 2025, ajnr.a8650. PMID: 39794133, DOI: 10.3174/ajnr.a8650. Peer-Reviewed Original Research. Concepts: Fast Gradient Sign Method; Projected gradient descent-type attacks; Adversarial attacks; Adversarial images; Adversarial training; Data perturbation; Deep-learning models; Deep-learning model performance; Model robustness; Deploying deep-learning models; Convolutional neural networks; Training set; Gradient descent; Receiver operating characteristic; Acute intracerebral hemorrhage; Hematoma expansion; Multi-threshold segmentation; Threshold-based segmentation; Antihypertensive Treatment of Acute Cerebral Hemorrhage.
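The Fast Gradient Sign Method (FGSM) and projected-gradient-type attacks named in the entry above can be illustrated with a minimal sketch. This is a hypothetical toy example against a fixed logistic-regression "model", not the paper's code, model, or CT data:

```python
import numpy as np

# Toy FGSM sketch: perturb input x in the direction that *increases*
# the cross-entropy loss of a fixed logistic-regression model (w, b).
# All data here are synthetic and for illustration only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(dL/dx)."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # dL/dx for the logistic (cross-entropy) loss
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0                         # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# The attack lowers the model's confidence in the true class:
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))   # True
```

Iterating this step and projecting back into an eps-ball after each iteration gives the projected-gradient-descent (PGD) family of attacks the entry refers to.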
2024
Three-Dimensional Reconstruction Pre-Training as a Prior to Improve Robustness to Adversarial Attacks and Spurious Correlation
Yamada Y, Zhang F, Kluger Y, Yildirim I. Three-Dimensional Reconstruction Pre-Training as a Prior to Improve Robustness to Adversarial Attacks and Spurious Correlation. Entropy 2024, 26: 258. PMID: 38539769, PMCID: PMC10968904, DOI: 10.3390/e26030258. Peer-Reviewed Original Research. Concepts: Adversarial training; Pre-training; Adversarial attacks; Adversarial examples; Adversarial robustness; Robustness of image classifiers; Computational models of human vision; Weight initialization; Dataset settings; Data augmentation; Background texture; Spurious correlations; Image formation; ShapeNet.
2023
Gradient-Based Enhancement Attacks in Biomedical Machine Learning
Rosenblatt M, Dadashkarimi J, Scheinost D. Gradient-Based Enhancement Attacks in Biomedical Machine Learning. Lecture Notes in Computer Science 2023, 14242: 301-312. DOI: 10.1007/978-3-031-45249-9_29. Peer-Reviewed Original Research. Concepts: Biomedical machine learning; Adversarial attacks; Data manipulation; Enhancement framework; Enhanced data; Enhanced vs. original datasets; Prediction performance; Feature similarity; Simple neural networks; Data provenance tracking; Medical imaging; Trustworthiness.
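The "enhancement attack" idea in the entry above can be sketched in miniature. This is my own toy construction of the general concept, not the authors' framework: nudge each sample a small step *down* the loss gradient of a fixed linear classifier, so the manipulated data appear easier to classify while remaining close to the originals:

```python
import numpy as np

# Toy gradient-based "enhancement" manipulation: move each sample a
# bounded step that decreases the classifier's loss on it, inflating
# apparent prediction performance. Synthetic data, illustration only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def enhance(X, y, w, b, eps):
    p = sigmoid(X @ w + b)
    grad_X = (p - y)[:, None] * w[None, :]   # dL/dX, one row per sample
    return X - eps * np.sign(grad_X)         # descend the loss: "easier" data

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
w = rng.normal(size=5)
b = 0.0
y = (rng.random(300) < sigmoid(X @ w + b)).astype(float)  # noisy labels

X_enh = enhance(X, y, w, b, eps=0.2)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_enh = np.mean((sigmoid(X_enh @ w + b) > 0.5) == y)
print(acc_enh >= acc)   # True: manipulated data look easier to classify
```

Because each sample's margin moves toward its own label, accuracy on the manipulated data can only rise, while the per-feature change stays within eps; this is why such manipulations can be hard to spot without provenance tracking.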
2022
Robust convolutional neural networks against adversarial attacks on medical images
Shi X, Peng Y, Chen Q, Keenan T, Thavikulwat A, Lee S, Tang Y, Chew E, Summers R, Lu Z. Robust convolutional neural networks against adversarial attacks on medical images. Pattern Recognition 2022, 132: 108923. DOI: 10.1016/j.patcog.2022.108923. Peer-Reviewed Original Research. Concepts: Convolutional neural networks; Medical images; Medical image modalities; Adversarial attacks; Adversarial perturbations; Defense methods; Feature representation; Noisy features; Sparsity; Security risks; Real-world scenarios; Human experts; Performance deterioration; Medical applications.
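A standard baseline for the defenses discussed across these entries is adversarial training: generate perturbed inputs on the fly and train on them. The sketch below is a generic illustrative loop on a toy logistic-regression model (my own assumptions throughout), not the defense proposed in the paper above:

```python
import numpy as np

# Generic adversarial-training loop: at each step, craft FGSM
# perturbations of the batch and take the gradient step on the
# perturbed data instead of the clean data. Synthetic toy example.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=300, seed=0):
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.normal(size=X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_X = (p - y)[:, None] * w[None, :]   # dL/dX per sample
        X_adv = X + eps * np.sign(grad_X)        # FGSM on the batch
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / n      # update on adversarial data
        b -= lr * np.mean(p_adv - y)
    return w, b

# Toy two-cluster data with a linear decision rule
rng = np.random.default_rng(1)
shift = np.where(rng.random(200) < 0.5, 2.0, -2.0)[:, None]
X = rng.normal(size=(200, 2)) + shift
y = (X.sum(axis=1) > 0).astype(float)

w, b = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(acc)
```

Training on worst-case perturbations within an eps-ball trades a little clean accuracy for robustness inside that ball, which is the cost-benefit these papers evaluate on medical images.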