2025
GLAPAL-H: Global, Local, And Parts Aware Learner for Hydrocephalus Infection Diagnosis in Low-Field MRI
Mukherjee S, Templeton K, Tindimwebwa S, Lin P, Sutin J, Yu M, Peterson M, Truwit C, Schiff S, Monga V. GLAPAL-H: Global, Local, And Parts Aware Learner for Hydrocephalus Infection Diagnosis in Low-Field MRI. IEEE Transactions on Biomedical Engineering 2025, PP: 1-14. PMID: 40489263, PMCID: PMC12338915, DOI: 10.1109/tbme.2025.3578541. Peer-Reviewed Original Research.
Concepts: feature extraction; state-of-the-art alternatives; training loss function; multi-task architecture; global feature extraction; state-of-the-art; local feature extraction; classification task; CNN branches; segmentation masks; shallow CNN; holistic features; local features; training imagery; loss function; two-class; three-class; segmentation branches; architecture; customized approach; CNN; regularization; quality issues; post-infectious hydrocephalus
2024
Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising
Jang S, Pan T, Li Y, Heidari P, Chen J, Li Q, Gong K. Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising. IEEE Transactions on Medical Imaging 2024, 43: 2036-2049. PMID: 37995174, PMCID: PMC11111593, DOI: 10.1109/tmi.2023.3336237. Peer-Reviewed Original Research.
Concepts: multi-head self-attention; convolutional neural network; self-attention; signal-to-noise ratio; state-of-the-art deep learning architectures; global self-attention; state-of-the-art; local feature extraction; deep learning architecture; low signal-to-noise ratio; image denoising; channel information; channel-wise; learning architecture; feature extraction; neural network; transformer framework; computational cost; receptive fields; image quality; quantitative merit; denoising; framework; quantitative results; dataset
2022
Low-Dose Tau PET Imaging Based on Swin Restormer with Diagonally Scaled Self-Attention
Jang S, Lois C, Becker J, Thibault E, Li Y, Price J, Fakhri G, Li Q, Johnson K, Gong K. Low-Dose Tau PET Imaging Based on Swin Restormer with Diagonally Scaled Self-Attention. 2022, 00: 1-3. DOI: 10.1109/nss/mic44845.2022.10399169. Peer-Reviewed Original Research.
Concepts: convolutional neural network; self-attention mechanism; self-attention; transformer architecture; computer vision tasks; local feature extraction; long-range information; vision tasks; denoising performance; Swin Transformer; feature extraction; image datasets; UNet structure; neural network; Swin; computational cost; receptive fields; image quality; map calculation; network structure; architecture; PET image quality; channel dimensions; quantitative evaluation; denoising