Se-In Jang
Postdoctoral Associate
Publications
2024
Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising
Jang S, Pan T, Li Y, Heidari P, Chen J, Li Q, Gong K. Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising. IEEE Transactions on Medical Imaging 2024, 43: 2036-2049. PMID: 37995174, PMCID: PMC11111593, DOI: 10.1109/tmi.2023.3336237. Peer-Reviewed Original Research
2023
Disentangled Representations in Local-Global Contexts for Arabic Dialect Identification
Alhakeem Z, Jang S, Kang H. Disentangled Representations in Local-Global Contexts for Arabic Dialect Identification. IEEE/ACM Transactions on Audio, Speech, and Language Processing 2023, 32: 879-890. DOI: 10.1109/taslp.2023.3341006. Peer-Reviewed Original Research
TauPETGen: Text-Conditional Tau PET Image Synthesis Based on Latent Diffusion Models
Jang S, Gomez C, Thibault E, Becker J, Dong Y, Normandin M, Price J, Johnson K, Fakhri G, Gong K. TauPETGen: Text-Conditional Tau PET Image Synthesis Based on Latent Diffusion Models. 2023, 00: 1-1. DOI: 10.1109/nssmicrtsd49126.2023.10338710. Peer-Reviewed Original Research
Attenuation correction for PET imaging using conditional denoising diffusion probabilistic model
Dong Y, Jang S, Han P, Johnson K, Ma C, Fakhri G, Li Q, Gong K. Attenuation correction for PET imaging using conditional denoising diffusion probabilistic model. 2023, 00: 1-1. DOI: 10.1109/nssmicrtsd49126.2023.10338188. Peer-Reviewed Original Research
SwinCross: Cross‐modal Swin transformer for head‐and‐neck tumor segmentation in PET/CT images
Li G, Chen J, Jang S, Gong K, Li Q. SwinCross: Cross‐modal Swin transformer for head‐and‐neck tumor segmentation in PET/CT images. Medical Physics 2023, 51: 2096-2107. PMID: 37776263, PMCID: PMC10939987, DOI: 10.1002/mp.16703. Peer-Reviewed Original Research
2022
Low-Dose Tau PET Imaging Based on Swin Restormer with Diagonally Scaled Self-Attention
Jang S, Lois C, Becker J, Thibault E, Li Y, Price J, Fakhri G, Li Q, Johnson K, Gong K. Low-Dose Tau PET Imaging Based on Swin Restormer with Diagonally Scaled Self-Attention. 2022, 00: 1-3. DOI: 10.1109/nss/mic44845.2022.10399169. Peer-Reviewed Original Research
Investigation of Network Architecture for Multimodal Head-and-Neck Tumor Segmentation
Li Y, Chen J, Jang S, Gong K, Li Q. Investigation of Network Architecture for Multimodal Head-and-Neck Tumor Segmentation. 2022, 00: 1-3. DOI: 10.1109/nss/mic44845.2022.10399293. Peer-Reviewed Original Research
A Noise-Level-Aware Framework for PET Image Denoising
Li Y, Cui J, Chen J, Zeng G, Wollenweber S, Jansen F, Jang S, Kim K, Gong K, Li Q. A Noise-Level-Aware Framework for PET Image Denoising. Lecture Notes in Computer Science 2022, 13587: 75-83. DOI: 10.1007/978-3-031-17247-2_8. Peer-Reviewed Original Research