2025
Correction to “A Hybrid Transformer Architecture with a Quantized Self-Attention Mechanism Applied to Molecular Generation”
Smaldone A, Shee Y, Kyro G, Farag M, Chandani Z, Kyoseva E, Batista V. Correction to "A Hybrid Transformer Architecture with a Quantized Self-Attention Mechanism Applied to Molecular Generation". Journal of Chemical Theory and Computation 2025, 21: 7726. PMID: 40744647, DOI: 10.1021/acs.jctc.5c01204. Peer-Reviewed Original Research.
Concepts: Molecular generation, Transformer architecture
A Hybrid Transformer Architecture with a Quantized Self-Attention Mechanism Applied to Molecular Generation
Smaldone A, Shee Y, Kyro G, Farag M, Chandani Z, Kyoseva E, Batista V. A Hybrid Transformer Architecture with a Quantized Self-Attention Mechanism Applied to Molecular Generation. Journal of Chemical Theory and Computation 2025, 21: 5143-5154. PMID: 40333363, DOI: 10.1021/acs.jctc.5c00331. Peer-Reviewed Original Research.
Concepts: Natural language processing, Self-attention, Hybrid transformer architecture, Self-attention matrix, Self-attention mechanism, Pairs of tokens, Machine learning models, Transformer decoder, Transformer architecture, Computational overhead, Language model, Condition generator, SMILES strings, Time complexity, Language processing, QM9 dataset, Learning models, Dot product, Input sequence, Molecular generation, Attention scores, Architecture, Theoretical analysis, Sequence length, NVIDIA
2024
Kernel-elastic autoencoder for molecular design
Li H, Shee Y, Allen B, Maschietto F, Morgunov A, Batista V. Kernel-elastic autoencoder for molecular design. PNAS Nexus 2024, 3: pgae168. PMID: 38689710, PMCID: PMC11059255, DOI: 10.1093/pnasnexus/pgae168. Peer-Reviewed Original Research.
Concepts: Maximum mean discrepancy, Mean discrepancy, Transformer architecture, Condition generator, Weighted reconstruction, Training dataset, Generative model, Generation approach, Docking applications, Molecular design, Autoencoder, Accurate reconstruction, VAE, Spectrum of applications, AutoDock Vina, Enhanced performance, Design, Dataset, Architecture, Generation performance, Benchmarks, Applications, Glide score, Reconstruction, Generation behavior
2023
Detection of Large-Droplet Macrovesicular Steatosis in Donor Livers Based on Segment-Anything Model
Tang H, Jiao J, Lin J, Zhang X, Sun N. Detection of Large-Droplet Macrovesicular Steatosis in Donor Livers Based on Segment-Anything Model. Laboratory Investigation 2023, 104: 100288. PMID: 37977550, DOI: 10.1016/j.labinv.2023.100288. Peer-Reviewed Original Research.
Concepts: Liver disease, Macrovesicular steatosis, Artificial intelligence algorithms, Artificial intelligence models, End-stage liver disease, Rule-based algorithm, Liver histology analysis, Liver transplant complications, Acute liver failure, Primary hepatic malignancy, Large fat vacuoles, Intelligence algorithms, Detection model, Transformer architecture, Intelligence models, Transplant complications, Liver transplantation, Liver failure, Liver biopsy, Hepatic malignancies, Fat vacuoles, Donor organs, Effective treatment, Prior knowledge, Algorithm
2022
Low-Dose Tau PET Imaging Based on Swin Restormer with Diagonally Scaled Self-Attention
Jang S, Lois C, Becker J, Thibault E, Li Y, Price J, Fakhri G, Li Q, Johnson K, Gong K. Low-Dose Tau PET Imaging Based on Swin Restormer with Diagonally Scaled Self-Attention. 2022: 1-3. DOI: 10.1109/nss/mic44845.2022.10399169. Peer-Reviewed Original Research.
Concepts: Convolutional neural network, Self-attention mechanism, Self-attention, Transformer architecture, Computer vision tasks, Local feature extraction, Long-range information, Vision tasks, Denoising performance, Swin Transformer, Feature extraction, Image datasets, UNet structure, Neural network, Swin, Computational cost, Receptive fields, Image quality, Map calculation, Network structure, Architecture, PET image quality, Channel dimensions, Quantitative evaluation, Denoising
Development and Validation of a Model to Identify Critical Brain Injuries Using Natural Language Processing of Text Computed Tomography Reports
Torres-Lopez VM, Rovenolt GE, Olcese AJ, Garcia GE, Chacko SM, Robinson A, Gaiser E, Acosta J, Herman AL, Kuohn LR, Leary M, Soto AL, Zhang Q, Fatima S, Falcone GJ, Payabvash MS, Sharma R, Struck AF, Sheth KN, Westover MB, Kim JA. Development and Validation of a Model to Identify Critical Brain Injuries Using Natural Language Processing of Text Computed Tomography Reports. JAMA Network Open 2022, 5: e2227109. PMID: 35972739, PMCID: PMC9382443, DOI: 10.1001/jamanetworkopen.2022.27109. Peer-Reviewed Original Research.
Concepts: Natural language processing, F-score, Test data sets, Language processing, Independent test data sets, Data sets, Bidirectional Encoder Representations, Acute brain injury, Large data sets, Head CT, Brain injury, NLP tools, F1 score, NER model, Transformer architecture, Clinical text, Encoder Representations, NLP algorithm, NLP models, CT reports, Custom dictionary, Training set, Cross-validation performance, Performance metrics, Available new tools
2021
Extracting Angina Symptoms from Clinical Notes Using Pre-Trained Transformer Architectures.
Eisman A, Shah N, Eickhoff C, Zerveas G, Chen E, Wu W, Sarkar I. Extracting Angina Symptoms from Clinical Notes Using Pre-Trained Transformer Architectures. AMIA Annual Symposium Proceedings 2021, 2020: 412-421. PMID: 33936414, PMCID: PMC8075440. Peer-Reviewed Original Research.
Concepts: Chest pain, Anginal symptoms, Palliation of chest pain, Substernal chest pain, Shortness of breath, Primary care physicians' notes, Consecutive patients, Cardiac testing, Cardiac risk, Angina symptoms, Pain, Cardiovascular management, Physician notes, Symptoms, Clinical notes, Illness section, Transformer architecture, Sample size