2024
MolLM: a unified language model for integrating biomedical text with 2D and 3D molecular representations
Tang X, Tran A, Tan J, Gerstein M. MolLM: a unified language model for integrating biomedical text with 2D and 3D molecular representations. Bioinformatics 2024, 40: i357-i368. PMID: 38940177, PMCID: PMC11256921, DOI: 10.1093/bioinformatics/btae260. Peer-Reviewed Original Research
Concepts: Transformer encoder, Downstream tasks, Language model, Biomedical text, Self-supervised pre-training, Explicit 3D representation, Representation improves performance, Deep learning models, Representation of molecules, Contrastive learning, Supervisory signal, Extract embeddings, Representation capability, Joint representation, Biomedical domain, Pre-training, Textual data, Learning models, Molecular representations, Model weights, Jupyter Notebook, Step-by-step guidance, Encoding, Property prediction, Structural information
2023
Transformer with convolution and graph-node co-embedding: An accurate and interpretable vision backbone for predicting gene expressions from local histopathological images
Xiao X, Kong Y, Li R, Wang Z, Lu H. Transformer with convolution and graph-node co-embedding: An accurate and interpretable vision backbone for predicting gene expressions from local histopathological image. Medical Image Analysis 2023, 91: 103040. PMID: 38007979, DOI: 10.1016/j.media.2023.103040. Peer-Reviewed Original Research
Concepts: Histopathological images, Vision backbones, Global features, Combination of convolutional layers, Encoding of local features, Graph neural networks, Histopathological image analysis, GPU consumption, Transformer encoder, Convolutional layers, Graph-nodes, Local features, Neural network, Minimal memory, Co-embedding, Low interpretability, Personal computer, Pathological images, Data model, Slide images, Health applications, Superior accuracy, Model complexity, Information prediction, Histological images