MolFM: A Multimodal Molecular Foundation Model

Yizhen Luo; Kai Yang; Massimo Hong; Xing Yi Liu; Zaiqing Nie

Abstract

Molecular knowledge resides in three different modalities of information sources: molecular structures, biomedical documents, and knowledge bases. Effectively incorporating molecular knowledge from these modalities is of paramount significance for biomedical research. However, existing multimodal molecular foundation models exhibit limitations in capturing the intricate connections between molecular structures and texts, and, more importantly, none of them attempt to leverage the wealth of molecular expertise contained in knowledge graphs. In this study, we introduce MolFM, a multimodal molecular foundation model designed to facilitate joint representation learning from molecular structures, biomedical texts, and knowledge graphs. We propose cross-modal attention between atoms of molecular structures, neighbors of molecule entities, and semantically related texts to facilitate cross-modal comprehension. We provide a theoretical analysis showing that our cross-modal pre-training captures local and global molecular knowledge by minimizing the distance in feature space between different modalities of the same molecule, as well as between molecules sharing similar structures or functions. MolFM achieves state-of-the-art performance on various downstream tasks. On cross-modal retrieval, MolFM outperforms existing models with absolute gains of 12.13% and 5.04% under the zero-shot and fine-tuning settings, respectively. Furthermore, qualitative analysis showcases MolFM's implicit ability to provide grounding from molecular substructures and knowledge graphs. Code and models are available at https://github.com/BioFM/OpenBioMed.
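The pre-training described above aligns structure, text, and knowledge-graph representations of the same molecule in a shared feature space. Below is a minimal, illustrative sketch of such a tri-modal contrastive alignment term; it is not the official MolFM implementation, and the function names, temperature value, and the assumption of pre-computed per-modality embeddings are all illustrative.

```python
# Illustrative sketch (NOT the official MolFM code): a symmetric
# InfoNCE-style loss that pulls together the structure, text, and
# knowledge-graph embeddings of the same molecule, in the spirit of the
# feature-space distance minimization described in the abstract.
import torch
import torch.nn.functional as F


def pairwise_infonce(a: torch.Tensor, b: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings.

    a, b: (batch, dim) embeddings of the same molecules in two modalities;
    row i of `a` and row i of `b` form the positive pair.
    """
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature              # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Matching rows/columns (same molecule, different modality) are positives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def trimodal_alignment_loss(struct_emb, text_emb, kg_emb):
    """Average the pairwise losses over the three modality pairs."""
    return (pairwise_infonce(struct_emb, text_emb) +
            pairwise_infonce(struct_emb, kg_emb) +
            pairwise_infonce(text_emb, kg_emb)) / 3.0
```

This sketch covers only the alignment objective; MolFM additionally uses cross-modal attention over atoms, knowledge-graph neighbors, and related texts, which is not shown here.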

Code Repositories

pharmolix/openbiomed (PyTorch)
biofm/openbiomed (official; PyTorch)

Benchmarks

Benchmark: Molecule Captioning on ChEBI-20

Metric      MolFM-Small   MolFM-Base
BLEU-2      54.2          58.5
BLEU-4      45.2          49.8
METEOR      56.4          60.7
ROUGE-1     62.3          65.3
ROUGE-2     46.9          50.8
ROUGE-L     56.2          59.4
Text2Mol    55.7          57.6

Benchmark: Text-Based De Novo Molecule Generation

Metric           MolFM-Small   MolFM-Base
BLEU             80.3          82.2
Exact Match      16.9          21.0
Levenshtein      20.868        19.445
MACCS FTS        83.4          85.4
Morgan FTS       72.1          75.8
RDK FTS          66.2          69.7
Text2Mol         57.3          58.3
Validity         85.9          89.2
Parameter Count  13.62M        296.2M
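For reference, the MACCS, Morgan, and RDK FTS rows above are fingerprint Tanimoto similarities between each generated molecule and its ground truth. The sketch below computes them with RDKit; the metric definitions are standard, but the helper name and fingerprint settings (e.g., Morgan radius 2 with 2048 bits) are assumptions rather than the paper's exact evaluation configuration.

```python
# Illustrative sketch (not the paper's evaluation script): fingerprint
# Tanimoto similarity (FTS) metrics for generated molecules using RDKit.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, MACCSkeys


def fts_metrics(gt_smiles: str, gen_smiles: str) -> dict:
    """Tanimoto similarity between a ground-truth and a generated molecule
    under three common fingerprints. Returns None values when either SMILES
    fails to parse (invalid generations also lower the Validity metric)."""
    gt = Chem.MolFromSmiles(gt_smiles)
    gen = Chem.MolFromSmiles(gen_smiles)
    if gt is None or gen is None:
        return {"maccs": None, "morgan": None, "rdk": None}
    return {
        "maccs": DataStructs.TanimotoSimilarity(
            MACCSkeys.GenMACCSKeys(gt), MACCSkeys.GenMACCSKeys(gen)),
        "morgan": DataStructs.TanimotoSimilarity(
            AllChem.GetMorganFingerprintAsBitVect(gt, 2, nBits=2048),
            AllChem.GetMorganFingerprintAsBitVect(gen, 2, nBits=2048)),
        "rdk": DataStructs.TanimotoSimilarity(
            Chem.RDKFingerprint(gt), Chem.RDKFingerprint(gen)),
    }


# Identical molecules score 1.0 on every fingerprint.
print(fts_metrics("CCO", "CCO"))
```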
