MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter
Zhiyuan Liu; Sihang Li; Yanchen Luo; Hao Fei; Yixin Cao; Kenji Kawaguchi; Xiang Wang; Tat-Seng Chua

Abstract
Language Models (LMs) have demonstrated impressive molecule understanding ability on various 1D text-related tasks. However, they inherently lack 2D graph perception - a critical ability of human professionals in comprehending molecules' topological structures. To bridge this gap, we propose MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter. MolCA enables an LM (e.g., Galactica) to understand both text- and graph-based molecular contents via the cross-modal projector. Specifically, the cross-modal projector is implemented as a Q-Former to connect a graph encoder's representation space and an LM's text space. Further, MolCA employs a uni-modal adapter (i.e., LoRA) for the LM's efficient adaptation to downstream tasks. Unlike previous studies that couple an LM with a graph encoder via cross-modal contrastive learning, MolCA retains the LM's ability of open-ended text generation and augments it with 2D graph information. To showcase its effectiveness, we extensively benchmark MolCA on tasks of molecule captioning, IUPAC name prediction, and molecule-text retrieval, on which MolCA significantly outperforms the baselines. Our codes and checkpoints can be found at https://github.com/acharkq/MolCA.
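To make the architecture in the abstract concrete, below is a minimal, illustrative sketch of the described wiring: learnable queries cross-attend to a graph encoder's node embeddings (the Q-Former-style cross-modal projector), the resulting tokens are projected into the LM's text embedding space as soft prompts, and an LM layer is adapted with LoRA. This is not the authors' implementation; the module names (`QFormerBlock`, `LoRALinear`), dimensions, and hyperparameters are assumptions for the example.

```python
# Hedged sketch of the MolCA-style wiring; sizes and names are illustrative.
import torch
import torch.nn as nn


class QFormerBlock(nn.Module):
    """Learnable queries cross-attend to graph node embeddings (Q-Former idea)."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, queries, graph_tokens):
        # Queries attend to the graph encoder's node representations.
        attn_out, _ = self.cross_attn(queries, graph_tokens, graph_tokens)
        x = self.ln1(queries + attn_out)
        return self.ln2(x + self.ffn(x))


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the LM weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())


# Toy usage: project 2D graph features into an LM's text embedding space.
graph_dim, lm_dim, n_queries = 256, 512, 8
node_feats = torch.randn(1, 30, graph_dim)           # stand-in for a GNN's node embeddings
queries = nn.Parameter(torch.randn(1, n_queries, graph_dim))

projector = QFormerBlock(graph_dim)
to_lm_space = nn.Linear(graph_dim, lm_dim)           # maps query outputs into the LM space
soft_prompt = to_lm_space(projector(queries, node_feats))  # (1, 8, lm_dim), prepended to text embeddings

lm_layer = LoRALinear(nn.Linear(lm_dim, lm_dim))     # LoRA-wrapped layer inside the frozen LM
print(lm_layer(soft_prompt).shape)                   # torch.Size([1, 8, 512])
```

In this reading, only the projector, the query tokens, and the LoRA parameters are trained, which matches the abstract's claim of efficient adaptation while the LM retains its open-ended text generation ability.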
Code Repositories
Official implementation: https://github.com/acharkq/MolCA
Benchmarks
| Benchmark | Method | BLEU-2 | BLEU-4 | METEOR | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|---|---|---|
| molecule-captioning-on-chebi-20 | MolCA, Galac125M | 61.6 | 52.9 | 63.9 | 67.4 | 53.3 | 61.5 |
| molecule-captioning-on-chebi-20 | MolCA, Galac1.3B | 62.0 | 53.1 | 65.1 | 68.1 | 53.7 | 61.8 |