SPECTER: Document-level Representation Learning using Citation-informed Transformers
Arman Cohan Sergey Feldman Iz Beltagy Doug Downey Daniel S. Weld

Abstract
Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, the embeddings power strong performance on end tasks. We propose SPECTER, a new method to generate document-level embeddings of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction to document classification and recommendation. We show that SPECTER outperforms a variety of competitive baselines on the benchmark.
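Because SPECTER is used as a frozen feature extractor, obtaining document embeddings for downstream tasks takes only a few lines. The sketch below is one possible way to do this, assuming the allenai/specter checkpoint released by the authors on the Hugging Face hub and the paper's input format (title and abstract joined by the separator token, with the first-token output used as the document embedding); the example paper text is illustrative only.

```python
# Minimal sketch: embedding papers with a pretrained SPECTER checkpoint.
# Assumes the "allenai/specter" model on the Hugging Face hub.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

papers = [
    {"title": "SPECTER: Document-level Representation Learning using "
              "Citation-informed Transformers",
     "abstract": "Representation learning is a critical ingredient ..."},
]

# Concatenate title and abstract with the tokenizer's separator token.
texts = [p["title"] + tokenizer.sep_token + p["abstract"] for p in papers]
inputs = tokenizer(texts, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The first-token ([CLS]) hidden state is the fixed document embedding,
# usable directly as features for classification, recommendation, etc.
embeddings = outputs.last_hidden_state[:, 0, :]
print(embeddings.shape)  # (num_papers, hidden_size)
```

The resulting vectors can be fed to a simple classifier or a nearest-neighbor index without any further fine-tuning of the encoder, which is the usage the paper evaluates on SciDocs.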
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| document-classification-on-scidocs-mag | SPECTER | F1 (micro): 82.0 |
| document-classification-on-scidocs-mesh | SPECTER | F1 (micro): 86.4 |
| representation-learning-on-scidocs | SPECTER | Avg.: 80.0 |
| representation-learning-on-scidocs | SciBERT | Avg.: 59.6 |
| representation-learning-on-scidocs | Citeomatic | Avg.: 76.0 |