DocBERT: BERT for Document Classification
Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy Lin

Abstract
We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. The primary contribution of our paper is improved baselines that can provide the foundation for future work.
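As a rough illustration of the classification setup the abstract describes, the sketch below fine-tunes BERT with a single linear layer over the pooled [CLS] representation, using a sigmoid per label to handle the multi-label case and truncating documents to BERT's 512-token limit. It uses the HuggingFace `transformers` library rather than the authors' released code; the class name `BertDocClassifier` and the training details are illustrative assumptions.

```python
# A minimal sketch of a BERT document classifier, assuming the HuggingFace
# `transformers` API; names here are ours, not from the paper's codebase.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertDocClassifier(nn.Module):
    """BERT encoder with one linear layer over the pooled [CLS] token.

    A per-label sigmoid (via BCEWithLogitsLoss) covers the multi-label
    case; documents longer than 512 tokens are simply truncated.
    """
    def __init__(self, num_labels: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # The pooled [CLS] representation feeds the label classifier.
        return self.classifier(outputs.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertDocClassifier(num_labels=54)  # e.g., AAPD has 54 labels
batch = tokenizer(["some long document ..."], truncation=True,
                  max_length=512, padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))  # dummy targets
```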
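The distillation step can be sketched in the same spirit: a small bidirectional LSTM student is trained against precomputed BERT-large teacher logits with a Hinton-style objective. The architecture sizes, the mean-squared-error term on logits, and the 0.5 loss weighting below are illustrative assumptions; the paper's exact KD-LSTMreg setup may differ.

```python
# A hedged sketch of logit distillation into a small BiLSTM student; the
# hyperparameters and loss weighting are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMStudent(nn.Module):
    """Single-layer bidirectional LSTM over word embeddings, max-pooled."""
    def __init__(self, vocab_size: int, num_labels: int,
                 embed_dim: int = 300, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        pooled, _ = hidden.max(dim=1)  # max-pool over time steps
        return self.fc(pooled)

def distillation_loss(student_logits, teacher_logits, targets, alpha=0.5):
    # MSE on logits plus the usual supervised multi-label BCE term;
    # the alpha=0.5 weighting is an assumption for illustration.
    kd = F.mse_loss(student_logits, teacher_logits)
    ce = F.binary_cross_entropy_with_logits(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce

student = BiLSTMStudent(vocab_size=30000, num_labels=54)
ids = torch.randint(1, 30000, (8, 400))        # a batch of token ids
teacher_logits = torch.randn(8, 54)            # precomputed BERT-large logits
targets = torch.randint(0, 2, (8, 54)).float() # gold multi-label targets
loss = distillation_loss(student(ids), teacher_logits, targets)
loss.backward()
```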
Benchmarks
| Benchmark | Model | Metric (%) |
|---|---|---|
| Document classification on AAPD | KD-LSTMreg | F1: 72.9 |
| Document classification on Reuters-21578 | KD-LSTMreg | F1: 88.9 |
| Document classification on Yelp-14 | KD-LSTMreg | Accuracy: 69.4 |
| Text classification on arXiv-10 | DocBERT | Accuracy: 76.4 |