TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel

Abstract
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider. Implementation of the model will be available at http://fburl.com/TaBERT .
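As a rough illustration of how a table-aware encoder can serve as a feature representation layer for a neural semantic parser, the sketch below jointly encodes an NL question and one linearized table row with a generic BERT model from the Hugging Face transformers library. The model name, the linearization format, and the example data are assumptions chosen for illustration; this is not the official TaBERT implementation or API.

```python
# Illustrative sketch (NOT the official TaBERT API): jointly encode an NL question
# and a linearized table row with a generic BERT encoder, producing token-level
# features that a downstream semantic parser could consume.
from transformers import BertTokenizer, BertModel
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

question = "In which year did the team score the most points?"
# One table row linearized as "column | type | value" segments
# (a simplified stand-in for TaBERT's content-snapshot linearization).
row = "Year | real | 2004 [SEP] Points | real | 112"

inputs = tokenizer(question, row, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual representations over both the question and the table row.
features = outputs.last_hidden_state  # shape: (1, seq_len, hidden_size)
print(features.shape)
```

In a TaBERT-style setup, representations like these would be pooled into per-column and per-utterance vectors and fed to the parser in place of task-specific embeddings.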
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| Semantic parsing on WikiTableQuestions | MAPO + TaBERT-Large (K = 3) | Accuracy (Dev): 52.2; Accuracy (Test): 51.8 |
| Text-to-SQL on Spider | MAPO + TaBERT-Large (K = 3) | Exact Match Accuracy (Dev): 64.5 |