
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data

Pengcheng Yin Graham Neubig Wen-tau Yih Sebastian Riedel

Abstract

Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider. Implementation of the model will be available at http://fburl.com/TaBERT.
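As an illustration of how TaBERT can serve as a feature representation layer, the sketch below loads a pretrained checkpoint and jointly encodes an NL utterance with a table, roughly following the usage documented in the facebookresearch/tabert repository. The checkpoint path, example table, and utterance are placeholders, and the exact class and method names should be verified against the released code.

    # Minimal usage sketch (assumed checkpoint path and example data).
    from table_bert import TableBertModel, Table, Column

    # Load a pretrained TaBERT checkpoint (path is a placeholder).
    model = TableBertModel.from_pretrained('tabert_base_k3/model.bin')

    # A table is described by its columns (name, type, sample value) and rows.
    table = Table(
        id='List of countries by GDP',
        header=[
            Column('Nation', 'text', sample_value='United States'),
            Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
        ],
        data=[
            ['United States', '21,439,453'],
            ['China', '27,308,857'],
        ],
    ).tokenize(model.tokenizer)

    # The NL utterance to be grounded in the table.
    context = 'show me countries ranked by GDP'

    # Jointly encode the utterance and the table; the resulting context and
    # column representations can feed a downstream neural semantic parser.
    context_encoding, column_encoding, info_dict = model.encode(
        contexts=[model.tokenizer.tokenize(context)],
        tables=[table],
    )

Here context_encoding provides per-token representations of the utterance and column_encoding provides one vector per table column, which is the interface a semantic parser such as MAPO would consume in place of a plain BERT encoder.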

Code Repositories

facebookresearch/tabert (official, PyTorch)

Benchmarks

Benchmark: semantic-parsing-on-wikitablequestions
Methodology: MAPO + TaBERT-Large (K = 3)
Accuracy (Dev): 52.2
Accuracy (Test): 51.8

Benchmark: text-to-sql-on-spider
Methodology: MAPO + TaBERT-Large (K = 3)
Exact Match Accuracy (Dev): 64.5
