Text Understanding with the Attention Sum Reader Network
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, Jan Kleindienst

Abstract
Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques, which currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context, as opposed to computing the answer from a blended representation of words in the document, as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. An ensemble of our models sets a new state of the art on all evaluated datasets.
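The core idea described in the abstract can be sketched as follows: score each context position against the question, softmax the scores into a distribution, and then sum the probability mass over repeated occurrences of each candidate answer word (the "attention sum"). This is a minimal illustration only; the function and variable names are hypothetical, and the real model computes the encodings with recurrent networks rather than taking them as inputs.

```python
import numpy as np

def attention_sum(context_enc, query_enc, context_tokens, candidates):
    """Attention-sum sketch (illustrative, not the authors' code).

    context_enc: (T, d) array, one encoding per context token
    query_enc:   (d,) array, question encoding
    context_tokens: list of T token strings
    candidates:  candidate answer words appearing in the context
    """
    # Attention score for each position: dot product with the question.
    scores = context_enc @ query_enc
    # Softmax over positions (shifted for numerical stability).
    exp_scores = np.exp(scores - scores.max())
    probs = exp_scores / exp_scores.sum()
    # Sum attention over all occurrences of each candidate word.
    totals = {
        c: sum(p for tok, p in zip(context_tokens, probs) if tok == c)
        for c in candidates
    }
    # Answer: candidate with the highest summed attention.
    return max(totals, key=totals.get)
```

For example, if the word "mary" occurs twice in the context and each occurrence receives moderate attention, its summed probability can exceed that of a word that occurs once with higher attention, which is exactly what distinguishes this pointer-style model from blended-representation readers.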
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| open-domain-question-answering-on-searchqa | ASR | N-gram F1: 22.8; Unigram Acc: 41.3 |
| question-answering-on-childrens-book-test | AS Reader (greedy) | Accuracy-CN: 67.5%; Accuracy-NE: 71.0% |
| question-answering-on-childrens-book-test | AS Reader (avg) | Accuracy-CN: 68.9%; Accuracy-NE: 70.6% |
| question-answering-on-cnn-daily-mail | AS Reader (ensemble) | CNN: 75.4; Daily Mail: 77.7 |
| question-answering-on-cnn-daily-mail | AS Reader (single model) | CNN: 69.5; Daily Mail: 73.9 |