WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations
Mohammad Taher Pilehvar; Jose Camacho-Collados

Abstract
By design, word embeddings are unable to model the dynamic nature of word semantics, i.e., the property of words to correspond to potentially different meanings depending on context. To address this limitation, dozens of specialized meaning representation techniques, such as sense embeddings or contextualized embeddings, have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks specifically target the dynamic semantics of words. In this paper we show that existing models have surpassed the performance ceiling of the standard evaluation dataset for this purpose, Stanford Contextual Word Similarity, and we highlight its shortcomings. To address the lack of a suitable benchmark, we put forward a large-scale Word-in-Context dataset, called WiC, based on annotations curated by experts, for generic evaluation of context-sensitive representations. WiC is released at https://pilehvar.github.io/wic/.
Benchmarks
Word Sense Disambiguation on Words in Context:

| Model | Accuracy (%) |
|---|---|
| Sentence LSTM | 53.1 |
| DeConf | 58.7 |
| ELMo | 57.7 |
| SW2V | 58.1 |
| Context2vec | 59.3 |
| BERT-large 340M | 65.5 |
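WiC is a binary classification task: given two sentences containing the same target word, decide whether the word is used with the same meaning in both. A common baseline for the contextualized models above thresholds the cosine similarity between the target word's two contextual vectors. The sketch below illustrates that decision rule with toy vectors; the vectors, function names, and the 0.5 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def wic_predict(vec_a, vec_b, threshold=0.5):
    # Baseline decision rule: predict "same sense" (True) when the
    # contextual vectors of the target word are similar enough.
    # The threshold here is a hypothetical value; in practice it is
    # tuned on the development set.
    return cosine(vec_a, vec_b) >= threshold

# Toy vectors standing in for contextualized embeddings (e.g. from
# ELMo, Context2vec, or BERT) of the same target word in two sentences.
same_sense = (np.array([1.0, 0.9, 1.1]), np.array([1.0, 1.0, 1.0]))
diff_sense = (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))

print(wic_predict(*same_sense))  # high cosine -> "same sense"
print(wic_predict(*diff_sense))  # near-orthogonal -> "different sense"
```

A model's accuracy on WiC is then simply the fraction of sentence pairs for which this True/False prediction matches the gold label.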