
FRAGE: Frequency-Agnostic Word Representation

Chengyue Gong*1 (cygong@pku.edu.cn), Di He*2 (di_he@pku.edu.cn), Xu Tan3 (xu.tan@microsoft.com), Tao Qin3 (taoqin@microsoft.com), Liwei Wang2,4 (wanglw@cis.pku.edu.cn), Tie-Yan Liu3 (tie-yan.liu@microsoft.com)

Abstract

Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks. Although it is widely accepted that words with similar semantics should be close to each other in the embedding space, we find that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space, and the embedding of a rare word and a popular word can be far from each other even if they are semantically similar. This makes learned word embeddings ineffective, especially for rare words, and consequently limits the performance of these neural network models. In this paper, we develop a neat, simple yet effective way to learn FRequency-AGnostic word Embedding (FRAGE) using adversarial training. We conducted comprehensive studies on ten datasets across four natural language processing tasks, including word similarity, language modeling, machine translation and text classification. Results show that with FRAGE, we achieve higher performance than the baselines in all tasks.
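The abstract describes training embeddings adversarially against a discriminator that predicts word frequency, so that high- and low-frequency words become indistinguishable in the embedding space. The following is a hypothetical minimal sketch of that idea (not the paper's actual architecture or hyperparameters): a logistic-regression discriminator is trained to separate the two frequency groups, while the embeddings receive the reversed gradient, pushing the groups to overlap.

```python
import numpy as np

# Illustrative sketch only: a toy FRAGE-style adversarial loop.
# All names, sizes, and learning rates here are assumptions, not the paper's.

rng = np.random.default_rng(0)
vocab, dim, lr = 100, 16, 0.1

emb = rng.normal(size=(vocab, dim))
emb[:50] += 2.0                       # pretend high-frequency words start in a distinct subregion
is_frequent = (np.arange(vocab) < 50).astype(float)

w = np.zeros(dim)                     # discriminator weights
b = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

for step in range(200):
    p = sigmoid(emb @ w + b)          # discriminator's P(high-frequency | embedding)
    err = p - is_frequent             # gradient of binary cross-entropy wrt the logits
    # 1) Discriminator step: minimize classification loss.
    w -= lr * emb.T @ err / vocab
    b -= lr * err.mean()
    # 2) Adversarial step (gradient reversal): embeddings move to *increase*
    #    the discriminator's loss, mixing the two frequency groups together.
    emb += lr * np.outer(err, w)

acc = ((sigmoid(emb @ w + b) > 0.5) == is_frequent.astype(bool)).mean()
print(f"discriminator accuracy after adversarial training: {acc:.2f}")
```

After training, the discriminator's accuracy should fall toward chance, indicating the embedding space no longer encodes frequency as a dominant direction; in the paper this adversarial signal is combined with the task loss (e.g. language modeling or translation) rather than run in isolation as here.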

