Command Palette
Search for a command to run...
Moin Nadeem Anna Bethke Siva Reddy

Abstract
A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs (biases) are known to hurt target groups. Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases. In order to assess the adverse effects of these models, it is important to quantify the bias captured in them. Existing literature on quantifying bias evaluates pretrained language models on a small set of artificially constructed bias-assessing sentences. We present StereoSet, a large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. We evaluate popular models like BERT, GPT-2, RoBERTa, and XLNet on our dataset and show that these models exhibit strong stereotypical biases. We also present a leaderboard with a hidden test set to track the bias of future language models at https://stereoset.mit.edu
Code Repositories
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| bias-detection-on-stereoset-1 | GPT-2 (medium) | ICAT Score: 71.73 |
| bias-detection-on-stereoset-1 | BERT (large) | ICAT Score: 69.89 |
| bias-detection-on-stereoset-1 | GPT-2 (large) | ICAT Score: 70.54 |
| bias-detection-on-stereoset-1 | XLNet (large) | ICAT Score: 72.03 |
| bias-detection-on-stereoset-1 | XLNet (base) | ICAT Score: 62.10 |
| bias-detection-on-stereoset-1 | BERT (base) | ICAT Score: 71.21 |
| bias-detection-on-stereoset-1 | RoBERTa (base) | ICAT Score: 67.50 |
| bias-detection-on-stereoset-1 | GPT-2 (small) | ICAT Score: 72.97 |
Build AI with AI
From idea to launch — accelerate your AI development with free AI co-coding, out-of-the-box environment and best price of GPUs.