Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li

Abstract

Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human performance. However, recent studies reveal that the robustness of these models can be challenged by carefully crafted textual adversarial examples. While several individual datasets have been proposed to evaluate model robustness, a principled and comprehensive benchmark is still missing. In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. In particular, we systematically apply 14 textual adversarial attack methods to GLUE tasks to construct AdvGLUE, which is further validated by humans for reliable annotations. Our findings are summarized as follows. (i) Most existing adversarial attack algorithms are prone to generating invalid or ambiguous adversarial examples: around 90% of them either change the original semantic meaning or mislead human annotators as well. Therefore, we perform a careful filtering process to curate a high-quality benchmark. (ii) All the language models and robust training methods we tested perform poorly on AdvGLUE, with scores lagging far behind their benign accuracy. We hope our work will motivate the development of new adversarial attacks that are more stealthy and semantic-preserving, as well as new robust language models against sophisticated adversarial attacks. AdvGLUE is available at https://adversarialglue.github.io.
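The textual adversarial examples the abstract describes are typically small, meaning-preserving perturbations of a benign input, such as swapping a word for a synonym. The sketch below illustrates the general idea with a toy word-substitution generator; the function name and the synonym table are illustrative assumptions, not the paper's actual 14 attack methods, which additionally check that a target model's prediction flips.

```python
# Toy sketch of word-level adversarial candidate generation (illustrative
# only; AdvGLUE's real attacks are model-guided and human-validated).
SYNONYMS = {
    "good": ["fine", "decent"],
    "movie": ["film", "picture"],
}

def perturb(sentence: str) -> list[str]:
    """Generate candidates by swapping one word at a time for a synonym,
    aiming to preserve the original meaning."""
    words = sentence.split()
    candidates = []
    for i, word in enumerate(words):
        for syn in SYNONYMS.get(word.lower(), []):
            variant = words.copy()
            variant[i] = syn
            candidates.append(" ".join(variant))
    return candidates

candidates = perturb("a good movie")
```

An attack would then keep only the candidates that change the model's prediction; AdvGLUE's human validation step further filters out candidates that accidentally change the label a person would assign.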

Code Repositories

ai-secure/adversarial-glue (PyTorch)

Benchmarks

Benchmark: adversarial-robustness-on-advglue

Methodology                      Accuracy
DeBERTa (single model)           0.6086
ALBERT (single model)            0.5922
T5 (single model)                0.5682
SMART_RoBERTa (single model)     0.5371
FreeLB (single model)            0.5048
RoBERTa (single model)           0.5021
InfoBERT (single model)          0.4603
ELECTRA (single model)           0.4169
BERT (single model)              0.3369
SMART_BERT (single model)        0.3029
