XAI for Transformers: Better Explanations through Conservative Propagation

Ameen Ali, Thomas Schnake, Oliver Eberle, Grégoire Montavon, Klaus-Robert Müller, Lior Wolf

Abstract

Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction. We identify Attention Heads and LayerNorm as the main reasons for such unreliable explanations and propose a more stable way to propagate through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method to Transformers, is shown both theoretically and empirically to overcome the deficiency of a simple gradient-based approach, and achieves state-of-the-art explanation performance on a broad range of Transformer models and datasets.
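
The core idea named in the abstract (handling Attention Heads and LayerNorm differently during propagation) can be sketched as treating the attention weights and the LayerNorm normalization factor as constants during backpropagation, so that gradient × input behaves as a conservative, LRP-style relevance propagation. The PyTorch sketch below is illustrative only: the names `ConservativeLayerNorm` and `conservative_attention` are ours, not the authors' released implementation, and the toy shapes are arbitrary.

```python
import torch
import torch.nn as nn

class ConservativeLayerNorm(nn.Module):
    """LayerNorm whose normalization factor is detached from the graph,
    so no gradient flows through the (non-conservative) scaling."""
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = torch.sqrt(x.var(-1, keepdim=True, unbiased=False) + self.eps)
        # Detaching std keeps the centering linear while treating the
        # normalization as a constant during backpropagation.
        return self.weight * (x - mean) / std.detach() + self.bias

def conservative_attention(q, k, v):
    """Attention where the softmax gating is treated as constant, so
    relevance is propagated only through the value path."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    attn = torch.softmax(scores, dim=-1)
    return attn.detach() @ v  # attention weights act as fixed coefficients

# Gradient x input on the modified forward pass yields relevance scores.
x = torch.randn(1, 8, 16, requires_grad=True)   # (batch, tokens, features)
ln = ConservativeLayerNorm(16)
h = ln(x)
out = conservative_attention(h, h, h).sum()
out.backward()
relevance = (x * x.grad).sum(-1)                # one relevance score per token
```

If this sketch matches the paper's rules, the appeal of the "detach" formulation is that it reuses ordinary autodiff: no custom backward pass is needed, only a modified forward pass through the two problematic layer types.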

Benchmarks

Benchmark:   question-answering-on-newsqa
Methodology: xAI/grok-2-1212
Metrics:     EM: 70.57, F1: 88.24
