HyperAI

The Variational Fair Autoencoder

Christos Louizos; Kevin Swersky; Yujia Li; Max Welling; Richard Zemel

Abstract

We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the "Maximum Mean Discrepancy" (MMD) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.
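The MMD penalty mentioned in the abstract can be sketched as below. This is a generic illustration of the squared Maximum Mean Discrepancy between two samples using an RBF kernel and NumPy; the kernel choice and the bandwidth parameter `gamma` are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and the rows of y.
    sq_dists = (
        np.sum(x ** 2, axis=1)[:, None]
        + np.sum(y ** 2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared MMD between samples x and y:
    # mean within-x kernel + mean within-y kernel - 2 * mean cross kernel.
    k_xx = rbf_kernel(x, x, gamma).mean()
    k_yy = rbf_kernel(y, y, gamma).mean()
    k_xy = rbf_kernel(x, y, gamma).mean()
    return k_xx + k_yy - 2.0 * k_xy
```

In the fair-representation setting, such a term would be evaluated between latent representations of the different sensitive groups, so that minimizing it pushes the two groups' latent distributions together.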

Code Repositories

nctumllab/huang-ching-wei
yevgeni-integrate-ai/vfae

Benchmarks

Benchmark: Sentiment Analysis on Multi-Domain Sentiment
Methodology: VFAE
Metrics:
  Average: 78.36
  Books: 73.40
  DVD: 76.57
  Electronics: 80.53
  Kitchen: 82.93
