Learning Universal Adversarial Perturbations with Generative Models

Jamie Hayes; George Danezis

Abstract

Neural networks are known to be vulnerable to adversarial examples: inputs that have been intentionally perturbed to remain visually similar to the source input, but cause a misclassification. It was recently shown that, given a dataset and classifier, there exist so-called universal adversarial perturbations, a single perturbation that causes a misclassification when applied to any input. In this work, we introduce universal adversarial networks, a generative network that is capable of fooling a target classifier when its generated output is added to a clean sample from a dataset. We show that this technique improves on known universal adversarial attacks.
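To make the setup concrete, the sketch below shows one plausible PyTorch training loop for such a universal adversarial network: a small generator maps a fixed noise vector to a single perturbation, the perturbation is scaled into an L-infinity ball of radius epsilon and added to every clean input, and the generator is trained to maximize the target classifier's loss. All names and hyperparameters here (PerturbationGenerator, train_uan, epsilon, the layer sizes) are illustrative assumptions, not the authors' implementation; see jhayes14/UAN for the official code.

```python
# Minimal sketch of a universal adversarial network, assuming a fixed,
# pretrained `classifier` and a `loader` of (image, label) batches with
# pixel values in [0, 1]. All names and sizes here are illustrative.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Maps a fixed noise vector to a single image-shaped perturbation."""
    def __init__(self, noise_dim=100, image_shape=(3, 32, 32)):
        super().__init__()
        self.image_shape = image_shape
        out_dim = image_shape[0] * image_shape[1] * image_shape[2]
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, *self.image_shape)

def train_uan(classifier, loader, epsilon=0.04, noise_dim=100, epochs=5):
    """Train the generator so x + delta is misclassified for (almost) any x."""
    gen = PerturbationGenerator(noise_dim)
    z = torch.randn(1, noise_dim)                 # fixed noise seed
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()
    classifier.eval()
    for p in classifier.parameters():             # attack only; keep the
        p.requires_grad_(False)                   # classifier frozen
    for _ in range(epochs):
        for x, y in loader:
            delta = epsilon * gen(z)              # scale into the L-inf ball
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
            loss = -ce(classifier(x_adv), y)      # *maximize* classifier loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        return epsilon * gen(z)                   # the universal perturbation
```

The full method in the paper also covers targeted variants; the loop above only captures an untargeted core under the stated assumptions.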

Code Repositories

jhayes14/UAN (official, PyTorch)

Benchmarks

Benchmark                       Methodology   Metrics
graph-classification-on-nci1    DUGNN         Accuracy: 85.50%
