Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference

Yi Tay; Luu Anh Tuan; Siu Cheung Hui

Abstract

This paper presents a new deep learning architecture for Natural Language Inference (NLI). First, we introduce a new architecture in which alignment pairs are compared, compressed, and then propagated to upper layers for enhanced representation learning. Second, we adopt factorization layers for efficient and expressive compression of alignment vectors into scalar features, which are then used to augment the base word representations. Our approach is designed to be conceptually simple, compact, and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving competitive performance on all of them. A lightweight parameterization of our model also enjoys an $\approx 3\times$ reduction in parameter size compared to existing state-of-the-art models such as ESIM and DIIN, while maintaining competitive performance. Additionally, visual analysis shows that our propagated features are highly interpretable.
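
To make the compare-compress-propagate idea concrete, below is a minimal PyTorch sketch of how a factorization machine can compress a comparison vector (the concatenation, subtraction, or element-wise product of a word representation and its soft-aligned counterpart) into a single scalar that is then concatenated back onto the base word representation. This is an illustration based only on the abstract, not the authors' released code; names such as FactorizationCompression and compare_compress_propagate, and the latent size k, are assumptions.

```python
import torch
import torch.nn as nn

class FactorizationCompression(nn.Module):
    """Factorization machine that compresses a feature vector into one scalar:
    y = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j.
    (Illustrative sketch; hyperparameters are assumptions.)"""
    def __init__(self, in_dim: int, k: int = 5):
        super().__init__()
        self.linear = nn.Linear(in_dim, 1)                    # bias w0 and linear weights w
        self.v = nn.Parameter(0.01 * torch.randn(in_dim, k))  # latent factors for the pairwise term

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (..., in_dim)
        linear_term = self.linear(x).squeeze(-1)
        # Pairwise interactions computed in O(k * d): 0.5 * sum_f [(x v)_f^2 - (x^2 v^2)_f]
        pairwise = 0.5 * ((x @ self.v) ** 2 - (x ** 2) @ (self.v ** 2)).sum(-1)
        return linear_term + pairwise                          # one scalar per position


def compare_compress_propagate(word_repr, aligned_repr, fm_cat, fm_sub, fm_mul):
    """Compare each word with its aligned representation (concatenation,
    subtraction, element-wise product), compress each comparison vector to a
    scalar with a factorization layer, and propagate the scalars by
    concatenating them onto the base word representation."""
    f_cat = fm_cat(torch.cat([word_repr, aligned_repr], dim=-1))
    f_sub = fm_sub(word_repr - aligned_repr)
    f_mul = fm_mul(word_repr * aligned_repr)
    scalars = torch.stack([f_cat, f_sub, f_mul], dim=-1)       # (batch, seq, 3)
    return torch.cat([word_repr, scalars], dim=-1)             # augmented word features


# Example: a batch of 2 sentences with 10 tokens and 300-dim representations
d = 300
words = torch.randn(2, 10, d)       # base word representations
aligned = torch.randn(2, 10, d)     # soft-aligned counterparts from an attention step
fm_c, fm_s, fm_m = (FactorizationCompression(2 * d),
                    FactorizationCompression(d),
                    FactorizationCompression(d))
out = compare_compress_propagate(words, aligned, fm_c, fm_s, fm_m)
print(out.shape)                    # torch.Size([2, 10, 303])
```

The sketch covers a single alignment; the propagation step described in the abstract corresponds to feeding these augmented word representations into the upper encoding layers of the network.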

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| natural-language-inference-on-scitail | CAFE | Accuracy: 83.3 |
| natural-language-inference-on-snli | 300D CAFE (no cross-sentence attention) | % Test Accuracy: 85.9; % Train Accuracy: 87.3; Parameters: 3.7m |
| natural-language-inference-on-snli | 300D CAFE | % Test Accuracy: 88.5; % Train Accuracy: 89.8; Parameters: 4.7m |
| natural-language-inference-on-snli | 300D CAFE Ensemble | % Test Accuracy: 89.3; % Train Accuracy: 92.5; Parameters: 17.5m |
