The Unreasonable Effectiveness of the Baseline: Discussing SVMs in Legal Text Classification

Benjamin Clavié, Marc Alphonsus

Abstract

We aim to highlight an interesting trend in order to contribute to the ongoing debate around advances in legal Natural Language Processing. Recently, the focus for most legal text classification tasks has shifted towards large pre-trained deep learning models such as BERT. In this paper, we show that a more traditional approach based on Support Vector Machine classifiers reaches surprisingly competitive performance with BERT-based models on the classification tasks in the LexGLUE benchmark. We also highlight that the error reduction obtained by using specialised BERT-based models over baselines is noticeably smaller in the legal domain than on general language tasks. We present and discuss three hypotheses as potential explanations for these results to support future discussions.
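
To make the kind of baseline discussed here concrete, below is a minimal sketch of a TF-IDF + linear SVM text classifier using scikit-learn. The dataset, feature settings, and hyperparameters are illustrative assumptions, not the optimised configuration used in the paper.

```python
# Minimal sketch of a TF-IDF + linear-SVM text classifier, in the spirit of the
# SVM baselines discussed in the paper. The toy data and the exact feature /
# hyperparameter choices are illustrative assumptions, not the authors' setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Placeholder documents and labels standing in for a LexGLUE-style
# single-label task (e.g. SCOTUS or LEDGAR).
train_texts = ["first legal document ...", "second legal document ..."]
train_labels = [0, 1]
test_texts = ["held that the contract ..."]

clf = Pipeline([
    # Word n-gram TF-IDF features; a tuned baseline would typically use a much
    # larger vocabulary and validated n-gram ranges.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1, sublinear_tf=True)),
    # Linear SVM; the regularisation strength C would normally be tuned on a
    # held-out validation split.
    ("svm", LinearSVC(C=1.0)),
])

clf.fit(train_texts, train_labels)
print(clf.predict(test_texts))
```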

Benchmarks

Benchmark: Natural Language Understanding on LexGLUE
Method: Optimised SVM Baseline
Results (μ-F1 / m-F1):
ECtHR Task A: 66.3 / 55.0
ECtHR Task B: 76.0 / 65.4
EUR-LEX: 65.7 / 49.0
LEDGAR: 88.0 / 82.6
SCOTUS: 74.4 / 64.5
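
The abstract's comparison rests on relative error reduction: how much of a baseline's remaining error a stronger model removes. A small sketch of that calculation is below; the "stronger model" score in the example is a hypothetical placeholder, not a result reported in the paper.

```python
# Sketch of the relative error-reduction comparison mentioned in the abstract:
# the fraction of the baseline's residual error (100 - F1) that a stronger
# model eliminates. Example numbers are hypothetical.
def relative_error_reduction(baseline_f1: float, model_f1: float) -> float:
    """Share of the baseline's remaining error removed by the stronger model."""
    return (model_f1 - baseline_f1) / (100.0 - baseline_f1)

# E.g. moving from a hypothetical 88.0 micro-F1 baseline to a hypothetical 89.0
# removes only about 8% of the remaining errors.
print(f"{relative_error_reduction(88.0, 89.0):.2%}")
```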
