LayerCake: Token-Aware Contrastive Decoding within Large Language Model Layers

Jingze Zhu Yongliang Wu Wenbo Zhu Jiawang Cao Yanqiang Zheng Jiawei Chen Xu Yang Bernt Schiele Jonas Fischer Xinting Hu
Abstract

Large language models (LLMs) excel at natural language understanding and generation but remain vulnerable to factual errors, limiting their reliability in knowledge-intensive tasks. While decoding-time strategies offer a promising and efficient solution without additional training, existing methods typically treat token-level and layer-level signals in isolation, overlooking the joint dynamics between them. In this work, we introduce a token-aware, layer-localized contrastive decoding method that aligns specific token types with their most influential transformer layers to improve factual generation. Through empirical attention analysis, we identify two key patterns: punctuation tokens receive dominant attention in early layers, while conceptual tokens govern semantic reasoning in intermediate layers. By selectively suppressing attention to these token types at their respective depths, we induce controlled factual degradation and derive contrastive signals that guide the final factual decoding. Our method requires no additional training or model modification, and experiments show that it consistently improves factuality across multiple LLMs and benchmarks.
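
The contrastive step described above can be illustrated with a minimal sketch. The snippet below assumes two next-token logit vectors are already available: `expert_logits` from the unmodified model and `degraded_logits` from a forward pass in which attention to punctuation tokens (early layers) or conceptual tokens (intermediate layers) has been suppressed. The function name, the plausibility threshold `alpha`, and the simple log-ratio scoring are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a contrastive decoding step, assuming logits from two passes
# are already computed (not the authors' implementation).
import torch
import torch.nn.functional as F

def contrastive_next_token(expert_logits: torch.Tensor,
                           degraded_logits: torch.Tensor,
                           alpha: float = 0.1) -> torch.Tensor:
    """Select the next token by contrasting the expert and degraded passes."""
    expert_logp = F.log_softmax(expert_logits, dim=-1)
    degraded_logp = F.log_softmax(degraded_logits, dim=-1)

    # Adaptive plausibility: keep only tokens with p_expert >= alpha * max p_expert,
    # so the contrast cannot promote tokens the expert itself considers implausible.
    cutoff = torch.log(torch.tensor(alpha)) + expert_logp.max(dim=-1, keepdim=True).values
    plausible = expert_logp >= cutoff

    # Contrastive score: boost tokens that the factually degraded pass demotes.
    scores = (expert_logp - degraded_logp).masked_fill(~plausible, float("-inf"))
    return scores.argmax(dim=-1)

# Toy usage with random logits over a vocabulary of size 8:
expert = torch.randn(1, 8)
degraded = torch.randn(1, 8)
print(contrastive_next_token(expert, degraded))
```

In this form, tokens whose probability drops most when the factual signal is suppressed score highest, steering decoding toward completions that depend on that signal.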
