RaSeRec: Retrieval-Augmented Sequential Recommendation
Xinping Zhao; Baotian Hu; Yan Zhong; Shouzheng Huang; Zihao Zheng; Meng Wang; Haofen Wang; Min Zhang

Abstract
Although prevailing supervised and self-supervised learning augmented sequential recommendation (SeRec) models have achieved improved performance with powerful neural network architectures, we argue that they still suffer from two limitations: (1) Preference Drift, where models trained on past data can hardly accommodate evolving user preferences; and (2) Implicit Memory, where head patterns dominate parametric learning, making long-tail patterns hard to recall. In this work, we explore retrieval augmentation in SeRec to address these limitations. Specifically, we propose a Retrieval-Augmented Sequential Recommendation framework, named RaSeRec, whose main idea is to maintain a dynamic memory bank to accommodate preference drift and to retrieve relevant memories that explicitly augment user modeling. It consists of two stages: (i) collaborative-based pre-training, which learns to recommend and retrieve, and (ii) retrieval-augmented fine-tuning, which learns to leverage the retrieved memories. Extensive experiments on three datasets demonstrate the superiority and effectiveness of RaSeRec. The implementation code is available at https://github.com/HITsz-TMG/RaSeRec.
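To make the memory-bank idea concrete, below is a minimal PyTorch-style sketch of retrieval-augmented user modeling. The class name `MemoryAugmentedRecommender`, the `write` method, and the similarity-weighted fusion are illustrative assumptions, not RaSeRec's actual API: a bank of past sequence embeddings (keys) and their next-item embeddings (values) is queried with the current user representation, and the top-k retrieved memories are fused back in explicitly.

```python
import torch
import torch.nn.functional as F


class MemoryAugmentedRecommender(torch.nn.Module):
    """Illustrative sketch of retrieval-augmented user modeling.

    A memory bank stores (sequence embedding, next-item embedding)
    pairs from past interactions; the current user representation
    retrieves its top-k nearest memories and fuses them in. Names
    and fusion scheme are assumptions, not the paper's exact method.
    """

    def __init__(self, dim: int, top_k: int = 5):
        super().__init__()
        self.top_k = top_k
        # Memory bank: unit-normalized sequence embeddings as keys,
        # embeddings of the items that followed them as values.
        self.register_buffer("mem_keys", torch.empty(0, dim))
        self.register_buffer("mem_values", torch.empty(0, dim))

    @torch.no_grad()
    def write(self, keys: torch.Tensor, values: torch.Tensor) -> None:
        # Append new memories (e.g. periodically during serving) so the
        # bank can track preference drift without retraining the encoder.
        self.mem_keys = torch.cat([self.mem_keys, F.normalize(keys, dim=-1)])
        self.mem_values = torch.cat([self.mem_values, values])

    def forward(self, user_repr: torch.Tensor) -> torch.Tensor:
        # Retrieve top-k memories by cosine similarity to the current user.
        # Assumes the bank already holds at least top_k entries.
        q = F.normalize(user_repr, dim=-1)             # (B, D)
        sims = q @ self.mem_keys.T                     # (B, M)
        scores, idx = sims.topk(self.top_k, dim=-1)    # (B, K)
        retrieved = self.mem_values[idx]               # (B, K, D)
        # Fuse retrieved memories via a similarity-weighted average,
        # then residually combine with the parametric representation.
        weights = scores.softmax(dim=-1).unsqueeze(-1)  # (B, K, 1)
        memory_repr = (weights * retrieved).sum(dim=1)  # (B, D)
        return user_repr + memory_repr
```

In this sketch, the explicit retrieval path is what distinguishes the approach from purely parametric SeRec models: tail patterns live in the bank verbatim rather than having to be memorized by the network weights.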
Code Repositories
https://github.com/HITsz-TMG/RaSeRec
Benchmarks
| Benchmark | Methodology | HR@5 | HR@10 | NDCG@5 | NDCG@10 |
|---|---|---|---|---|---|
| sequential-recommendation-on-amazon-beauty | RaSeRec | 0.0569 | 0.0860 | 0.0369 | 0.0463 |
| sequential-recommendation-on-amazon-sports | RaSeRec | 0.0331 | 0.0497 | 0.0211 | 0.0264 |
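For reference, HR@k and NDCG@k can be computed from per-user ranks of the ground-truth next item in a few lines. This assumes the common single-target next-item protocol; the exact evaluation setup behind the leaderboard numbers above may differ.

```python
import math


def hr_and_ndcg_at_k(ranks: list[int], k: int) -> tuple[float, float]:
    """Compute HR@k and NDCG@k from 1-based ranks of each user's
    ground-truth next item under a single-target protocol."""
    # HR@k: fraction of users whose target item appears in the top k.
    hr = sum(r <= k for r in ranks) / len(ranks)
    # NDCG@k: with one relevant item, the gain is 1 / log2(rank + 1)
    # when the item is in the top k, and 0 otherwise.
    ndcg = sum(1.0 / math.log2(r + 1) for r in ranks if r <= k) / len(ranks)
    return hr, ndcg


# Example: 4 users whose targets were ranked 1st, 3rd, 12th, and 7th.
print(hr_and_ndcg_at_k([1, 3, 12, 7], k=10))  # HR@10 = 0.75, NDCG@10 ≈ 0.458
```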