A Unified Approach to Job Search Query Understanding, Efficient Inference for Generative Recommenders, and More!
Vol. 122 for Sep 15 - Sep 21, 2025
Stay Ahead of the Curve with the Latest Advancements and Discoveries in Information Retrieval.
This week’s newsletter highlights the following research:
Scalable Cross-Entropy Loss with Negative Sampling for Industrial Recommendation Systems, from Zhelnin et al.
Unified LLM Architecture for Large-Scale Job Search Query Understanding, from LinkedIn
What News Recommendation Research Doesn't Teach About Building Real Systems, from Higley et al.
A Systematic Evaluation of Large Language Models for Cross-Lingual Information Retrieval, from LMU Munich
Interactive Two-Tower Architecture for Real-Time Candidate Filtering in Recommender Systems, from Ant Group
Efficient Inference for Generative LLM Recommenders via Hidden State Matching, from Wang et al.
A Framework for Training Embedding Models from Scratch, from Tencent
A Comprehensive Review of Large Language Models in Document Intelligence, from Ke et al.
A Modular Analysis of LLM-Based Feature Extraction for Sequential Recommendation, from Shi et al.
Systematic Data Augmentation for Enhanced Generative Recommendation, from Lee et al.