Top Information Retrieval Papers of the Week

Why Embedding Models Cannot Scale to All Retrieval Tasks, A Comprehensive Analysis of LLM-based Reranking Methods, and More!

Vol.119 for Aug 25 - Aug 31, 2025

Sumit
Aug 29, 2025

Stay Ahead of the Curve with the Latest Advancements and Discoveries in Information Retrieval.

This week’s newsletter highlights the following research:

  1. Theoretical Limits of Single-Vector Embedding Models in Information Retrieval, from Google DeepMind

  2. Investigating Why Randomly Truncating Text Embeddings Barely Hurts Performance, from Takeshita et al.

  3. Vector Quantization Attention for Ultra-Long User Behavior Modeling in Recommender Systems, from Kuaishou

  4. Conditional Two-Tower Models for Bootstrapping User-to-Item Retrieval Systems, from Pinterest

  5. Computational Scaling Laws for Zero-Shot Information Retrieval with Decoder Models, from Databricks

  6. A Comprehensive Analysis of LLM-based Reranking Methods, from the University of Innsbruck

  7. Lazy Decoder-Only Architecture for Industrial-Scale Generative Recommendation, from Kuaishou

  8. Dynamic Multi-Task Learning for Scalable Recommendation Systems, from Kuaishou

  9. Enabling Compact Language Models for Agentic RAG Through Distillation-Guided Reinforcement Learning, from Kotoge et al.

  10. Combining ID and Content Embeddings Without Architectural Complexity, from Albatross AI

© 2025 Sumit Kumar