Top Information Retrieval Papers of the Week

Why Embedding Models Cannot Scale to All Retrieval Tasks, A Comprehensive Analysis of LLM-based Reranking Methods, and More!

Vol. 119 for Aug 25 - Aug 31, 2025

Sumit
Aug 29, 2025

Stay Ahead of the Curve with the Latest Advancements and Discoveries in Information Retrieval.

This week’s newsletter highlights the following research:

  1. Theoretical Limits of Single-Vector Embedding Models in Information Retrieval, from Google DeepMind

  2. Investigating Why Randomly Truncating Text Embeddings Barely Hurts Performance, from Takeshita et al.

  3. Vector Quantization Attention for Ultra-Long User Behavior Modeling in Recommender Systems, from Kuaishou

  4. Conditional Two-Tower Models for Bootstrapping User-to-Item Retrieval Systems, from Pinterest

  5. Computational Scaling Laws for Zero-Shot Information Retrieval with Decoder Models, from Databricks

  6. A Comprehensive Analysis of LLM-based Reranking Methods, from the University of Innsbruck

  7. Lazy Decoder-Only Architecture for Industrial-Scale Generative Recommendation, from Kuaishou

  8. Dynamic Multi-Task Learning for Scalable Recommendation Systems, from Kuaishou

  9. Enabling Compact Language Models for Agentic RAG Through Distillation-Guided Reinforcement Learning, from Kotoge et al.

  10. Combining ID and Content Embeddings Without Architectural Complexity, from Albatross AI
