Measuring Retrieval Robustness in LLMs, Training Small LLM Agents for End-to-End RAG, and More!
Vol. 106 for May 26 – Jun 01, 2025
Stay Ahead of the Curve with the Latest Advancements and Discoveries in Information Retrieval.
This week’s newsletter highlights the following research:
Distilling Agentic Behavior into Compact Language Models, from Kang et al.
Training Small LLM Agents for End-to-End Retrieval-Augmented Generation, from Viettel Group
Self-Evolving Search Agents for Complex Question Answering, from Alibaba
Measuring and Understanding Retrieval Robustness in LLMs, from Bloomberg
Unifying Online Ad Ranking with Single-Model Architecture, from Meituan
Optimizing Query Decomposition for Multi-Vector Retrieval via LLM-Based Prompt Engineering, from Liu et al.
A Unified Framework for Hard Negative Mining in Enterprise Knowledge Retrieval, from Oracle AI
A Theoretical Analysis of Locality and Entropy in Neural Ranking, from the University of Glasgow
A Multi-Dataset Analysis of Chunk Size Effects in Dense Retrieval Systems, from Fraunhofer IAIS
A Systematic Framework for Understanding LLM-Based Recommendation Approaches, from NTU