Reducing Memory Footprint in Vector Search, A Systematic Evaluation of 35 Adaptive Retrieval Methods, and More!
Vol. 88 for Jan 20 - Jan 26, 2025
Stay Ahead of the Curve with the Latest Advancements and Discoveries in Information Retrieval.
This week’s newsletter highlights the following research:
A Lossless Compression Framework for Vector Database Metadata, from Meta
Evaluating the Reproducibility and Effectiveness of Intent-Aware Recommender Systems, from Shehzad et al.
Comparing Simple and Complex Approaches to Adaptive Retrieval in Large Language Models, from Moskvoretskii et al.
An LLM-based Agent for Autonomous Academic Literature Discovery, from ByteDance
Compact, Adaptive, and Fast Embedding Compression for Large-Scale DLRMs, from Peking University
A Monte Carlo Tree Search Framework for Retrieval Augmented Generation, from Alibaba
4-bit Quantization for Memory-Efficient Vector Search in RAG, from Jeong et al.
A Comparative Study of LLMs and Traditional Methods in E-commerce Query Auto-completion, from Amazon
Enhancing Retrieval-Augmented Generation with Query-Aware Knowledge Graph Integration, from USTC
LLM-Powered Agents for Efficient Exploration of Large-Scale Enterprise Knowledge Graphs, from Baidu