Researchers have introduced Recycling Search Experience (RSE), a novel method to improve the efficiency of test-time scaling for large language models. RSE transforms test-time search from isolated trials into a cumulative process by distilling raw trajectories into an experience bank. This allows for the positive recycling of intermediate conclusions and the negative recycling of failure patterns, thereby reducing redundant derivations and pruning dead ends. Experiments on benchmarks like HMMT24 and IMO-Bench demonstrate that RSE significantly outperforms existing baselines under similar computational budgets.
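The experience-bank idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: all class and function names here (`ExperienceBank`, `search`, `solve`) are hypothetical, and it assumes subproblems can be keyed directly, which glosses over how real trajectories would be distilled and matched.

```python
# Hypothetical sketch of an "experience bank" in the spirit of RSE.
# Positive recycling: cache verified intermediate conclusions for reuse.
# Negative recycling: record failure patterns so later trials prune them.

class ExperienceBank:
    def __init__(self):
        self.conclusions = {}   # subproblem -> verified intermediate result
        self.dead_ends = set()  # known failure patterns

    def record_success(self, subproblem, result):
        self.conclusions[subproblem] = result

    def record_failure(self, subproblem):
        self.dead_ends.add(subproblem)

    def lookup(self, subproblem):
        return self.conclusions.get(subproblem)

    def should_prune(self, subproblem):
        return subproblem in self.dead_ends


def search(subproblems, solve, bank):
    """Run trials, reusing cached conclusions and skipping known dead ends."""
    results = {}
    for sp in subproblems:
        if bank.should_prune(sp):
            continue                      # negative recycling: skip known failure
        cached = bank.lookup(sp)
        if cached is not None:
            results[sp] = cached          # positive recycling: reuse conclusion
            continue
        outcome = solve(sp)               # expensive derivation (e.g., LLM call)
        if outcome is None:
            bank.record_failure(sp)
        else:
            bank.record_success(sp, outcome)
            results[sp] = outcome
    return results
```

Across repeated search passes, the bank makes the second pass cheaper than the first: solved subproblems return cached conclusions and known dead ends are skipped, so the expensive solver is never re-invoked, which is the cumulative behavior the summary attributes to RSE.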
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Introduces a method to reduce computational redundancy in LLM inference, potentially lowering costs and increasing accessibility for complex reasoning tasks.
RANK_REASON This is a research paper detailing a new method for improving LLM efficiency.