Researchers have introduced CoREB, a new benchmark and model designed to improve code search beyond simple retrieval. CoREB addresses limitations of existing benchmarks, such as data contamination and noisy labels, by evaluating a full code search pipeline that includes reranking and developer-style queries. Experiments with various embedding models and rerankers showed that while code-specialized embeddings excel at code-to-code retrieval, no single model performed best across all tasks, and short keyword queries significantly degraded performance. The proposed CoREB-Reranker delivered consistent gains across all evaluated tasks, and the benchmark data and model have been released.
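The retrieve-then-rerank pipeline the benchmark evaluates can be sketched as below. This is a minimal toy illustration only: the bag-of-words embedding and token-overlap reranker are hypothetical stand-ins for the real embedding models and the CoREB-Reranker, which are not described in detail here.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase alphanumeric tokens (toy stand-in for a real tokenizer)."""
    return re.findall(r"[a-z0-9]+", text.lower())

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a code embedding model)."""
    return Counter(tokenize(text))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Stage 1: embedding-based retrieval of the top-k candidate snippets."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rerank(query, candidates):
    """Stage 2: reranker rescores the shortlist (here: plain token overlap)."""
    q_tokens = set(tokenize(query))
    return sorted(candidates,
                  key=lambda d: len(q_tokens & set(tokenize(d))),
                  reverse=True)

corpus = [
    "def binary_search(arr, target): ...",
    "def quicksort(arr): ...",
    "def search_list(items, target): ...",
]
candidates = retrieve("binary search target", corpus, k=2)
best = rerank("binary search target", candidates)[0]
```

In a real pipeline the fast embedding stage narrows millions of snippets to a shortlist, and the slower, more accurate reranker reorders only that shortlist, which is the stage the paper's findings target.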
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Enhances code search capabilities by providing a more comprehensive benchmark and a specialized reranking model.
RANK_REASON The cluster describes a new academic paper introducing a benchmark and model for code search.