PulseAugur

New method searches data recipes for optimal AI model fine-tuning

Researchers have developed a new method for supervised fine-tuning (SFT) data selection that moves beyond simple instance ranking to a "data recipe search" approach. The technique uses a library of operators, such as filtering and deduplication, to construct high-quality training subsets within a limited budget of full SFT evaluations. Their system, AutoSelection, decouples data materialization from expensive evaluations and achieves superior reasoning performance across multiple base models compared with existing methods.
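To make the idea concrete, here is a minimal sketch of what a "data recipe search" loop could look like. The operator names (`dedup`, `filter_short`, `mix_top`), the placeholder quality scores, and the random search strategy are illustrative assumptions, not the paper's actual API; the point is the structure: cheap recipe materialization is decoupled from a small budget of expensive full evaluations.

```python
# Hypothetical sketch of recipe search for SFT data selection.
# Operators, scores, and the search loop are assumptions for illustration.
import random

def dedup(pool):
    """Drop examples with exact-duplicate prompts."""
    seen, out = set(), []
    for ex in pool:
        if ex["prompt"] not in seen:
            seen.add(ex["prompt"])
            out.append(ex)
    return out

def filter_short(pool, min_len=20):
    """Keep examples whose response is reasonably long."""
    return [ex for ex in pool if len(ex["response"]) >= min_len]

def mix_top(pool, k=2):
    """Retain the k examples with the highest (placeholder) quality score."""
    return sorted(pool, key=lambda ex: ex["score"], reverse=True)[:k]

OPERATORS = [dedup, filter_short, mix_top]

def materialize(recipe, pool):
    """Cheap step: apply the ordered operators to build a candidate subset."""
    for op in recipe:
        pool = op(pool)
    return pool

def evaluate(subset):
    """Stand-in for an expensive full SFT run plus benchmark evaluation."""
    return sum(ex["score"] for ex in subset) / max(len(subset), 1)

def recipe_search(pool, budget=3, seed=0):
    """Search ordered recipes; only `budget` candidates get a 'full' evaluation."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        recipe = rng.sample(OPERATORS, k=rng.randint(1, len(OPERATORS)))
        subset = materialize(recipe, list(pool))  # cheap, decoupled
        score = evaluate(subset)                  # expensive, budgeted
        if score > best_score:
            best, best_score = recipe, score
    return [op.__name__ for op in best], best_score
```

Under this framing, the search space is ordered compositions of operators rather than per-example scores, which is what distinguishes recipe search from plain instance ranking.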

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Introduces a more efficient method for curating training data, potentially improving model performance with fewer resources.

RANK_REASON The cluster contains an academic paper detailing a new method for AI model fine-tuning.

Read on arXiv cs.CL →

COVERAGE [1]

  1. arXiv cs.CL TIER_1 · Yongqi Zhang

    From Instance Selection to Fixed-Pool Data Recipe Search for Supervised Fine-Tuning

    Supervised fine-tuning (SFT) data selection is commonly formulated as instance ranking: score each example and retain a top-$k$ subset. However, effective SFT training subsets are often produced through ordered curation recipes, where filtering, mixing, and deduplication operator…
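The instance-ranking baseline the abstract contrasts against can be stated in a few lines: score each example independently and retain a top-$k$ subset. The scoring function below (response length) is a placeholder assumption, not a method from the paper.

```python
# Minimal sketch of top-k instance selection, the baseline formulation:
# score each example independently, keep the k highest. The score function
# is an illustrative placeholder.
def top_k_selection(pool, score_fn, k):
    """Rank examples by score_fn and retain the k highest-scoring ones."""
    return sorted(pool, key=score_fn, reverse=True)[:k]

# Example: select the 2 longest responses from a toy pool.
pool = [{"response": "a" * n} for n in (5, 50, 20)]
subset = top_k_selection(pool, lambda ex: len(ex["response"]), k=2)
```

Because each example is scored in isolation, this formulation cannot express set-level operations such as deduplication or mixing, which is the gap the recipe-search formulation addresses.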