PulseAugur

New theory explains why Zeroth-Order adaptation reduces model forgetting

Researchers have developed a new theoretical framework, Randomized Shaping Theory, to explain why Zeroth-Order (ZO) adaptation methods in continual learning may lead to less forgetting than first-order (FO) methods. The theory suggests that ZO adaptation preserves more previously acquired knowledge by selectively contracting the anisotropic components of each update. This insight motivates a new algorithm, RISE, which applies calibrated ZO shaping to exact FO gradients within parameter blocks to improve the stability-plasticity tradeoff in continual learning.
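The paper's algorithmic details are not in this summary, so the following Python sketch is only a hedged illustration of the idea as described above: take the exact FO gradient of each parameter block, then contract its dominant (anisotropic) component by a calibration factor. The function name `rise_step_sketch`, the probe-based direction estimate, and all parameter values are assumptions for illustration, not the paper's method.

```python
import numpy as np

def rise_step_sketch(params, grads, lr=0.1, contract=0.5, n_probe=8, rng=None):
    """Hypothetical sketch of a RISE-style update, NOT the paper's code.

    Per parameter block: take the exact first-order gradient, pick a
    dominant direction via random probes, and contract the gradient
    component along that direction while leaving the rest untouched.
    """
    rng = np.random.default_rng() if rng is None else rng
    new_params = {}
    for name, w in params.items():
        g = grads[name].ravel()
        # Probe random unit directions; the probe most aligned with g
        # stands in for its anisotropic (forgetting-prone) component.
        probes = rng.standard_normal((n_probe, g.size))
        probes /= np.linalg.norm(probes, axis=1, keepdims=True)
        u = probes[np.argmax(np.abs(probes @ g))]
        # Shrink the aligned component; keep the remainder intact.
        g_par = (g @ u) * u
        shaped = contract * g_par + (g - g_par)
        new_params[name] = w - lr * shaped.reshape(w.shape)
    return new_params

# Toy usage with a single 2x2 parameter block.
params = {"layer0": np.ones((2, 2))}
grads = {"layer0": np.array([[0.3, -0.1], [0.2, 0.4]])}
print(rise_step_sketch(params, grads)["layer0"])
```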

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Introduces a theoretical explanation for improved continual learning, potentially leading to more robust AI systems that retain knowledge over time.

RANK_REASON The cluster contains a new academic paper detailing a theoretical framework and a proposed algorithm for continual learning.

Read on arXiv cs.LG →

COVERAGE [1]

  1. arXiv cs.LG TIER_1 · Zhongxiang Dai

    Why Zeroth-Order Adaptation May Forget Less: A Randomized Shaping Theory

    Continual learning requires new-task adaptation without damaging previously acquired capabilities. Recent forward-pass and zeroth-order (ZO) results show that low-query adaptation may retain better than first-order (FO) descent, but the usual view of ZO as noisy FO estimation doe…
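For readers unfamiliar with the "noisy FO estimation" view the abstract refers to, here is a minimal sketch of the standard two-point zeroth-order gradient estimator behind that view: it approximates a gradient from function evaluations alone, with no backpropagation. The helper name and sample counts are illustrative, not from the paper.

```python
import numpy as np

def zo_grad_estimate(f, x, mu=1e-3, n_samples=16, rng=None):
    """Standard two-point zeroth-order gradient estimator:
    average of (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
    Gaussian directions u. Uses only function evaluations.
    """
    rng = np.random.default_rng() if rng is None else rng
    g_hat = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g_hat += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g_hat / n_samples

# Example: f(x) = ||x||^2 / 2 has true gradient x; the estimate
# converges toward x as n_samples grows.
x = np.array([1.0, -2.0, 0.5])
print(zo_grad_estimate(lambda v: 0.5 * v @ v, x, n_samples=200))
```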