A new PyTorch optimizer named Rose has been released under the Apache 2.0 license. Developed by Matthew K., Rose is designed to be stateless, offering significantly lower VRAM usage than optimizers like AdamW, with memory overhead comparable to plain SGD. Early benchmarks suggest it achieves fast convergence and strong generalization, outperforming AdamW on certain tasks and posting competitive results on OpenAI's parameter-golf challenge.
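The announcement does not spell out Rose's update rule, so the following is only a generic sketch of why "stateless" translates into lower VRAM: AdamW keeps two extra buffers (first and second moment) per parameter, roughly doubling optimizer memory on top of the weights, while a stateless update like plain SGD keeps none. Both steps below are illustrative pure-Python implementations, not Rose's algorithm.

```python
def sgd_step(params, grads, lr=0.01):
    # Stateless: the update depends only on the current gradient,
    # so no per-parameter buffers persist between steps.
    return [p - lr * g for p, g in zip(params, grads)]

def adamw_step(params, grads, state, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.01):
    # Stateful: m and v each match the parameter count, so optimizer
    # memory is ~2x the model size in addition to the weights.
    m, v, t = state
    t += 1
    new_params, new_m, new_v = [], [], []
    for p, g, mi, vi in zip(params, grads, m, v):
        mi = betas[0] * mi + (1 - betas[0]) * g          # first moment
        vi = betas[1] * vi + (1 - betas[1]) * g * g      # second moment
        m_hat = mi / (1 - betas[0] ** t)                 # bias correction
        v_hat = vi / (1 - betas[1] ** t)
        p = p - lr * (m_hat / (v_hat ** 0.5 + eps) + weight_decay * p)
        new_params.append(p)
        new_m.append(mi)
        new_v.append(vi)
    return new_params, (new_m, new_v, t)
```

A stateless optimizer's only memory cost beyond the weights is the gradient itself, which is why the overhead is described as comparable to SGD.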
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Offers a low-VRAM alternative for model training, potentially enabling larger models on consumer hardware.
RANK_REASON Release of a new open-source optimizer with benchmark results and code.