Developers are reporting a significant performance decline in Anthropic's Claude Opus 4.7, particularly on coding tasks, with many switching back to the previous version, Opus 4.6. Users cite the model arguing with corrections, getting stuck in reasoning loops, and a sharp drop in long-context retrieval, despite Anthropic's published benchmarks showing improvements. The regressions are attributed to a new tokenizer that raises token costs for English text, a breaking change to the "budget_tokens" parameter, and likely model quantization to absorb increased demand.
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Potential decrease in developer trust in and adoption of Claude Opus 4.7 due to reported performance regressions, disrupting established coding workflows.
RANK_REASON This cluster discusses user-reported regressions in a recently released model, contrasts them with official benchmarks, and speculates on the causes.
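A defensive pattern for the "budget_tokens" breaking change mentioned in the summary: pin an exact model version and validate the thinking budget client-side before sending a request, so a change in how the parameter is interpreted fails loudly instead of degrading output silently. This is a minimal sketch, not the reported fix; the model IDs are hypothetical, while the `thinking` block shape follows Anthropic's documented extended-thinking request format.

```python
def build_request(prompt: str, model: str = "claude-opus-4-6",
                  budget_tokens: int = 4096, max_tokens: int = 8192) -> dict:
    """Assemble a Messages API payload with an explicit thinking budget.

    Model IDs here are illustrative placeholders, not confirmed names.
    """
    if budget_tokens >= max_tokens:
        # Anthropic's extended thinking requires budget_tokens < max_tokens;
        # fail fast locally rather than letting the API reject or silently
        # reinterpret the request after a parameter-semantics change.
        raise ValueError("budget_tokens must be less than max_tokens")
    return {
        "model": model,  # pin an exact version, never a floating alias
        "max_tokens": max_tokens,
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Refactor this function to be iterative.")
```

Pinning the version also makes A/B comparisons between 4.6 and 4.7 reproducible, which is what the developers quoted here are effectively doing by hand.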