MiniMax M2.5
PulseAugur coverage of MiniMax M2.5 — every cluster mentioning MiniMax M2.5 across labs, papers, and developer communities, ranked by signal.
-
LLM benchmarking issues fixed by adjusting 'thinking mode' parameters
A developer ran into problems while benchmarking three large language models (Kimi K2.5, MiniMax M2.5, and Gemma 4) and initially deemed them broken due to low scores or errors. The root cause was identified as a default "think…
-
Low-cost AI model beats top performers on coding benchmark with new context engine
A new method called Xanther Context Engine (XCE) has enabled the MiniMax M2.5 model to achieve a 78.2% score on the SWE-bench Verified benchmark, outperforming all other models. This achievement is notable because MiniM…
-
AI models: Choose benchmarks over hype for true performance
A recent analysis highlights that tech companies often select AI models based on hype rather than performance on relevant benchmarks. The article emphasizes that benchmarks like SWE-bench for coding, Terminal-Bench for …
-
Hugging Face blog posts cover Intel CPU VLM, MiniMax M2 agents, and Gradio custom frontends
This cluster highlights three distinct technical blog posts from Hugging Face, shared via Mastodon. The first post details how to run Vision-Language Models (VLMs) on Intel CPUs using OpenVINO. The second explores agent…
-
IonRouter launches AI inference service with custom IonAttention engine
IonRouter has launched a new inference service designed for high throughput and low cost, built on its proprietary IonAttention engine. The engine can multiplex multiple models on a single GPU, enabling r…
-
Chinese AI labs release frontier models Qwen 3.5, GLM 5, and MiniMax 2.5
Several Chinese AI labs have released new flagship open-weight models, including Qwen 3.5, GLM 5, and MiniMax 2.5. These releases represent a significant push at the frontier of AI development from these organizations. …