Gemma 4-31B
PulseAugur coverage of Gemma 4-31B: every cluster mentioning the model across labs, papers, and developer communities, ranked by signal.
-
Gemma-4-31B model hits 463K tokens/sec on TPU v6e-4 benchmarks
A performance report details the Gemma-4-31B model's capabilities on Cloud TPU v6e-4 hardware, achieving a peak prefill throughput of 463,345 tokens/sec. The benchmarks indicate that the dense 31B model offers comparabl…
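As a rough illustration of how a tokens/sec figure like this is derived, the sketch below computes prefill throughput as tokens processed divided by wall-clock time. All numbers are placeholders for illustration, not values from the reported benchmark:

```python
# Toy throughput calculation: tokens/sec = tokens processed / wall-clock time.
# batch_size, prefill_len, and elapsed_s are illustrative placeholders.
batch_size = 8
prefill_len = 2048      # tokens per sequence in the prefill phase
elapsed_s = 0.125       # hypothetical wall-clock time for one prefill step

tokens_per_sec = batch_size * prefill_len / elapsed_s
print(tokens_per_sec)
```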
-
Gemma 4 31B weights show cross-modal transfer via thin trainable interface
Researchers have demonstrated that frozen weights from the Gemma 4 31B text-pretrained model can be effectively reused across different modalities, including robotics and associative recall tasks. By employing a thin, t…
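The frozen-backbone pattern described here can be sketched with a toy PyTorch example. The backbone below is a small stand-in module, not the actual Gemma weights, and the dimensions and layer names are assumptions; only the thin encoder and task head receive gradients:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained text backbone; real weights would be
# loaded from a checkpoint instead of randomly initialized.
backbone = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 64))
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained weights

# Thin trainable interface: map a new modality (e.g. robot state vectors)
# into the backbone's input space, and read out with a small task head.
encoder = nn.Linear(8, 16)   # new-modality features -> backbone input dim
head = nn.Linear(64, 4)      # backbone output -> task targets

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3
)

x = torch.randn(32, 8)   # batch of new-modality inputs
y = torch.randn(32, 4)   # regression targets for the toy task
for _ in range(5):
    loss = nn.functional.mse_loss(head(backbone(encoder(x))), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the backbone's parameters have `requires_grad=False`, backpropagation never populates their gradients, so only the interface layers are updated.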
-
AI models achieve high verification success with formal code generation
Researchers have developed a new dataset, NL2VC-60, containing 60 algorithmic problems to aid in generating verified code from natural language. They evaluated seven open-weight LLMs using various prompting strategies, …