Gemma
PulseAugur coverage of Gemma — every cluster mentioning Gemma across labs, papers, and developer communities, ranked by signal.
5 days with sentiment data
-
Google DeepMind unveils Decoupled DiLoCo for resilient AI model training
Google DeepMind has introduced Decoupled DiLoCo, a novel approach to training advanced AI models that enhances resilience and flexibility across data centers. This system can train models like Google's 12B Gemma model a…
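The summary above does not specify how Decoupled DiLoCo works internally, but the original DiLoCo family of methods follows a common pattern: each worker runs many cheap local optimizer steps, and only occasionally communicates, applying the average of the workers' parameter deltas as a single outer update. The toy sketch below illustrates that pattern in heavily simplified form (scalar parameters, plain SGD inner loop, plain averaging outer step; the published DiLoCo recipe uses AdamW inner steps and Nesterov-momentum outer steps, and the "Decoupled" variant's specifics are not described in this item). All function names here are hypothetical.

```python
# Toy DiLoCo-style round (illustrative only, not Google DeepMind's code).
# Workers train locally from shared parameters; one outer step applies
# the averaged delta, so cross-datacenter traffic happens once per round.

def local_steps(theta, grads, lr=0.1):
    """A worker's inner loop: plain SGD over its local gradient stream."""
    for g in grads:
        theta = theta - lr * g
    return theta

def diloco_round(theta, worker_grads, outer_lr=1.0):
    """One communication round: each worker computes a parameter delta
    locally, then the average delta is applied as a single outer update."""
    deltas = [local_steps(theta, grads) - theta for grads in worker_grads]
    avg_delta = sum(deltas) / len(deltas)
    return theta + outer_lr * avg_delta
```

With two workers where only one sees nonzero gradients, the outer update moves the shared parameters by half of that worker's local progress, which is the communication-saving averaging step in miniature.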
-
CoreWeave enhances multi-cloud AI stack with Google Cloud interconnect and unified orchestration
CoreWeave has announced a suite of services aimed at simplifying multi-cloud AI infrastructure, including a direct interconnect with Google Cloud to reduce deployment times. The company also introduced SUNK Anywhere, a …
-
vLLM releases v0.19.1rc0 with Gemma 4 implementation updates
vLLM has released version 0.19.1rc0, a release candidate that includes updates to its Gemma implementation, reflecting ongoing development and community feedback on the vLLM project.
-
Google DeepMind details 2025 AI breakthroughs with Gemini 3 and new models
Google DeepMind and Google Research have detailed significant AI advancements throughout 2025, highlighted by the release of their Gemini 3 and Gemini 3 Flash models. These models demonstrate state-of-the-art performanc…
-
Cactus launches open-source AI engine for mobile devices
Cactus has released an open-source AI engine designed for mobile devices and wearables, prioritizing low latency and reduced RAM usage. The engine supports multimodal capabilities, including speech, vision, and language…
-
Gemma 3n becomes fully available in the open-source ecosystem
Google DeepMind has fully released Gemma 3n, a mobile-first multimodal model designed for on-device applications. This new architecture supports image, audio, video, and text inputs, with text outputs, and is optimized …
-
BrowserAI enables local LLM execution with WebGPU acceleration
BrowserAI is an open-source project enabling large language models to run directly within a web browser using WebGPU for accelerated performance. This approach ensures 100% privacy as all processing occurs locally, elim…
-
Google releases Gemma 2 2B, ShieldGemma, and Gemma Scope
Google has announced updates to its Gemma family of models, including the release of Gemma 2 2B. This new iteration is designed for efficiency and accessibility, aiming to empower developers with powerful yet lightweigh…
-
Researchers unveil new methods to boost LLM inference speed and efficiency
Google Research has introduced "speculative cascades," a novel method to enhance Large Language Model (LLM) efficiency by merging speculative decoding with standard cascades. This hybrid approach aims to reduce computat…
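The truncated summary names the two ingredients being merged: cascades (a small model answers and defers hard cases to a large model) and speculative decoding (a small model drafts tokens the large model verifies). The sketch below is a minimal toy of one such step, with stand-in probability tables instead of real LLM logits; the deferral rule and all names here are hypothetical simplifications, not Google Research's actual method.

```python
# Toy "speculative cascade" step (illustrative only). Two mock "models"
# return probability distributions over a tiny vocabulary; the small model
# drafts a token, and the large model either ratifies it or overrides it.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def small_model(prefix):
    # Cheap drafter: confident about one common continuation.
    probs = {t: 0.1 for t in VOCAB}
    probs["the"] = 0.6
    total = sum(probs.values())
    return {t: p / total for t, p in probs.items()}

def large_model(prefix):
    # Expensive verifier: a sharper, different distribution.
    probs = {t: 0.05 for t in VOCAB}
    probs["cat"] = 0.8
    total = sum(probs.values())
    return {t: p / total for t, p in probs.items()}

def speculative_cascade_step(prefix, defer_threshold=0.5):
    """Draft with the small model; keep the draft token if the large
    model assigns it enough probability relative to the drafter,
    otherwise defer and take the large model's top token."""
    draft_dist = small_model(prefix)
    draft = max(draft_dist, key=draft_dist.get)
    target_dist = large_model(prefix)
    if target_dist[draft] >= defer_threshold * draft_dist[draft]:
        return draft, "accepted draft"
    best = max(target_dist, key=target_dist.get)
    return best, "deferred to large model"
```

Here the large model disagrees with the drafter's top token, so the step defers; lowering `defer_threshold` makes the cascade accept more drafts, trading quality for speed, which is the efficiency knob the hybrid approach exposes.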
-
Google's Hassabis visits YC; AI chip packaging and model competition discussed
Demis Hassabis of Google visited Y Combinator, expressing enthusiasm for startups utilizing Google's Gemma models. Meanwhile, SemiAnalysis discussed emerging trends in AI accelerator packaging, highlighting test consuma…