
Pulse

Last 48h · 89 sources

What AI is actually talking about — clusters surfacing on Bluesky, Reddit, HN, Mastodon and Lobsters, re-ranked to elevate originality and crush noise.

  1. FRONTIER RELEASE · Last Week in AI · [4 sources] · BLOG · REDDIT

    LWiAI Podcast #236 - GPT 5.4, Gemini 3.1 Flash Lite, Supply Chain Risk

    OpenAI has released GPT-5.4 Pro with a 1 million token context window and enhanced safety features, alongside GPT-5.3 Instant, which aims for a less preachy tone. Google has improved its Gemini 3.1 Flash Lite model for faster responses and lower costs, and introduced a CLI for agent integration with its productivity suite. Luma has launched unified multimodal models and agents for creative tasks, demonstrating a rapid ad-localization use case. The cluster also touches on controversies surrounding AI in defense contracts, a lawsuit alleging Gemini's role in a suicide, and Anthropic's warning about labor disruption.
    IMPACT New model releases from OpenAI and Google push the boundaries of context window size and agent integration, potentially accelerating enterprise adoption and raising safety concerns.

  2. FRONTIER RELEASE · Practical AI · [12 sources] · MASTODON · BLOG

    Cracking the code of failed AI pilots

    Anthropic has withheld its new Claude Mythos model from public release due to its advanced capabilities in finding and exploiting software vulnerabilities. The company is instead providing access to select cybersecurity firms through Project Glasswing to help patch critical software before the model's capabilities become more widely available. This decision highlights a shift from previous AI releases, where caution stemmed from unknown risks, to a current scenario where known, potent risks necessitate controlled access.
    IMPACT This controlled release strategy for a highly capable model could set a precedent for managing advanced AI risks, potentially influencing future AI development and deployment.