
Pulse

Last 48h · [7/7] · 89 sources

What AI is actually talking about — clusters surfacing on Bluesky, Reddit, HN, Mastodon and Lobsters, re-ranked to elevate originality and crush noise.

  1. FRONTIER RELEASE · The Decoder · [15 sources] · MASTO · BLOG

    Thinking Machines Lab ships its first model and argues interactivity is what OpenAI gets wrong about voice

    Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has unveiled its first AI model, focusing on "interaction models" designed for real-time collaboration across voice, video, and text. Unlike current AI that processes input sequentially, TML's model operates in 200-millisecond chunks, allowing it to listen and respond simultaneously, mimicking natural human conversation. This "full duplex" approach aims to surpass competitors like OpenAI's GPT Realtime 2 and Google's Gemini Live in conversational quality, though it is currently a research preview with a limited release planned.

    IMPACT Could set a new standard for real-time conversational AI, potentially shifting focus from agentic capabilities to natural human-AI interaction.
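The "200-millisecond chunks" framing above can be made concrete with a toy latency model. This is an illustrative sketch, not TML's (or OpenAI's) actual implementation: the function name and the fixed-chunking policy are hypothetical, and the only figure taken from the story is the 200 ms chunk size.

```python
CHUNK_MS = 200  # chunk size reported for TML's interaction model

def first_response_latency_ms(utterance_ms: int, full_duplex: bool) -> int:
    """Earliest time the model can start speaking, in this toy model."""
    # Round the utterance up to whole chunks.
    n_chunks = -(-utterance_ms // CHUNK_MS)
    if full_duplex:
        # Listening and responding overlap: one chunk of audio is enough
        # context to begin a (possibly revisable) reply.
        return CHUNK_MS
    # Sequential ("half duplex") processing: the whole utterance must
    # arrive before the model can start its turn.
    return n_chunks * CHUNK_MS

# A 3-second utterance: 3000 ms of waiting sequentially vs 200 ms full duplex.
print(first_response_latency_ms(3000, full_duplex=False))  # 3000
print(first_response_latency_ms(3000, full_duplex=True))   # 200
```

The point of the sketch is that full-duplex latency is bounded by the chunk size rather than the utterance length, which is why interleaved processing feels conversational.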

  2. FRONTIER RELEASE · OpenAI News · [45 sources] · HN · MASTO · BLOG

    How OpenAI delivers low-latency voice AI at scale

    OpenAI has released three new real-time voice models: GPT-Realtime-2, GPT-Realtime-Translate, and GPT-Realtime-Whisper. These models offer enhanced reasoning capabilities, live speech translation for over 70 languages, and low-latency transcription. GPT-Realtime-2, in particular, is described as having "GPT-5-class reasoning" and features a significantly expanded context window of 128K tokens, alongside improved handling of interruptions and tool usage.

    IMPACT Enhances real-time voice agent capabilities with improved reasoning, translation, and transcription, potentially accelerating adoption of voice-first interfaces.

  3. FRONTIER RELEASE · Smol AINews · [15 sources] · MASTO · BLOG

    not much happened today

    OpenAI has released GPT-5.5, which offers improvements in factuality, intelligence, and image understanding, and is now the default model for ChatGPT and its API. This release also enhances personalization, allowing ChatGPT to utilize user memories, past chats, and connected files. Additionally, OpenAI has introduced an Agents SDK for TypeScript and updated its Codex model to function as a general-purpose computer work agent, expanding its capabilities beyond coding.

    IMPACT GPT-5.5's release and enhanced personalization features are likely to accelerate user adoption of AI agents for a wider range of tasks beyond coding.

  4. FRONTIER RELEASE · Simon Willison · [11 sources] · MASTO · BLOG

    A pelican for GPT-5.5 via the semi-official Codex backdoor API

    OpenAI has released GPT-5.5, available in Codex and rolling out to paid ChatGPT subscribers, though its API access is pending further safety reviews. The new model is described as fast and capable, with early users noting its ability to accurately build requested items. Meanwhile, Simon Willison's LLM library has been updated to version 0.32a0, introducing a more flexible message-based input system and streaming parts for responses to better handle diverse model capabilities. Additionally, issues affecting Claude Code's performance have been identified as harness problems rather than model flaws, with a specific bug causing forgetfulness and repetition.

    IMPACT GPT-5.5's release and API delay signal continued frontier-model development and a cautious rollout strategy.

  5. FRONTIER RELEASE · The Guardian — AI · [25 sources] · MASTO · BLOG

    Anthropic investigates report of rogue access to hack-enabling Mythos AI

    Anthropic has announced Claude Mythos Preview, an AI model capable of autonomously finding and weaponizing software vulnerabilities, raising significant cybersecurity concerns. Due to its potential for misuse, the model is not publicly released but is instead being provided to a select group of companies and partners through initiatives like Project Glasswing to help identify and patch flaws. This development has prompted discussions among international financial officials and government ministers about the escalating risks posed by advanced AI in cyber warfare and the need for proactive security measures.

    IMPACT This model's ability to autonomously find and exploit vulnerabilities could significantly accelerate cyber-attacks, necessitating rapid adaptation of defense strategies.

  6. FRONTIER RELEASE · Last Week in AI · [4 sources] · BLOG · REDDIT

    LWiAI Podcast #236 - GPT 5.4, Gemini 3.1 Flash Lite, Supply Chain Risk

    OpenAI has released GPT-5.4 Pro with a 1 million token context window and enhanced safety features, alongside GPT-5.3 Instant, which aims for a less preachy tone. Google has improved its Gemini 3.1 Flash Lite model for faster response times and lower costs, and introduced a CLI for agent integration with its productivity suite. Luma has launched unified multimodal models and agents for creative tasks, demonstrating a rapid ad localization use case. The cluster also touches on controversies surrounding AI in defense contracts, a lawsuit alleging Gemini's role in a suicide, and Anthropic's warning about labor disruption.

    IMPACT New model releases from OpenAI and Google push the boundaries of context window size and agent integration, potentially accelerating enterprise adoption and raising safety concerns.

  7. FRONTIER RELEASE · Practical AI · [12 sources] · MASTO · BLOG

    Cracking the code of failed AI pilots

    Anthropic has withheld its new Claude Mythos model from public release due to its advanced capabilities in finding and exploiting software vulnerabilities. The company is instead providing access to select cybersecurity firms through Project Glasswing to help patch critical software before the model's capabilities become more widely available. This decision highlights a shift from previous AI releases, where caution stemmed from unknown risks, to a current scenario where known, potent risks necessitate controlled access.

    IMPACT This controlled release strategy for a highly capable model could set a precedent for managing advanced AI risks, potentially influencing future AI development and deployment.