
Pulse

Last 48h · 89 sources

What AI is actually talking about — clusters surfacing on Bluesky, Reddit, HN, Mastodon and Lobsters, re-ranked to elevate originality and crush noise.

  1. I was trying to build persistent memory but ended up with this!

    A developer created GrapeRoot, a tool that optimizes how LLMs such as Anthropic's Claude Code interact with large codebases. Rather than relying on standard context engineering, it tackles the cost and inefficiency of repeatedly re-reading code by building a knowledge graph of the repository and pre-injecting the relevant pieces into context. The author's benchmarks indicate improved quality and significantly lower costs, with savings of 40-60% on certain tasks compared to vanilla Claude Code. A sketch of the pre-injection pattern follows this item.

    IMPACT Optimizes LLM interaction with codebases, potentially reducing costs for developers working with large code repositories.
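    The summary doesn't include GrapeRoot's source, so the following is a minimal sketch of the general knowledge-graph pre-injection pattern it describes, not the tool's actual implementation. `build_code_graph`, `preinject_context`, and the symbol tuples are hypothetical names, and `networkx` stands in for whatever graph store GrapeRoot really uses.

    ```python
    # Hypothetical sketch of knowledge-graph pre-injection; not GrapeRoot's code.
    import networkx as nx

    def build_code_graph(symbols):
        """symbols: iterable of (name, path, snippet, references) tuples,
        e.g. produced by a one-time parse of the repository."""
        g = nx.DiGraph()
        for name, path, snippet, refs in symbols:
            g.add_node(name, path=path, snippet=snippet)
            for ref in refs:
                g.add_edge(name, ref)  # edge: `name` references `ref`
        return g

    def preinject_context(graph, seeds, max_hops=2, budget_chars=8000):
        """Walk outward from the symbols a task mentions and collect their
        snippets, instead of re-reading whole files on every request."""
        visited, frontier, chunks = set(), set(seeds), []
        for _ in range(max_hops + 1):
            next_frontier = set()
            for node in frontier:
                if node in visited or node not in graph:
                    continue
                visited.add(node)
                data = graph.nodes[node]
                chunks.append(f"# {data['path']} :: {node}\n{data['snippet']}")
                next_frontier.update(graph.successors(node))
            frontier = next_frontier - visited
        # A hard character budget keeps the injected prefix cheap and stable.
        return "\n\n".join(chunks)[:budget_chars]

    # Usage: prepend the selected slice to the task prompt once, up front.
    # graph = build_code_graph(parse_repo("."))  # parse_repo is hypothetical
    # prompt = preinject_context(graph, {"process"}) + "\n\nFix the bug in process()."
    ```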

  2. Prompt-caching – auto-injects Anthropic cache breakpoints (90% token savings)

    A new plugin called prompt-caching significantly reduces token costs when using Anthropic's Claude models. It automatically identifies stable content, such as system prompts and file reads, and marks it with cache breakpoints, cutting costs by up to 90% on repeated interactions. Anthropic has since introduced its own auto-caching feature, but prompt-caching offers better observability and also applies to custom applications built on the Anthropic SDK, addressing a different layer of cost optimization. A minimal breakpoint example follows this item.

    IMPACT Developers can significantly cut Claude API costs for their applications and agents by adopting this plugin.
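    For custom applications, the effect the plugin automates is available directly through the Anthropic SDK's `cache_control` field on content blocks. A minimal Python example follows; the file path is a placeholder and the model name may need updating:

    ```python
    import anthropic

    # Placeholder path; any large, stable blob benefits the same way.
    big_file = open("src/server.py").read()

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": "You are a code-review assistant.\n\n" + big_file,
                # Cache breakpoint: everything up to and including this block
                # is written to the prompt cache on the first call and read
                # back at a fraction of the normal input price afterwards.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": "Summarize the bug in process()."}],
    )
    print(response.usage)  # cache_creation_input_tokens vs. cache_read_input_tokens
    ```

    Cache reads are billed at roughly a tenth of the base input-token rate, which is where the headline "up to 90%" figure comes from; the plugin's value is placing these breakpoints automatically and surfacing the hit/miss numbers.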