Langfuse
PulseAugur coverage of Langfuse — every cluster mentioning Langfuse across labs, papers, and developer communities, ranked by signal.
2 days with sentiment data
-
AI Agents Risk Budget Overruns and Data Leaks Without Gateways
Running multiple AI agents without proper oversight can lead to significant financial and security risks. Common issues include infinite agent loops that drain budgets due to a lack of delegation depth limits and per-ag…
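The guardrails this cluster says are missing can be sketched in a few lines. The following is an illustrative gateway (hypothetical names, not any specific product's API) that enforces a delegation-depth limit and a per-agent spend budget before forwarding a model call:

```python
class BudgetExceeded(Exception):
    pass

class DepthExceeded(Exception):
    pass

class AgentGateway:
    """Toy gateway: refuse calls that exceed delegation depth or budget."""

    def __init__(self, max_depth=3, budget_usd=5.00):
        self.max_depth = max_depth
        self.default_budget = budget_usd
        self.budgets = {}  # agent_id -> remaining budget in USD

    def call(self, agent_id, cost_usd, depth):
        """Check both limits before forwarding a model call for an agent."""
        if depth > self.max_depth:
            # Stops runaway agent-spawns-agent chains (infinite loops).
            raise DepthExceeded(f"{agent_id}: depth {depth} > {self.max_depth}")
        remaining = self.budgets.setdefault(agent_id, self.default_budget)
        if cost_usd > remaining:
            # Stops a single agent from draining the whole budget.
            raise BudgetExceeded(f"{agent_id}: budget exhausted")
        self.budgets[agent_id] = remaining - cost_usd
        return self.budgets[agent_id]
```

Real gateways add accounting across agents and time windows, but the core check is this simple: every call passes through one choke point that knows the depth and the remaining spend.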
-
Glad Labs enhances MCP platform with improved error handling and testing
Glad Labs has significantly improved its MCP platform by addressing silent failures and enhancing observability. Key updates include fixing the voice bridge to fail loudly rather than silently, re-enabling previously sk…
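The "fail loudly rather than silently" pattern mentioned here is worth making concrete. A minimal sketch (illustrative function names, not Glad Labs' code): the silent variant swallows the error and returns a plausible-looking default, while the loud variant surfaces it so observability tooling can record it.

```python
def transcribe_silent(audio, backend):
    """Silent failure: an error looks like an empty transcript."""
    try:
        return backend(audio)
    except Exception:
        return ""

def transcribe_loud(audio, backend):
    """Loud failure: re-raise with context so the error is observable."""
    try:
        return backend(audio)
    except Exception as exc:
        raise RuntimeError(f"voice bridge failed: {exc!r}") from exc
```

The silent version is the one that produces mysterious blank outputs downstream; the loud one turns the same fault into a logged, attributable event.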
-
Developers build LLM observability tools and audit existing setups to track costs and errors
A developer has created a zero-configuration Python tool called llm-lens to monitor API calls to OpenAI and Anthropic, tracking costs, latency, and errors without requiring SDK changes or account setup. The tool uses mo…
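The zero-configuration approach described here, instrumenting an SDK without the caller changing its code, typically works by wrapping client methods in place. A minimal sketch of that technique (the `FakeClient` stands in for a real SDK; this is not llm-lens itself):

```python
import functools
import time

calls = []  # recorded telemetry: (method_name, seconds, error_or_None)

def instrument(obj, method_name):
    """Replace obj.method_name with a wrapper that records latency and errors."""
    original = getattr(obj, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        error = None
        try:
            return original(*args, **kwargs)
        except Exception as exc:
            error = repr(exc)
            raise  # caller still sees the failure
        finally:
            calls.append((method_name, time.perf_counter() - start, error))

    setattr(obj, method_name, wrapper)

class FakeClient:
    """Stand-in for a real API client such as OpenAI's or Anthropic's."""
    def complete(self, prompt):
        return f"echo: {prompt}"

client = FakeClient()
instrument(client, "complete")
client.complete("hi")  # recorded in `calls`, caller unchanged
```

Cost tracking follows the same shape: the wrapper inspects the response's token usage and multiplies by a per-model price table before appending the record.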
-
Langfuse guide covers MLOps concepts, code, and interview prep
This article provides a comprehensive guide to Langfuse, an open-source observability platform for LLM applications. It covers fundamental concepts, practical code examples, and preparation for interviews related to MLO…
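The fundamental concepts such a guide covers reduce to a small data model: a trace groups the steps of one request, and each step is either a span (retrieval, tool use) or a generation (a model call with token usage). A pure-Python toy model of those concepts, not the Langfuse SDK's actual API:

```python
import time
import uuid

class Trace:
    """Toy trace: one request's ordered observations."""

    def __init__(self, name):
        self.id = uuid.uuid4().hex
        self.name = name
        self.observations = []

    def span(self, name, metadata=None):
        """Record a non-LLM step such as retrieval or tool use."""
        self.observations.append({
            "type": "SPAN", "name": name,
            "metadata": metadata or {}, "ts": time.time(),
        })

    def generation(self, model, prompt, completion, usage):
        """Record one model call with token usage for cost accounting."""
        self.observations.append({
            "type": "GENERATION", "model": model,
            "prompt": prompt, "completion": completion,
            "usage": usage, "ts": time.time(),
        })

trace = Trace("rag-query")
trace.span("retrieval", {"docs": 3})
trace.generation("some-model", "Q?", "A.", {"input": 12, "output": 5})
```

The real platform adds persistence, scoring, and a UI on top, but reasoning about traces, spans, and generations in these terms is most of what interview-style questions probe.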
-
AI startups Cekura and Hamming launch automated testing for voice agents
Cekura and Hamming have launched platforms designed to automate the testing and monitoring of AI voice and chat agents. These services address the challenge of manually verifying agent performance across numerous conver…
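The automation these platforms sell can be pictured as a scenario harness: scripted user turns are replayed against the agent and each reply is checked against an expectation. A minimal sketch under that assumption (hypothetical names, not either vendor's actual API):

```python
def run_scenario(agent, turns):
    """Replay (user_message, check) pairs; return the turns that failed.

    `agent` maps a user message to a reply string; `check` is a predicate
    on the reply, e.g. a keyword match or a policy check.
    """
    failures = []
    for i, (message, check) in enumerate(turns):
        reply = agent(message)
        if not check(reply):
            failures.append((i, message, reply))
    return failures

def toy_agent(message):
    """Stand-in for a deployed voice/chat agent."""
    return "Goodbye!" if "bye" in message else f"You said: {message}"

failures = run_scenario(toy_agent, [
    ("hello", lambda r: "hello" in r),
    ("bye",   lambda r: r == "Goodbye!"),
])
```

Running hundreds of such scenarios per release is what replaces the manual verification the summary describes; voice products add a speech layer on both ends of the same loop.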
-
Skope launches outcome-based billing for AI software, shifting risk to vendors
Skope, a new billing system, has launched to support outcome-based pricing for software products, particularly targeting the burgeoning AI market. The platform allows companies to charge customers only when their softwa…
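Outcome-based metering differs from usage-based metering in one place: the charge is gated on a verified result rather than on the call itself. A minimal sketch of that idea (hypothetical event shape, not Skope's actual API):

```python
def billable_total(events, price_per_outcome):
    """Charge a flat price per verified-successful outcome; failures are free.

    This is where the risk shifts to the vendor: metered usage with a
    failed outcome generates cost for the vendor but no revenue.
    """
    return sum(
        price_per_outcome
        for event in events
        if event.get("outcome") == "success"
    )

events = [
    {"id": 1, "outcome": "success"},
    {"id": 2, "outcome": "failed"},   # metered, but not billed
    {"id": 3, "outcome": "success"},
]
```

Everything hard about such a system lives in defining and verifying "success" per product; the billing arithmetic itself stays this small.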