PulseAugur / Pulse

Pulse

last 48h
[5/5] 89 sources

What AI is actually talking about — clusters surfacing on Bluesky, Reddit, HN, Mastodon and Lobsters, re-ranked to elevate originality and crush noise.

  1. COMMENTARY · LessWrong (AI tag) · BLOG

    Epistemic Immunodepression in the Age of AI

    A pediatric surgeon and researcher hypothesizes that artificial intelligence is eroding science's self-correction mechanisms, a phenomenon they term "epistemic immunodepression." The erosion stems from reduced epistemic friction as AI accelerates research synthesis, difficulty tracing AI reasoning, a trend toward research monoculture, and the growing use of AI to both generate and review scientific content. Empirical signals, such as fabricated references in AI-assisted reviews and a lack of interpretability in published AI models, support the hypothesis, prompting calls for urgent interventions such as verifiable research records and AI accountability in peer review.

    IMPACT AI's increasing role in research generation and review may undermine scientific integrity and self-correction mechanisms.

  2. COMMENTARY · Mastodon — mastodon.social · [2 sources] · MASTO

    RT @rumgewieselt: 3× GTX 1080 Ti (2017, Pascal) + llama.cpp PR #22673 (MTP), more at Arint.info #AI #GPU #llama #MachineLearning #OpenSource #Qwen #arint

    An Anthropic engineer, author of "Building Effective Agents," has shared a 14-minute presentation on the topic. Separately, a demonstration showed three 2017-era GTX 1080 Ti GPUs running Qwen models via llama.cpp's MTP feature.

    IMPACT Insights into effective agent building and demonstrations of running models on older hardware offer practical value for AI developers.

  3. COMMENTARY · LessWrong (AI tag) · BLOG

    Are LLMs persisting interlocutors?

    A recent paper by Jonathan Birch proposes a "Centrist Manifesto" for AI consciousness, highlighting two key risks: widespread misattribution of consciousness to AI due to a "persisting interlocutor illusion," and the possibility that genuine, albeit alien, forms of consciousness exist within LLMs that current detection methods cannot confirm. The author of this article challenges Birch's claim that LLMs cannot be persisting interlocutors, arguing against the "physical criterion" Birch uses to support it. That criterion holds that identity requires continuous physical processes, a condition LLMs do not meet because their processing can be spread across disparate data centers.

    IMPACT Explores the philosophical implications of LLM interactions, questioning whether users can form persistent relationships with AI and the criteria for AI consciousness.

  4. COMMENTARY · Mastodon — fosstodon.org · [20 sources] · MASTO

    My 5 favorite open source operating systems that aren't Linux Looking for non-Linux open-source options? From ghosts of past operating systems to fascinating wo…

    The article discusses the potential pitfalls of agentic AI in software development, highlighting risks in testing, security, and maintenance that could undermine projects. It argues that developers need to adapt their strategies for managing and validating machine-generated code at scale. The piece also explores the evolution of user interfaces from static screens to dynamic, 'just-in-time' generated layers, urging preparation for these 'disposable' UIs.

    IMPACT Agentic AI presents new challenges in software development, requiring updated approaches to testing and security for machine-generated code.

  5. COMMENTARY · Mastodon — mastodon.social · Polish (PL) · [4 sources] · MASTO

    Artificial intelligence will never gain consciousness. A Google DeepMind researcher exposes the Silicon Valley illusion. Tech giants are racing to...

    A senior researcher at Google DeepMind, Alexander Lerchner, has published a paper arguing that AI, particularly large language models, can simulate but not instantiate consciousness. His work, "The Abstraction Fallacy," posits that AI systems require human input to assign meaning and cannot achieve self-awareness without biological needs and a physical body. This perspective contrasts with the more optimistic AGI timelines often promoted by figures such as DeepMind CEO Demis Hassabis.

    IMPACT Challenges the prevailing narrative of imminent AGI, potentially influencing regulatory discussions and public perception of AI capabilities.