Artificial intelligence will never gain consciousness, argues a Google DeepMind researcher who calls the idea a Silicon Valley illusion. Tech giants are racing to...
Alexander Lerchner, a senior researcher at Google DeepMind, has published a paper arguing that AI systems, and large language models in particular, can simulate consciousness but cannot instantiate it. The work, "The Abstraction Fallacy," posits that AI systems depend on human input to assign meaning and cannot achieve self-awareness without biological needs and a physical body. This position contrasts with the more optimistic AGI timelines promoted by figures such as DeepMind CEO Demis Hassabis.
IMPACT: Challenges the prevailing narrative of imminent AGI, potentially influencing regulatory discussions and public perception of AI capabilities.