PulseAugur

OpenAI secures $40B funding, advances AI safety research and regulation

OpenAI has announced significant funding rounds, with one raising $6.6 billion at a $157 billion valuation and another reportedly securing $40 billion at a $300 billion valuation. The company is also focusing on AI safety, releasing a paper on frontier AI regulation and emphasizing the need for social scientists in AI alignment research. Additionally, OpenAI is offering grants for research into AI and mental health, and providing guidance on the responsible use of its ChatGPT models.

Summary written by gemini-2.5-flash-lite from 36 sources.

IMPACT OpenAI's substantial funding and focus on safety and regulation signal continued rapid advancement and a push towards responsible AGI development.

RANK_REASON Multiple significant funding rounds and policy papers from a major AI lab.


COVERAGE [36]

  1. OpenAI News TIER_1 ·

    Responsible and safe use of AI

    Learn how to use AI responsibly with best practices for safety, accuracy, and transparency when using tools like ChatGPT.

  2. OpenAI News TIER_1 ·

    Funding grants for new research into AI and mental health

    OpenAI is awarding up to $2 million in grants for research at the intersection of AI and mental health. The program supports projects that study real-world risks, benefits, and applications to improve safety and well-being.

  3. OpenAI News TIER_1 ·

    New funding to build towards AGI

    Today we’re announcing new funding—$40B at a $300B post-money valuation, which enables us to push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools for the 500 million people who use ChatGPT every week.

  4. OpenAI News TIER_1 ·

    New funding to scale the benefits of AI

    We are making progress on our mission to ensure that artificial general intelligence benefits all of humanity.

  5. OpenAI News TIER_1 ·

    Frontier AI regulation: Managing emerging risks to public safety

  6. OpenAI News TIER_1 ·

    Our approach to AI safety

    Ensuring that AI systems are built, deployed, and used safely is critical to our mission.

  7. OpenAI News TIER_1 ·

    Why responsible AI development needs cooperation on safety

    We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and benefits, technical collaboration, increased transparency, and incentivizing standar…

  8. OpenAI News TIER_1 ·

    AI safety needs social scientists

    We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psycholo…

  9. OpenAI News TIER_1 ·

    AI safety via debate

    We’re proposing an AI safety technique which trains agents to debate topics with one another, using a human to judge who wins.

  10. Hugging Face Blog TIER_1 ·

    Democratizing AI Safety with RiskRubric.ai

  11. arXiv cs.AI TIER_1 · Robert Kirk, Alexandra Souly, Kai Fronsdal, Abby D'Cruz, Xander Davies ·

    Evaluating whether AI models would sabotage AI safety research

    arXiv:2604.24618v1 (new). Abstract: We evaluate the propensity of frontier models to sabotage or refuse to assist with safety research when deployed as AI research agents within a frontier AI company. We apply two complementary evaluations to four Claude models (Mytho…

  12. arXiv cs.AI TIER_1 · Xander Davies ·

    Evaluating whether AI models would sabotage AI safety research

    We evaluate the propensity of frontier models to sabotage or refuse to assist with safety research when deployed as AI research agents within a frontier AI company. We apply two complementary evaluations to four Claude models (Mythos Preview, Opus 4.7 Preview, Opus 4.6, and Sonne…

  13. arXiv cs.AI TIER_1 · Gadi Perl ·

    Bounding the Black Box: A Statistical Certification Framework for AI Risk Regulation

    Artificial intelligence now decides who receives a loan, who is flagged for criminal investigation, and whether an autonomous vehicle brakes in time. Governments have responded: the EU AI Act, the NIST Risk Management Framework, and the Council of Europe Convention all demand tha…

  14. METR (Model Evaluation & Threat Research) TIER_1 ·

    Frontier AI safety regulations: A reference for lab staff

    Frontier AI developers such as OpenAI, Google, Anthropic, xAI, and others are governed by safety and security oblig…

  15. METR (Model Evaluation & Threat Research) TIER_1 ·

    Common Elements of Frontier AI Safety Policies (December 2025 Update)

    A number of developers of large foundation models have committed to corporate protocols that lay out how they will evaluate their models for severe risks and mitigate these risks with information security measures, deployment safeguards, and accountability practices. Beginning…

  16. METR (Model Evaluation & Threat Research) TIER_1 ·

    Frontier AI Safety Policies

  17. LessWrong (AI tag) TIER_1 · Sturb ·

    Contributing to Technical Research in the AI Safety End Game

    With the release of Claude Mythos, it feels like we are approaching the end-game of AI safety, where the number of parties that can make a real impact shrinks down to the handful of labs at the frontier, a few companies too critical to exclude from the conversation, and …

  18. LessWrong (AI tag) TIER_1 · James Newport ·

    Bridging the Gap on AI Safety Policy

    In February, the Swift Centre for Applied Forecasting (https://www.swiftcentre.org/) launched a competition designed to bridge the gap between abstract AI safety research and the realities of government decision-making. …

  19. LessWrong (AI tag) TIER_1 · pinkerton ·

    How could I best use this opportunity? (AI Safety)

    Hello! I have found myself in a position that I think could benefit AI safety/alignment, and I figured I would ask here for suggestions on how to most effectively use it. I am employed at a top 25 public research university. I don't really have a o…

  20. LessWrong (AI tag) TIER_1 · Cole Wyeth ·

    Third Symposium on AIT & ML: AI Safety Applications

    We are organizing a symposium on the intersection of algorithmic information theory and machine learning July 27-29th at Oxford! See the announcement here for details: https://sites.google.com/site/boumedienehamzi/third-symposium-on-mac…

  21. Future of Life Institute TIER_1 · Chase Hardin ·

    AI Safety Index Released

    The Future of Life Institute has released its first safety scorecard of leading AI companies, finding many are not addressing safety concerns while some have taken small initial steps in the right direction.

  22. The Decoder TIER_1 · Maximilian Schreiner ·

    "Tokenmaxxing" spreads at Amazon as employees game internal AI leaderboards

    Amazon is introducing automatic prompt optimization for its Bedrock AI service, which is designed to simplify the time-consuming process of manual prompt engineering and improve performance by up to 22 percent, depending on the task. The new feature is available acro…

  23. Ars Technica — AI TIER_1 · Rafe Rosner-Uddin, Financial Times ·

    Amazon employees are "tokenmaxxing" due to pressure to use AI tools

    Workers are using an internal AI tool to automate non-essential tasks.

  24. 80,000 Hours TIER_1 · Avital Morris ·

    AI safety needs more than engineers

    The post "AI safety needs more than engineers" (https://80000hours.org/2026/04/ai-safety-needs-more-than-engineers/) appeared first on 80,000 Hours.

  25. Practical AI TIER_1 · Practical AI LLC ·

    Staving off disaster through AI safety research

    While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance, and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that b…

  26. Fortune TIER_1 · Eva Roytburg ·

    ‘That doesn’t sound very healthy’: Amazon’s reported tokenmaxxing might gamify AI usage, analyst warns

    Amazon employees are reportedly gaming internal AI leaderboards to inflate their token counts.

  27. Tom's Hardware TIER_1 · Luke James ·

    Amazon employees admit to using AI unnecessarily to pump up internal usage scores — workers complain of intense pressure to use AI tools

    Amazon is the latest hyperscaler where employees have been caught inflating AI token consumption to hit internal usage targets.

  28. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    Because using AI tools is definitely a choice for some people. Amazon employees are "tokenmaxxing" due to pressure to use AI tools https:// arstechnica.com/ai/2

    Because using AI tools is definitely a choice for some people. Amazon employees are "tokenmaxxing" due to pressure to use AI tools https://arstechnica.com/ai/2026/05/amazon-employees-are-tokenmaxxing-due-to-pressure-to-use-ai-tools/ #Amazon #TokenMaxxing #AI #Labor #Tech

  29. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    Amazon staff use AI tool for unnecessary tasks to inflate usage scores In-house MeshClaw tool enables employees to delegate jobs to AI agents and climb company’

    Amazon staff use AI tool for unnecessary tasks to inflate usage scores. In-house MeshClaw tool enables employees to delegate jobs to AI agents and climb company's AI leaderboard. Employees at #Amazon, #Meta, and #Microsoft admit to using #AI unnecessarily to pump up internal …

  30. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    📰 Amazon Employees Are 'Tokenmaxxing' Due To Pressure To Use AI Tools An anonymous reader quotes a report from the Financial Times (via Ars Technica): Amazon em

    📰 Amazon Employees Are 'Tokenmaxxing' Due To Pressure To Use AI Tools. An anonymous reader quotes a report from the Financial Times (via Ars Technica): Amazon employees are using an internal AI tool to automate non-essential tasks in a bid to show managers they are us... 📰 Source:…

  31. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    Amazon employees admit to using AI unnecessarily to pump up internal usage scores — workers complain of intense pressure to use AI tools Employees at Amazon, Me

    Amazon employees admit to using AI unnecessarily to pump up internal usage scores — workers complain of intense pressure to use AI tools. Employees at Amazon, Meta, and Microslop have been gaming AI usage metrics. https://www.tomshardware.com/tech-industry/big-tech/big-tech-has-…

  32. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    Amazon staff are reportedly using AI tools for unnecessary tasks just to boost internal usage scores, a sharp reminder that AI adoption metrics can be gamed. -

    Amazon staff are reportedly using AI tools for unnecessary tasks just to boost internal usage scores, a sharp reminder that AI adoption metrics can be gamed. - https://news.google.com/rss/articles/CBMijAFBVV95cUxQeWNCaHJUTlpVeXM5N09ob3pDNkszaGRoaERoUDhwZkF4SGV3NnJoLVNLNVYxWXNlN…

  33. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    Wow, "tokenmaxxing" is the new buzzword for # Amazon employees desperately pretending to know # AI 🧠💼—because who needs actual # skills when you can just fake i

    Wow, "tokenmaxxing" is the new buzzword for #Amazon employees desperately pretending to know #AI 🧠💼—because who needs actual #skills when you can just fake it till you make it! 🙄 AI tools: helping tech workers maintain the #illusion of #competence since yesterday. 😂 https://…

  34. Mastodon — fosstodon.org TIER_1 · [email protected] ·

    Amazon employees are "tokenmaxxing" due to pressure to use AI tools https:// arstechnica.com/ai/2026/05/ama zon-employees-are-tokenmaxxing-due-to-pressure-to-us

    Amazon employees are "tokenmaxxing" due to pressure to use AI tools https://arstechnica.com/ai/2026/05/amazon-employees-are-tokenmaxxing-due-to-pressure-to-use-ai-tools/ #HackerNews #Amazon #Employees #AI #Tools #Tokenmaxxing #Pressure #Work #Culture

  35. Mastodon — mastodon.social TIER_1 · [email protected] ·

    Some Amazon employees allegedly inflate AI token usage to meet internal weekly AI targets — using an in-house tool for... unnecessary tasks. When KPIs meet AI q

    Some Amazon employees allegedly inflate AI token usage to meet internal weekly AI targets — using an in-house tool for... unnecessary tasks. When KPIs meet AI quotas, creativity finds a way. It's a fascinating reminder that incentive design shapes behavior as much as any security…

  36. Mastodon — mastodon.social TIER_1 Polski(PL) · aisight ·

    US Department of Commerce expands AI safety audit program. Google DeepMind joins CAISI partner companies

    The US Department of Commerce is expanding its artificial intelligence safety audit program. Google DeepMind, Microsoft, and xAI have joined the companies cooperating with CAISI, allowing government experts to test models before their market debut. The initiative aims t…