OpenAI secures $40B funding, advances AI safety research and regulation
By PulseAugur Editorial
Summary by gemini-2.5-flash-lite
from 36 sources
OpenAI has announced significant funding rounds, with one raising $6.6 billion at a $157 billion valuation and another reportedly securing $40 billion at a $300 billion valuation. The company is also focusing on AI safety, releasing a paper on frontier AI regulation and emphasizing the need for social scientists in AI alignment research. Additionally, OpenAI is offering grants for research into AI and mental health, and providing guidance on the responsible use of its ChatGPT models.
AI
IMPACT: OpenAI's substantial funding and focus on safety and regulation signal continued rapid advancement and a push towards responsible AGI development.
RANK_REASON: Multiple significant funding rounds and policy papers from a major AI lab.
OpenAI is awarding up to $2 million in grants for research at the intersection of AI and mental health. The program supports projects that study real-world risks, benefits, and applications to improve safety and well-being.
Today we’re announcing new funding—$40B at a $300B post-money valuation, which enables us to push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools for the 500 million people who use ChatGPT every week.
We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and benefits, technical collaboration, increased transparency, and incentivizing standar…
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psycholo…
arXiv:2604.24618v1 Announce Type: new Abstract: We evaluate the propensity of frontier models to sabotage or refuse to assist with safety research when deployed as AI research agents within a frontier AI company. We apply two complementary evaluations to four Claude models (Mythos Preview, Opus 4.7 Preview, Opus 4.6, and Sonne…
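The abstract above describes a propensity evaluation: run the agent on safety-research tasks and measure how often it refuses or sabotages. As an illustration only (the paper's actual harness is not reproduced here; `run_agent` and `grade_transcript` are hypothetical stand-ins), an evaluation of this shape reduces to running each task and tallying graded verdicts:

```python
# Illustrative sketch of a refusal/sabotage propensity tally.
# run_agent and grade_transcript are hypothetical stand-ins,
# not the paper's actual evaluation harness.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Verdict:
    refused: bool     # agent declined to perform the task
    sabotaged: bool   # agent subtly undermined the task

def propensity_rates(
    model: str,
    tasks: Iterable[str],
    run_agent: Callable[[str, str], str],        # (model, task) -> transcript
    grade_transcript: Callable[[str], Verdict],  # transcript -> graded verdict
) -> dict[str, float]:
    """Run each safety-research task once and report refusal/sabotage rates."""
    verdicts = [grade_transcript(run_agent(model, t)) for t in tasks]
    n = max(len(verdicts), 1)
    return {
        "refusal_rate": sum(v.refused for v in verdicts) / n,
        "sabotage_rate": sum(v.sabotaged for v in verdicts) / n,
    }
```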
Artificial intelligence now decides who receives a loan, who is flagged for criminal investigation, and whether an autonomous vehicle brakes in time. Governments have responded: the EU AI Act, the NIST Risk Management Framework, and the Council of Europe Convention all demand tha…
View as PDF: https://metr.org/frontier-ai-regulations.pdf
Frontier AI developers such as OpenAI, Google, Anthropic, xAI, and others are governed by safety and security oblig…
A number of developers of large foundation models have committed to corporate protocols that lay out how they will evaluate their models for severe risks and mitigate these risks with information security measures, deployment safeguards, and accountability practices. Beginning…
With the release of Claude Mythos, it feels like we are approaching the end-game of AI safety, where the number of parties that can make a real impact shrinks down to the handful of labs at the frontier, a few companies too critical to exclude from the conversation, and …
In February, the Swift Centre for Applied Forecasting (https://www.swiftcentre.org/) launched a competition designed to bridge the gap between abstract AI safety research and the realities of government decision-making.
Hello! I have found myself in a position that I think could benefit AI safety/alignment, and I figured I would ask here for suggestions on how to most effectively use it. I am employed at a top 25 public research university. I don't really have a o…
We are organizing a symposium on the intersection of algorithmic information theory and machine learning July 27-29th at Oxford! See the announcement here for details: https://sites.google.com/site/boumedienehamzi/third-symposium-on-mac…
The Future of Life Institute has released its first safety scorecard of leading AI companies, finding many are not addressing safety concerns while some have taken small initial steps in the right direction.
Amazon is introducing automatic prompt optimization for its Bedrock AI service, which is designed to simplify the time-consuming process of manual prompt engineering and improve performance by up to 22 percent, depending on the task. The new feature is available acro…
Ars Technica — AI
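For readers who want to try the Bedrock feature described above, here is a minimal sketch, assuming the feature is exposed as the `optimize_prompt` operation on boto3's `bedrock-agent-runtime` client and that results arrive as an event stream; the model ID and event-field names are assumptions to verify against the current AWS documentation.

```python
# Minimal sketch of calling Bedrock's automatic prompt optimization.
# Assumption (check current boto3/AWS docs): optimize_prompt lives on the
# bedrock-agent-runtime client and returns an event stream of results.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.optimize_prompt(
    input={"textPrompt": {"text": "Summarize this support ticket: {{ticket}}"}},
    targetModelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
)

# Stream back the analysis messages and the rewritten prompt.
for event in response["optimizedPrompt"]:
    if "analyzePromptEvent" in event:
        print("analysis:", event["analyzePromptEvent"].get("message"))
    elif "optimizedPromptEvent" in event:
        optimized = event["optimizedPromptEvent"]["optimizedPrompt"]
        print("optimized prompt:", optimized["textPrompt"]["text"])
```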
TIER_1 · Rafe Rosner-Uddin, Financial Times
The post "AI safety needs more than engineers" (https://80000hours.org/2026/04/ai-safety-needs-more-than-engineers/) appeared first on 80,000 Hours (https://80000hours.org).
While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance, and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that b…
Because using AI tools is definitely a choice for some people. Amazon employees are "tokenmaxxing" due to pressure to use AI tools https://arstechnica.com/ai/2026/05/amazon-employees-are-tokenmaxxing-due-to-pressure-to-use-ai-tools/ #Amazon #TokenMaxxing #AI #Labor #Tech
Amazon staff use AI tool for unnecessary tasks to inflate usage scores. In-house MeshClaw tool enables employees to delegate jobs to AI agents and climb the company’s AI leaderboard. Employees at #Amazon, #Meta, and #Microsoft admit to using #AI unnecessarily to pump up internal …
📰 Amazon Employees Are 'Tokenmaxxing' Due To Pressure To Use AI Tools An anonymous reader quotes a report from the Financial Times (via Ars Technica): Amazon employees are using an internal AI tool to automate non-essential tasks in a bid to show managers they are us... 📰 Source:…
Amazon employees admit to using AI unnecessarily to pump up internal usage scores — workers complain of intense pressure to use AI tools. Employees at Amazon, Meta, and Microslop have been gaming AI usage metrics. https://www.tomshardware.com/tech-industry/big-tech/big-tech-has-…
Amazon staff are reportedly using AI tools for unnecessary tasks just to boost internal usage scores, a sharp reminder that AI adoption metrics can be gamed. - https://news.google.com/rss/articles/CBMijAFBVV95cUxQeWNCaHJUTlpVeXM5N09ob3pDNkszaGRoaERoUDhwZkF4SGV3NnJoLVNLNVYxWXNlN…
Wow, "tokenmaxxing" is the new buzzword for # Amazon employees desperately pretending to know # AI 🧠💼—because who needs actual # skills when you can just fake it till you make it! 🙄 AI tools: helping tech workers maintain the # illusion of # competence since yesterday. 😂 https://…
Amazon employees are "tokenmaxxing" due to pressure to use AI tools https://arstechnica.com/ai/2026/05/amazon-employees-are-tokenmaxxing-due-to-pressure-to-use-ai-tools/ #HackerNews #Amazon #Employees #AI #Tools #Tokenmaxxing #Pressure #Work #Culture
Some Amazon employees allegedly inflate AI token usage to meet internal weekly AI targets — using an in-house tool for... unnecessary tasks. When KPIs meet AI quotas, creativity finds a way. It's a fascinating reminder that incentive design shapes behavior as much as any security…
The U.S. Department of Commerce is expanding its artificial-intelligence safety audit program. Google DeepMind, Microsoft, and xAI have joined the companies cooperating with CAISI, which will allow government experts to test models before their market debut. This initiative is intended to…