OpenAI lines up multibillion-dollar backing as ex-researcher warns of AI risks
A former OpenAI researcher has sounded the alarm on the dangers of artificial intelligence development, even as OpenAI gears up for one of the largest private funding rounds in history. The two contrasting developments highlight the tension between AI's rapid advancement and the concerns surrounding its potential risks.
Steven Adler, who worked on AI safety at OpenAI for four years, recently shared his apprehensions on X (formerly Twitter), revealing that he left the company in November 2024 due to mounting concerns over the pace of AI progress. Describing his time at OpenAI as a "wild ride with lots of chapters," Adler expressed deep unease about the industry's race toward artificial general intelligence (AGI).
"Honestly, I'm pretty terrified by the pace of AI development these days. When I think about where I'll raise a future family or how much to save for retirement, I can't help but wonder: Will humanity even make it to that point?" Adler wrote.
He further criticized the current AI landscape, suggesting that no lab, including OpenAI, has a viable solution for AI alignment—the process of ensuring AI systems act in accordance with human values and safety standards. "An AGI race is a very risky gamble, with a huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely it is that anyone finds one in time," he stated.
Adler also warned that AI companies are caught in a high-risk equilibrium in which competitive pressure forces them to accelerate development, often at the cost of safety. "Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes everyone to speed up. I hope labs can be candid about the real safety regulations needed to stop this," he added.
Despite his concerns, Adler remains engaged in the AI field and is taking a break before exploring new approaches to AI control, safety, and policy.
While Adler warns of AI’s risks, OpenAI is pushing ahead with significant expansion plans. According to sources, SoftBank Group is in talks to lead a funding round of up to $40 billion for OpenAI, valuing the company at an estimated $300 billion, including new funds. If finalized, this would represent one of the largest single funding rounds ever for a private company.
SoftBank has reportedly valued OpenAI at around $260 billion going into the investment round, a sharp rise from the $150 billion valuation the company received just months ago. The funding will likely be structured as convertible notes and may be contingent on OpenAI restructuring its corporate governance, reducing the influence of its non-profit arm.
Masayoshi Son, CEO of SoftBank, has been a major proponent of AI and is looking to solidify SoftBank’s presence in the sector. The investment would come on top of SoftBank’s existing commitments, including a $15 billion investment in Stargate, a joint venture between Oracle, OpenAI, and SoftBank aimed at maintaining U.S. leadership in AI development.
Reports indicate that SoftBank could invest between $15 billion and $25 billion directly into OpenAI, with some of the funds potentially allocated to OpenAI’s commitments to Stargate. The Wall Street Journal previously reported that OpenAI was seeking to raise close to $40 billion, with a potential valuation reaching as high as $340 billion.
Neither OpenAI nor SoftBank has officially commented on the investment talks, but the deal is expected to reinforce OpenAI's dominance in the AI space at a time when competitors like the Chinese startup DeepSeek are gaining traction with cost-effective AI models.
A Tipping Point for AI?
The juxtaposition of Adler's warnings and OpenAI's massive funding push underscores the paradox of AI development today. While experts within the field raise red flags about the unchecked pace of innovation and its potential risks, investors are pouring unprecedented sums into AI companies, signaling confidence in the technology's commercial potential.
OpenAI's growth trajectory has been meteoric, with its flagship product, ChatGPT, driving mainstream adoption of AI applications. However, questions remain about whether AI companies are adequately prioritizing safety measures in their rush to scale.
As OpenAI moves closer to securing billions in new funding, the concerns voiced by its former researcher serve as a stark reminder of the ethical and existential questions that AI pioneers must grapple with. Whether regulators, AI labs, and industry leaders can strike the right balance between innovation and caution remains to be seen.