OpenAI is no ordinary company. It was founded as a nonprofit in 2015, with a founding announcement that predicted AI could “reach human performance on virtually every intellectual task,” creating the need for an organization that would “build value for everyone rather than shareholders.”
In the following years, AI development became increasingly expensive, and so in 2019, OpenAI created a commercial subsidiary that could raise investment capital while remaining bound to the nonprofit’s charitable mission.
It was around the time of this transition that I joined the company as a junior researcher. My team at OpenAI focused on reinforcement learning: a process in which an AI system interacts with a simulated environment and learns to improve through trial and error. We applied this method first to video games, then to large language models, using human feedback to shape them into early versions of ChatGPT. The same techniques are used today to train systems enjoyed by hundreds of millions of users.
My team’s work was predicated on the key commitment OpenAI made in its 2019 transition: a legally binding duty to put the interests of the public ahead of those of investors. But this commitment is now under threat, as OpenAI pursues a restructuring that would remove caps on investor profits and water down its obligations to its charitable mission.
It is tempting to respond to this cynically and shrug it off: once enough money was at stake, the organization’s transformation from a mission-first nonprofit into a large tech company was inevitable. But that narrative lets the company off the hook for walking back its promises to the public. What’s more, OpenAI’s restructuring plans are still under scrutiny from elected officials, so the public has both a reason and a right to stand up for its interests.
I still own equity in OpenAI, but despite this vested interest in the organization’s financial success, I believe the public’s interests need protecting. Before I left the company in 2023, products were already being released on a tight schedule, and that pace seems only to have intensified, to the point that employees have warned of rushed safety testing. This hasty approach is now leading to releases being rolled back for “fueling anger, urging impulsive actions, or reinforcing negative emotions.” Meanwhile, decision-makers have profit incentives that don’t fully account for these downsides.
The responsibility for holding OpenAI to its charitable mission falls to its nonprofit board of directors. Unfortunately, many have raised concerns that the current nonprofit board lacks the independence and resources needed to perform this role effectively. Since 2019, OpenAI’s commercial operations have grown from nonexistent to generating billions in annual revenue. By contrast, the nonprofit still has no independent staff of its own, and its board members are too busy running their own companies or academic labs to provide meaningful oversight. On top of this, OpenAI’s proposed restructuring threatens to weaken the board’s authority at a time when it instead needs reinforcing.
But there is another way forward. Before proceeding with any restructuring, the board’s immediate priority should be to hire a nonprofit CEO to build out an independent team, free from financial conflicts of interest and accountable to the board alone. The purpose of this team would be to support the board in its oversight duties, and it could grow to perform a number of critical functions.
First, the nonprofit could conduct reviews of executive performance by the standards of the organization’s charitable mission. These could be used by the board to determine executive compensation packages, helping to align incentives at the top of the company.
Second, the nonprofit could provide the board with independent expertise on safety and security. It could review internal safety testing conducted under the company’s Preparedness Framework, as well as external safety testing and third-party audits of the company’s security practices. Frontier deployments could be subject to board approval, supported by summaries of these reviews.
Third, the nonprofit could enhance transparency. By maintaining its own communication channel with the public, it could keep the public informed about the company’s safety and security practices, important changes to internal policies or model specifications, and new capabilities of public concern. It could also conduct and publish its own postmortems of safety incidents, and manage the internal whistleblower hotline.
Finally, the nonprofit could take charge of any activities where profit incentives are likely to diverge from the public interest, such as the company’s lobbying efforts. It could also begin to use the nonprofit’s vast financial resources (deriving from its majority stake in the company) to make grants to support both beneficial uses of AI and risk mitigation efforts.
As AI development continues apace, with no prospect of meaningful federal regulation on the horizon, an empowered nonprofit board is more important than ever. The nonprofit’s activities could serve not only as oversight for OpenAI itself, but also as a blueprint for others. For instance, standards for transparency and third-party review piloted at OpenAI could provide a starting point for future regulation.
OpenAI’s next steps will determine the trajectory of the company for years to come. Instead of irreversibly abandoning its commitments to the public’s interest, it could step back from the brink and reaffirm them, by enhancing the nonprofit board’s ability to fulfill its duty of oversight.
OpenAI’s nonprofit soul can still be saved—but it may require the public to make itself heard as the organization’s rightful beneficiary.