It’s not every day you hear a tech giant like OpenAI, the folks behind ChatGPT, say, “Yeah, we’re sticking with the nonprofit vibe.” But that’s exactly what’s happening. In a move that’s got the tech world buzzing, OpenAI chairman Bret Taylor announced the company is scrapping its plan to go full for-profit. Instead, it’s doubling down on its nonprofit roots while tweaking its commercial arm to operate as a public benefit corporation (PBC). Think of it like a company that’s still out to make money but has a legally binding promise to do some good for the world, too.
OpenAI, founded in 2015 by heavyweights like Elon Musk, Sam Altman, and others, began as a nonprofit with a lofty mission: to ensure artificial general intelligence (AGI)—think super-smart AI that can do pretty much anything a human can—benefits all of humanity. It wasn’t about making bank; it was about making sure AI didn’t turn into a sci-fi villain or a toy for the ultra-rich.
But as the years rolled on, building world-class AI got expensive. Like, “hundreds of billions, maybe trillions” expensive, according to Altman himself. To keep up, OpenAI created a capped-profit subsidiary in 2019, letting investors pour in cash with a catch: their returns were limited to 100 times their investment, and any extra profits would flow back to the nonprofit. It was a weird hybrid—part do-gooder, part Silicon Valley hustle.
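To make the capped-return mechanic concrete, here is a minimal sketch of how such a split could work. The 100x cap comes from OpenAI's announced 2019 structure; the function name and dollar figures are hypothetical illustrations, not OpenAI's actual deal terms.

```python
def split_proceeds(investment: float, gross_return: float,
                   cap_multiple: float = 100.0):
    """Split an investor's gross return under a capped-profit structure.

    Returns (investor_share, nonprofit_share). The cap multiple (100x,
    per OpenAI's 2019 structure) limits what the investor keeps; any
    excess flows to the nonprofit. Figures are illustrative only.
    """
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    nonprofit_share = max(gross_return - cap, 0.0)
    return investor_share, nonprofit_share

# A hypothetical $1M investment that grosses $250M: the investor keeps
# $100M (the 100x cap) and the remaining $150M flows to the nonprofit.
print(split_proceeds(1_000_000, 250_000_000))
```

Under this structure, the bigger the upside, the larger the nonprofit's slice becomes, which is exactly the dynamic the new PBC arrangement replaces with uncapped ordinary stock.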
This setup worked for a while, raising billions from big names like Microsoft. But it also drew scrutiny. Critics, including Musk (who’s no longer with OpenAI and now runs rival xAI), argued the structure was a nonprofit in name only, with too much focus on commercial gain. Add to that the drama of 2023, when OpenAI’s nonprofit board briefly fired Altman as CEO, citing concerns about mission drift, and you’ve got a company at a crossroads.
Fast forward to 2025, and OpenAI was ready to ditch the nonprofit entirely, or so it seemed. The plan was to become a full-fledged for-profit company, like Google or Meta, with no caps on investor returns. This would’ve made it easier to raise the kind of cash needed to compete in the AI arms race. Investors loved it—billions were already pledged, contingent on the switch.
Then OpenAI slammed the brakes. Taylor said the decision to abandon the for-profit shift came after "hearing from civic leaders" and talking with the attorneys general of Delaware and California, who oversee OpenAI's nonprofit status. These AGs weren't just rubber-stamping the plan; they could've blocked it outright. According to The Wall Street Journal, both offices raised concerns about whether a for-profit OpenAI would still serve the public good, given its original charitable mission.
So, what’s the new plan? OpenAI’s keeping its nonprofit board in charge—the same one that ousted Altman briefly, so expect some spicy boardroom dynamics. The commercial subsidiary, previously a capped-profit LLC, is morphing into a PBC, a structure already used by AI rivals like Anthropic and xAI, as well as socially conscious brands like Patagonia.
A PBC is like a regular company but with a twist: it’s legally obligated to balance profit with a public mission. In OpenAI’s case, that mission is still about advancing AI for humanity’s benefit. According to OpenAI spokesperson Steve Sharpe, the PBC setup means investors and employees will own regular stock with no cap on appreciation. Translation: they can make a lot of money if OpenAI’s valuation skyrockets, which it probably will, given its last funding round pegged the company at $157 billion, per Bloomberg.
The nonprofit will hold a significant equity stake in the PBC, though the exact percentage is still being worked out with independent financial advisors. As the PBC grows, so will the nonprofit’s resources, which it plans to use for programs in health, education, and scientific discovery. Sharpe emphasized that the nonprofit board will appoint the PBC’s board but retain ultimate control, ensuring the mission doesn’t get lost in the shuffle.
Sam Altman’s take (and that equity question)
In a memo to employees, Altman framed the shift as a natural evolution. “OpenAI is not a normal company and never will be,” he wrote, doubling down on the mission to democratize AI. He argued the old capped-profit structure “made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies.” In other words, the AI landscape is crowded now, and OpenAI needs to play ball differently to stay ahead.
Altman’s memo also painted a utopian picture of AI as “a brain for the world,” empowering everyone from scientists to coders to regular folks facing healthcare challenges. But he was candid about the costs: building and scaling AI will take “hundreds of billions, maybe trillions” of dollars. The PBC structure, he suggested, is the best way to attract that kind of capital while staying true to the mission.
Curiously, Altman still won’t own equity in OpenAI, despite being its public face. Sharpe confirmed there’s “no plan” for him to get a stake, which is odd for a CEO leading a company valued in the hundreds of billions.
The transition isn’t a done deal yet. The California AG’s office said it’s “reviewing the new proposed plan” and remains in “continued conversations” with OpenAI. Delaware’s AG is likely doing the same. These reviews could shape the final structure, especially the nonprofit’s equity stake and how much control it really has over the PBC.
OpenAI’s decision isn’t just corporate reshuffling; it’s a signal about where AI is headed. The shift to a PBC aligns it with Anthropic and xAI, suggesting a trend among AI leaders to blend profit with purpose. But it also highlights the tension between idealism and pragmatism. Can a company really serve “all of humanity” while chasing trillion-dollar valuations? And what happens when the nonprofit board, with its history of bold moves, clashes with the PBC’s profit-driven goals?
For now, OpenAI is betting on a middle path: a nonprofit heart with a capitalist engine. Whether it can pull that off without losing its soul—or its edge in the AI race—remains to be seen. As Altman put it, creating AGI is their “brick in the path of human progress.” The question is, who gets to walk that path, and at what cost?
Sam Altman’s full memo to employees:
OpenAI is not a normal company and never will be.
Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission. We started out staring at each other around a kitchen table, wondering what research we should do. Back then, we did not contemplate products, a business model. We could not contemplate the direct benefits of AI being used for medical advice, learning, productivity, and much more, or the needs for hundreds of billions of dollars of compute to train models and serve users.
We did not really know how AGI was going to get built, or used. A lot of people could imagine an oracle that could tell scientists and presidents what to do, and although it could be incredibly dangerous, maybe those few people could be trusted with it.
A lot of people around OpenAI in the early days thought AI should only be in the hands of a few trusted people who could “handle it”.
We now see a way for AGI to directly empower everyone as the most capable tool in human history. If we can do this, we believe people will build incredible things for each other and continue to drive society and quality of life forward. It will of course not be all used for good, but we trust humanity and think the good will outweigh the bad by orders of magnitude.
We are committed to this path of democratic AI. We want to put incredible tools in the hands of everyone. We are amazed and delighted by what they are creating with our tools, and how much they want to use them. We want to open source very capable models. We want to give our users a great deal of freedom in how we let them use our tools within broad boundaries, even if we don’t always share the same moral framework, and to let our users make decisions about the behavior of ChatGPT.
We believe this is the best path forward—AGI should enable all of humanity to benefit each other. We realize some people have very different opinions.
We want to build a brain for the world and make it super easy for people to use for whatever they want (subject to few restrictions; freedom shouldn’t impinge on other people’s freedom, for example).
People are using ChatGPT to increase their productivity as scientists, coders, and much more. People are using ChatGPT to solve serious healthcare challenges they are facing and learn more than ever before. People are using ChatGPT to get advice about how to handle difficult situations. We are very proud to offer a service that is doing so much for so many people; it is one of the most direct fulfillments of our mission we can imagine.
But they want to use it much more; we currently cannot supply nearly as much AI as the world wants and we have to put usage limits on our systems and run them slowly. As the systems become more capable, they will want to use it even more, for even more wonderful things.
We had no idea this was going to be the state of the world when we launched our research lab almost a decade ago. But now that we see this picture, we are thrilled.
It is time for us to evolve our structure. There are three things we want to accomplish:
- We want to be able to operate and get resources in such a way that we can make our services broadly available to all of humanity, which currently requires hundreds of billions of dollars and may eventually require trillions of dollars. We believe this is the best way for us to fulfill our mission and to get people to create massive benefits for each other with these new tools.
- We want our nonprofit to be the largest and most effective nonprofit in history that will be focused on using AI to enable the highest-leverage outcomes for people.
- We want to deliver beneficial AGI. This includes contributing to the shape of safety and alignment; we are proud of our track record with the systems we have launched, the alignment research we have done, processes like red teaming, and transparency into model behavior with innovations like the model spec. As AI accelerates, our commitment to safety grows stronger. We want to make sure democratic AI wins over authoritarian AI.
We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware. We look forward to advancing the details of this plan in continued conversation with them, Microsoft, and our newly appointed nonprofit commissioners.
OpenAI was founded as a nonprofit, is today a nonprofit that oversees and controls the for-profit, and going forward will remain a nonprofit that oversees and controls the for-profit. That will not change.
The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose driven companies like Patagonia. We think it makes sense for us, too.
Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission. And as the PBC grows, the nonprofit’s resources will grow, so it can do even more. We’re excited to soon get recommendations from our nonprofit commission on how we can help make sure AI benefits everyone—not just a few. Their ideas will focus on how our nonprofit work can support a more democratic AI future, and have real impact in areas like health, education, public services, and scientific discovery.
We believe this sets us up to continue to make rapid, safe progress and to put great AI in the hands of everyone. Creating AGI is our brick in the path of human progress; we can’t wait to see what bricks you will add next.
Sam Altman
May 2025