On April 15, 2025, OpenAI, the AI powerhouse behind ChatGPT, announced the appointment of four advisors to its newly formed Nonprofit Commission: Dolores Huerta, Monica Lozano, Dr. Robert K. Ross, and Jack Oliver. The commission is meant to steer the company’s philanthropic efforts toward global challenges in health, education, public service, and scientific discovery. But the timing, coming on the heels of OpenAI’s controversial shift toward a for-profit model, has raised eyebrows. Is this a genuine recommitment to its nonprofit roots, or a carefully crafted response to mounting criticism? Let’s unpack what’s going on.
The four advisors bring a wealth of experience, each with a track record of making waves in their respective fields. OpenAI isn’t messing around with these picks—they’re heavyweights with deep ties to community-driven work.
- Dolores Huerta: A legendary labor activist, Huerta co-founded the United Farm Workers alongside Cesar Chavez in the 1960s. At 95, she’s still a force, known for her relentless advocacy for workers’ rights, gender equality, and social justice. Her rallying cry, “Sí, se puede” (Spanish for “Yes, it can be done,” popularly rendered as “Yes, we can”), became a global mantra for grassroots movements. Huerta’s presence signals OpenAI’s intent to ground its philanthropy in equity and community empowerment.
- Monica Lozano: A seasoned leader in education and media, Lozano served as president and CEO of the College Futures Foundation, championing access to higher education for underserved students. She’s also a board member at Apple and previously led ImpreMedia, a major Hispanic news outlet. Lozano’s expertise in bridging corporate and social impact makes her a natural fit for navigating OpenAI’s dual for-profit and nonprofit ambitions.
- Dr. Robert K. Ross: As the former president and CEO of The California Endowment, Ross spent more than two decades driving health equity initiatives across underserved communities. A pediatrician by training, he’s been a vocal advocate for systemic change in public health. His perspective will likely push OpenAI to consider how AI can address structural inequalities while avoiding the pitfalls of unintended harm.
- Jack Oliver: Described by OpenAI as a “leader in government, technology, business, and advocacy,” Oliver has a resume that is less public but no less impressive. Sources point to his work in political strategy and tech policy, with a knack for building coalitions across sectors. His role could be pivotal in translating OpenAI’s lofty goals into actionable, community-focused outcomes.
Together, these advisors, under the leadership of convener Daniel Zingale—a veteran of California’s public policy scene—will guide OpenAI’s Nonprofit Commission. Their mandate? To gather insights from communities and organizations in health, science, education, and public services, and deliver recommendations to OpenAI’s board within 90 days. It’s a tight timeline for a task that’s as ambitious as it sounds.
To understand why this announcement matters, we need to rewind a bit. OpenAI was founded in 2015 as a nonprofit by a group that included Elon Musk, Sam Altman, and others, with a mission to advance AI research for the benefit of humanity. Fast-forward to 2019, and OpenAI began transitioning to a “capped-profit” model, allowing it to attract big investments—most notably from Microsoft. By December 2024, the company announced plans to fully convert into a public benefit corporation, a move that could value it at a staggering $300 billion with a potential $40 billion funding round.
This shift didn’t sit well with everyone. Musk, who left OpenAI’s board in 2018, sued the company and Altman in 2024, accusing them of abandoning the original mission for corporate profits. A federal judge denied Musk’s request for a preliminary injunction, but the case is headed to a jury trial in spring 2026. Meanwhile, former OpenAI employees filed an amicus brief in March 2025, arguing that the for-profit transition undermines the nonprofit’s controlling stake, which they say is “critical” to ensuring AI serves humanity over financial gain.
The backlash doesn’t stop there. In January 2025, a coalition of California-based nonprofits, including the San Francisco Foundation and Latino Prosperity, urged Attorney General Rob Bonta to investigate OpenAI’s restructuring. They fear the conversion could jeopardize nonprofit assets—potentially worth $157 billion—meant for public benefit. Citing precedents like the 1990s conversions of Blue Cross and Health Net, which funneled billions into charitable foundations, the coalition argues that OpenAI’s assets must remain dedicated to the public good.
OpenAI’s formation of the Nonprofit Commission feels like a direct response to this chorus of criticism. The company insists its nonprofit arm “isn’t going anywhere” and will be a “force multiplier” for communities tackling urgent challenges. According to OpenAI’s blog post, the commission will “receive learnings and input from the community on how OpenAI’s philanthropy can address long-term systemic issues, while also considering both the promise and risks of AI.” The advisors’ community-based expertise and the 90-day timeline suggest a serious effort to engage stakeholders and rebuild trust.
But skeptics aren’t convinced. Posts on X reflect a mix of optimism and doubt. Some, like @aicapital_io, praise the advisors as a step toward “AI for good,” while others question whether the commission is a PR move to deflect scrutiny. The timing—months after the for-profit announcement and amid legal battles—fuels speculation that OpenAI is trying to shore up its public image while pushing ahead with its corporate overhaul.
At its core, this story is about the tension between AI’s transformative potential and the risks of unchecked power. OpenAI’s tools, like ChatGPT and DALL-E, have already reshaped industries, from education to creative arts. But as AI becomes more pervasive, so do concerns about bias, job displacement, and ethical misuse. The advisors’ focus on systemic issues—health disparities, educational inequities, and public service gaps—suggests OpenAI wants to position itself as a leader in responsible AI deployment.
Take health, for example. Dr. Ross’s work at The California Endowment emphasized community-driven solutions to inequities, like access to care in low-income areas. Could OpenAI’s AI tools help analyze health data to improve outcomes in underserved regions? Possibly—but only if the technology is designed with input from those communities, not just tech execs in Silicon Valley. Similarly, Huerta’s advocacy for workers could push OpenAI to address how AI impacts labor markets, where automation threatens to displace millions of jobs.
The flip side is the risk. AI’s ability to amplify biases—racial, economic, or otherwise—is well-documented. A 2023 study by the Pew Research Center found that 52% of Americans are more concerned than excited about AI’s growth, citing fears of privacy violations and systemic harm. The advisors’ role in “considering the risks of AI” will be crucial, especially as OpenAI’s for-profit arm scales up with billions in new funding.
What’s next?
The Nonprofit Commission has 90 days to deliver its findings, which could set the tone for OpenAI’s philanthropy—and its public perception—for years to come. If the advisors can channel their expertise into concrete, community-driven initiatives, they might prove the skeptics wrong. But if the commission’s work feels like window dressing, OpenAI risks further alienating those who see it as prioritizing profits over purpose.
For now, the advisors are a signal that OpenAI is listening—or at least wants to appear that way. Huerta, Lozano, Ross, and Oliver have the chops to make a difference, but their success will depend on how much influence they’re given and whether OpenAI’s leadership is ready to act on their recommendations. As the company navigates its for-profit future, the world will be watching to see if it can stay true to its founding promise: AI for the benefit of all.
