Editorial note: At GadgetBond, we typically steer clear of overtly political content. However, when technology and gadgets, even the unconventional kind, intersect with current events, we believe it warrants our attention.
On Monday, Anthropic quietly but decisively threw its weight behind SB 53, the latest California effort to regulate the riskiest class of artificial-intelligence systems. The endorsement — published on the company’s blog and on X/Twitter — marks one of the first times a major frontier-AI developer has publicly backed a state bill that would write the industry’s transparency and safety practices into law.
For lawmakers and safety advocates pushing the measure, Anthropic’s support is more than symbolic. SB 53 — formally titled the Transparency in Frontier Artificial Intelligence Act — would require the largest AI model developers to publish a formal safety framework, file public safety and security reports before deploying powerful models, and put in place protections for whistleblowers who flag dangerous practices. The bill specifically targets so-called “frontier” or “foundation” models operated by large developers.
Anthropic’s calculus, laid out in its blog post and subsequent social posts, is a familiar one inside the AI policy debate: the company would still prefer a single federal standard, but it is not willing to wait for Washington. As Anthropic put it, in language the company highlighted on X, “The question isn’t whether we need AI governance — it’s whether we’ll develop it thoughtfully today or reactively tomorrow.” That argument helped sell the endorsement to a skeptical outside world.
What SB 53 would do — and what it avoids
SB 53 focuses on the tail of AI risk: catastrophic harms, which the bill defines in terms of mass casualties or upwards of a billion dollars in damage, rather than everyday harms such as fraud, disinformation, or biased hiring models. Under the bill’s text, large developers would be required to document their testing procedures for catastrophic risks, disclose certain safety incidents to the attorney general, and maintain internal safety protocols for covered models. The legislation also carves out whistleblower protections so employees who raise alarms about a genuine, substantial danger are shielded from retaliation.
That narrowness is intentional. SB 53’s drafters say they want to avoid sweeping mandates that reach every AI use case, instead targeting high-impact scenarios where a model could materially enable bioweapon design, major cyberattacks, or similarly devastating outcomes.
A strategic endorsement
Anthropic’s sign-on comes at a politically sensitive moment. Last year, California advanced a much broader piece of legislation — SB 1047 — that sought to impose stricter safety obligations on frontier models and was ultimately vetoed by Governor Gavin Newsom. Newsom’s veto, delivered in September 2024, cited concerns that the earlier bill’s framework might create a misleading regulatory line based only on computational thresholds and leave gaps for smaller but dangerous deployments. That history loomed over SB 53’s drafting and is part of why proponents have tried to craft a narrower, more defensible approach this session.
Inside the industry, that narrower approach has prompted an awkward split: some firms and policy teams have leaned into the idea that reasonable, targeted rules are acceptable; others — and the trade groups that represent them — keep warning about costs, constitutional problems, and the risk of driving startups out of California.
The political tug-of-war
The opposition is vocal and well-resourced. Venture and tech-policy outfits — including high-profile voices connected to Andreessen Horowitz and Y Combinator — have argued that state-level rules risk overreach, create compliance headaches for smaller companies, and could clash with the U.S. Constitution’s Commerce Clause. Those groups and some Big Tech players have pushed for federal solutions instead of a state-by-state patchwork.
At the same time, successive administrations in Washington have signaled different stances on state-level action, and federal pushes to limit or preempt state laws have repeatedly entered the conversation, creating the prospect of legal clashes if states move first. A proposed federal moratorium on state AI rules, floated in congressional budget negotiations this year, became a flashpoint that has only increased the urgency among state lawmakers who argue that technology is moving faster than federal politics.
OpenAI, for its part, has been lobbying the governor directly. In August, OpenAI’s chief global affairs officer, Chris Lehane, sent a letter urging Newsom to align California’s approach with international frameworks and to avoid duplicative or punitive state mandates that might push startups out of the state. Critics noted that the letter never named SB 53 explicitly but read it as part of the broader industry push. OpenAI’s former head of policy research, Miles Brundage, blasted the letter on X as “filled with misleading garbage about SB 53 and AI policy generally,” underscoring how personal and public the lobbying fight has become.
Experts see SB 53 as comparatively modest
Even many skeptics of earlier, wider California bills have told reporters that SB 53 is a more modest, pragmatic attempt. Dean Ball, a former White House AI policy adviser who has been critical of SB 1047, recently described SB 53’s drafters as showing “respect for technical reality” and suggested the bill’s more restrained posture gives it a shot at becoming law. That assessment has helped proponents frame the bill as technically minded, not theatrical.
Still, the meat of the fight is technical and legal: opponents warn that some disclosure and audit requirements could expose trade secrets, create security risks if reports are misused, or simply saddle smaller teams with compliance burdens that stifle innovation. Trade groups like the Consumer Technology Association and the Software & Information Industry Association have urged Newsom to oppose elements they consider unworkable. Supporters counter that the largest frontier developers already publish safety reports voluntarily; the bill’s purpose is to make the most important disclosures enforceable rather than optional.
Where SB 53 stands and what comes next
As of early September, lawmakers had amended SB 53 multiple times in committee and on the Assembly floor; the official legislative page shows several amendments filed through September 5. The bill’s authors have been negotiating language with stakeholders, and opponents have sought changes to, or removal of, audit provisions and other reporting rules. That back-and-forth is why SB 53’s path remains uncertain: it needs a final Assembly vote and, if it passes, the governor’s signature to become law.
For advocates of tighter guardrails, Anthropic’s endorsement will be used as proof that some leading developers can live under clearer rules. For critics, it will read as a strategic gesture — or, at least, an industry fracture. Either way, SB 53 has suddenly become the most consequential battleground over where and how the United States will draw its first lines around the riskiest uses of AI.
If the bill does reach Governor Gavin Newsom’s desk, he’ll again face the political calculus that scuttled SB 1047: can a state bill credibly manage catastrophic risk without hobbling innovation or triggering preemption fights with Washington? Lawmakers on both sides now know the answer to that question will help determine whether California sets the standard — or gets dragged into a protracted legal and political showdown.