On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and major win for SB 53, at a time when major tech groups like the Consumer Technology Association (CTA) and Chamber of Progress are lobbying against the bill.
"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," Anthropic said in a blog post. "The question isn't whether we need AI governance; it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."
If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.
Senator Wiener's bill specifically focuses on limiting AI models from contributing to "catastrophic risks," which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk, such as preventing AI models from providing expert-level assistance in the creation of biological weapons or from being used in cyberattacks, rather than more near-term concerns like AI deepfakes or sycophancy.
California's Senate approved a prior version of SB 53 but still needs to hold a final vote on the bill before it can advance to the governor's desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener's last AI safety bill, SB 1047.
Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could limit America's innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.
One of the most common arguments against AI safety bills is that states should leave the matter up to the federal government. Andreessen Horowitz's head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today's state AI bills risk violating the Constitution's Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.
However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can't wait for the federal government to act.
"We have long said we want a federal standard," said Clark. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."
OpenAI's chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he shouldn't pass any AI regulation that would push startups out of California, although the letter didn't mention SB 53 by name.
OpenAI's former head of policy research, Miles Brundage, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally." Notably, SB 53 aims to regulate only the world's largest AI companies, specifically those that have generated gross revenue of more than $500 million.
Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53's drafters have "shown respect for technical reality," as well as a "measure of legislative restraint."
Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are bound by no one but themselves, and they sometimes fall behind their self-imposed safety commitments. SB 53 aims to turn these requirements into state law, with financial repercussions if an AI lab fails to comply.
Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these kinds of third-party audits in other AI policy battles, arguing that they are overly burdensome.