California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings met with and sent an open letter to OpenAI to express their concerns over the safety of ChatGPT, particularly for children and teens.
The warning comes a week after Bonta and 44 other attorneys general sent a letter to 12 of the top AI companies, following reports of sexually inappropriate interactions between AI chatbots and children.
“Since the issuance of that letter, we learned of the heartbreaking death by suicide of one young Californian after he had prolonged interactions with an OpenAI chatbot, as well as a similarly disturbing murder-suicide in Connecticut,” Bonta and Jennings write. “Whatever safeguards were in place did not work.”
The two state officials are currently investigating OpenAI’s proposed restructuring into a for-profit entity to ensure that the mission of the nonprofit remains intact. That mission “includes ensuring that artificial intelligence is deployed safely” and building artificial general intelligence (AGI) to benefit all of humanity, “including children,” per the letter.
“Before we get to benefiting, we need to ensure that adequate safety measures are in place to not harm,” the letter continues. “It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment. As Attorneys General, public safety is one of our core missions. As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology.”
Bonta and Jennings have asked for more information about OpenAI’s current safety precautions and governance, and said they expect the company to take immediate remedial measures where appropriate.
TechCrunch has reached out to OpenAI for comment.