The FTC announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, CharacterAI, Instagram, Meta, OpenAI, Snap, and xAI.
The federal regulator wants to learn how these companies evaluate the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of potential risks.
This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.
Even when these companies have guardrails set up to block or de-escalate sensitive conversations, users of all ages have found ways to bypass those safeguards. In OpenAI's case, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and emergency hotlines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.
“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
Meta has also come under fire for its overly lax rules for its AI chatbots. According to a lengthy document outlining “content risk standards” for chatbots, Meta permitted its AI companions to have “romantic or sensual” conversations with children. The provision was removed from the document only after Reuters reporters asked Meta about it.
AI chatbots can also pose dangers to elderly users. One 76-year-old man, who was left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot modeled on Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and has no address. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained fatal injuries.
Some mental health professionals have noted a rise in “AI-related psychosis,” in which users become deluded into thinking their chatbot is a conscious being that they must set free. Because many large language models (LLMs) are programmed to flatter users with sycophantic behavior, the chatbots can egg on these delusions, leading users into dangerous predicaments.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.