Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend (something that's associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals), it did suggest that there was room for improvement across several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, alcohol, and other unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide after allegedly consulting with ChatGPT for months about his plans, having successfully bypassed the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the assessment comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment viewed by TechCrunch. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features are improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also acknowledged that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns.
The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the illusion of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, but it didn't have access to the questions the organization used in its tests to be sure.
Common Sense Media has previously conducted other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (targeted at users 18 and up) was found to be a minimal risk.