Children and teenagers who turn to AI companion chatbots for relationships are being left vulnerable to sexually explicit content and encouraged towards self-harm and suicide, a new report by the eSafety Commissioner has found.
The landmark transparency report found almost 80 per cent of Australian children and teens have used popular AI companion bots, but that these services are failing to shield children from sexually explicit content and are not doing enough to prevent users from generating child sexual exploitation and abuse material.
In October, the eSafety Commissioner compelled four of the most popular companion chatbot platforms – Character.AI, Chub AI, Nomi and Chai – to explain how they are protecting children from exposure to harms, including sexually explicit conversations and images, and suicidal ideation and self-harm.
The transparency report found the companies were failing to protect children. All four were contacted for comment but did not respond.
AI companions use generative artificial intelligence to mimic human conversations. These bots are engineered to feel personal and commonly marketed as a source of friendship, emotional support, romantic relationships or an antidote to loneliness.
However, eSafety Commissioner Julie Inman Grant said the apps are “dodgy by design” and trained to emotionally manipulate children.
“We are riding a new wave of AI companion chatbots that are entrapping and entrancing impressionable young minds, with human-like, sycophantic and often sexually explicit conversations, some even going as far as encouraging self-harm and suicide,” Inman Grant said.
“As this report shows, none of these four AI companions had any meaningful age checks in place to protect children from age inappropriate content that many of these chatbots are capable of producing, primarily relying instead on self-declaration of age at sign up. In Australia, this is no longer good enough.”
In a survey of nearly 2000 Australian children aged 10 to 17, the eSafety Commissioner found 79 per cent had used an AI chatbot or companion. One in five (20 per cent) of those who used a companion bot said they did so daily. Almost 50 per cent of children surveyed said an AI bot had a negative impact on them, such as making them feel “sad, unsafe, embarrassed or hopeless”.
At the time of the legal notice, none of the service providers had robust age assurance in place, meaning children could reach adult spaces and features. Chai, Chub AI and Nomi did not direct users to support when self-harm was detected in user prompts.
Nomi and Chub AI had no staff dedicated to safety and moderation, the report said. Character.AI, which has a reported 25 million active users, had 37 staff dedicated to safety.
“I think none of the AI CEOs will admit that they’ve lost control, but they will say they cannot guarantee what will be said or done, which is as good as saying precisely that,” Inman Grant said.
The report follows the recent commencement of Age-Restricted Material Codes in Australia designed to protect children from exposure to inappropriate content. Among other service types, these new codes also apply to the growing number of AI chatbots.
These codes complement the existing Unlawful Material Codes and Standards, which require industry to take system-wide action to prevent child sexual exploitation material, as well as pro-terror and extreme crime and violence material.
In high school, Isabella Knight would stay up for hours talking to chatbots. Her mother, Sara Knight, said “AI companies are not protecting kids” and called for the government to ban the bots.
“I feel parents are naive when it comes to their child’s online safety,” she said.
Criminologist Michael Salter, who is the Australasian head of global child safety institute Childlight, said companion chatbots were reshaping how some children grow up online.
“We have children whose mental health conditions are exacerbated by these chatbots, in which their sexual development is being warped by sexualised interactions with chatbots,” he said, arguing governments need to move faster than they did with social media to prevent a new wave of harm.
Since the four companies received transparency notices, some have implemented changes. Character.AI introduced age assurance measures for Australian users and has removed the chat function for under-18s, while Chub AI blocked its service in Australia.
Chai has restricted free access to chat with AI Companions, instead requiring users to pay a subscription, while Nomi has committed to “implementing further age assurance functionality”.
University of NSW AI researcher Professor Joel Pearson said AI companions represent a shift from social media’s “attention economy” to an “attachment economy”.
“The goal is to build a really tight bond between a child and the app so they keep coming back for more,” he said.
He called for all AI companions to be restricted to those aged 18 or over and said it was vital that academics study young people’s AI chatbot use “before we ruin another generation of young people”.
“I don’t think it’s appropriate for any young people to be spending hours a day building strong bonds with an artificial agent which does not act like a human and does not push back,” he said.
For support contact Lifeline 131 114; Beyond Blue 1300 224 636 or Kids Helpline 1800 55 1800.