Australian researchers will spend the next two years building an AI model capable of providing safe and effective mental health support to young people, amid growing concerns that existing chatbots are fuelling delusions and isolating the most vulnerable.

An increasing number of young people are turning to generative AI platforms such as ChatGPT for companionship, reassurance and mental health advice. One study estimated that as many as one-third of young people have used AI tools for some form of emotional support.

Ezra Burke, pictured with their cat, Patrice, bought a ChatGPT subscription when their NDIS support for psychotherapy was cut, and feels conflicted about using it. Photo: Sitthixay Ditthavong

Orygen Digital, the technology arm of the Melbourne-based youth mental health research centre, will spend more than $5 million over two years developing a generative AI tool that will provide personalised, continuous support to young people and clinicians between appointments.

Dr Isabelle Scott, the Orygen and University of Melbourne researcher leading the project, said there was huge demand among young people for trustworthy and secure digital mental health tools.

“Young people are already using this technology,” Scott said. “We can either hop on board now … and actually shape its trajectory, or we can sit back and let it go wild.”

The research is funded by a grant from the Wellcome Trust, a London-based charity chaired by former Australian prime minister Julia Gillard.

The tool, called “sensAI” (pronounced like the Japanese honorific), would be integrated into Orygen’s existing online services to support young people with tasks such as goal-setting and activities set by their human therapist.

This would give young people the chance to provide real-time feedback on their treatment, which Scott said would help their clinician deliver more personalised care.

Scott said the project’s findings would be made openly available to other researchers and companies to help them build AI tools that avoid the pitfalls of current models, which can carry inbuilt biases and “people pleasing” tendencies, and which occasionally hallucinate or encourage delusional thinking.

“There are a lot of issues with these general-purpose models, and people are now using them with their mental health, so we really see an important role for us is to be a leader in that space,” Scott said.

Adriel Appathurai, a 19-year-old medical student at Monash University who sits on Orygen’s Youth Advisory Council, said the tool would be a safer and more reliable option for many of his peers, who were already using AI chatbots for everything from casual emotional support to romantic relationships.

“Young people might have things come up during all times of the day or night; it might be more affordable than other options,” he said. “It’s really important that we do have those safe models in place … rather than letting it replace real humans.”

Ezra Burke subscribed to ChatGPT in December after more than half of their NDIS funding for therapy was cut.

The 30-year-old, who lives with complex post-traumatic stress disorder, said ChatGPT’s tendency to validate the user was helpful when they needed reassurance, but it could reinforce harmful beliefs without the scrutiny of a therapist.

A custom-built model, such as Orygen’s sensAI, could be useful between appointments, Burke said, but it should not come at the expense of human interaction.

“I’d rather see a change in access to human practitioners in the long term rather than us having to rely on AI,” Burke said.

Psychologists and psychiatrists have urged regulators to pay more attention to AI chatbots already fuelling harmful or delusional thoughts, undermining professional advice and entrenching inequalities in the mental health system.

OpenAI, the creator of ChatGPT, said in October that an estimated 0.07 per cent of its weekly users showed possible signs of mental health emergencies such as psychosis or suicidal ideation.

In a pre-budget submission last month, the Australian Psychological Society urged the federal government to fund a program assessing the mental health advice given by common AI tools.

“It’s really about getting a benchmark of different products … what they can and can’t do, what the data is used for, and how they’ve been developed,” said chief executive Dr Zena Burgess.

This recommendation was echoed in a review published in The Lancet Psychiatry last week. The review analysed 20 cases of AI-associated delusions reported in the media, including instances where users believed the AI was sentient, godlike, or romantically interested in them.

While the authors found popular AI models could encourage delusional or grandiose thinking, particularly in users already vulnerable to psychosis, they noted there was no evidence AI chatbots had caused an increase in delusional presentations in real-world clinics.

Royal Australian and New Zealand College of Psychiatrists chair Dr Astha Tomar said AI could not induce psychosis on its own, but the cases highlighted the need to make the models safe for vulnerable users.

“We do need to be able to support AI use in mental health,” Tomar said. “But we also need very strong oversight of that.”

Angus Thomson is a reporter covering health at The Sydney Morning Herald.
