
ChatGPT is being trained to flag suicidal youths to authorities following teen death, CEO announces

Amid a rash of suicides, the company behind ChatGPT could start alerting police over young users who talk about taking their own lives, the firm’s CEO and co-founder, Sam Altman, announced. The 40-year-old OpenAI boss dropped the bombshell during a recent interview with conservative talk show host Tucker Carlson.

It’s “very reasonable for us to say in cases of young people talking about suicide, seriously, where we cannot get in touch with the parents, we do call authorities,” the techtrepreneur explained. “Now that would be a change because user privacy is really important.”

The change reportedly comes after Altman and OpenAI were sued by the family of Adam Raine, a 16-year-old California boy who committed suicide in April after allegedly being coached by the large language model. The teen’s family alleged that the chatbot gave him a “step-by-step playbook” on how to kill himself, including tying a noose and composing a suicide note, before he took his own life.


Following his untimely death, the San Francisco AI firm announced in a blog post that it would roll out new safety features allowing parents to link their accounts to their teens’, disable functions like chat history, and receive alerts should the model detect “a moment of acute distress.”

It’s not yet clear which authorities would be alerted, or what information would be provided to them, under Altman’s proposed policy. However, his announcement marks a departure from ChatGPT’s prior MO for handling suicidal users, which involved urging those displaying suicidal ideation to “call the suicide hotline,” the Guardian reported.

Under the new guardrails, the OpenAI bigwig said the company would clamp down on teens attempting to game the system by fishing for suicide tips under the guise of researching a fictional story or a medical paper.

Altman suggested that ChatGPT could be involved in more suicides than we’d like to believe, claiming that worldwide, “15,000 people a week commit suicide,” and that about “10% of the world are talking to ChatGPT.”

OpenAI reps claim that the tech’s safeguards often become less effective the longer a conversation goes on.

“That’s like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it,” Altman said. “They probably talked about it. We probably didn’t save their lives.”

He added, “Maybe we could have said something better. Maybe we could have been more proactive.”

California teen Adam Raine took his life in April 2025 after allegedly being coached by ChatGPT.

Unfortunately, Raine’s isn’t the first highly publicized case of a person taking their own life after interacting with AI.

Last year, Megan Garcia sued Character.AI over the 2024 death of her 14-year-old son, Sewell Setzer III, claiming he took his life after becoming enamored with a chatbot modeled on the “Game of Thrones” character Daenerys Targaryen.

Meanwhile, ChatGPT has been documented providing a tutorial on how to slit one’s wrists and other methods of self-harm.

AI experts attribute this unfortunate phenomenon to the fact that ChatGPT’s safeguards have limited mileage — the longer the conversation, the greater the chance of the bot going rogue.

“ChatGPT includes safeguards such as directing people to crisis helplines,” said an OpenAI spokesperson in a statement following Raine’s death. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

This glitch is particularly alarming given the prevalence of ChatGPT use among youths.

Some 72% of American teens use AI as a companion, while one in eight has turned to the technology for mental health support, according to a Common Sense Media poll.

To curb instances of unsafe AI guidance, experts have recommended requiring the tech to undergo more stringent safety testing before it becomes available to the public.

“We know that millions of teens are already turning to chatbots for mental health support, and some are encountering unsafe guidance,” Ryan K. McBain, professor of policy analysis at the RAND School of Public Policy, told the Post. “This underscores the need for proactive regulation and rigorous safety testing before these tools become deeply embedded in adolescents’ lives.”

