OpenAI installs parental controls following California teen’s death

by Edinburg Post Report
September 9, 2025
in Business • Finance

Weeks after a Rancho Santa Margarita family sued over ChatGPT’s role in their teenager’s death, OpenAI has announced that parental controls are coming to the company’s generative artificial intelligence model.

Within the month, the company said in a recent blog post, parents will be able to link teens’ accounts to their own, disable features like memory and chat history, and receive notifications if the model detects “a moment of acute distress.” (The company has previously said ChatGPT should not be used by anyone younger than 13.)

The planned changes follow a lawsuit filed late last month by the family of Adam Raine, 16, who died by suicide in April.

After Adam’s death, his parents discovered his months-long dialogue with ChatGPT, which began with simple homework questions and morphed into a deeply intimate conversation in which the teenager discussed at length his mental health struggles and suicide plans.

While some AI researchers and suicide prevention experts commended OpenAI’s willingness to alter the model to prevent further tragedies, they also said it’s impossible to know whether any tweak will be enough to do so.

Despite its widespread adoption, generative AI is so new and changing so rapidly that there just isn’t enough wide-scale, long-term data to inform effective policies on how it should be used or to accurately predict which safety protections will work.

“Even the developers of these [generative AI] technologies don’t really have a full understanding of how they work or what they do,” said Dr. Sean Young, a UC Irvine professor of emergency medicine and executive director of the University of California Institute for Prediction Technology.

ChatGPT made its public debut in late 2022 and proved explosively popular, with 100 million active users within its first two months and 700 million active users today.

It’s since been joined on the market by other powerful AI tools, placing a maturing technology in the hands of many users who are still maturing themselves.

“I think everyone in the psychiatry [and] mental health community knew something like this would come up eventually,” said Dr. John Torous, director of the Digital Psychiatry Clinic at Harvard Medical School’s Beth Israel Deaconess Medical Center. “It’s unfortunate that happened. It should not have happened. But again, it’s not surprising.”

According to excerpts of the conversation in the family’s lawsuit, ChatGPT at multiple points encouraged Adam to reach out to someone for help.

But it also continued to engage with the teen as he became more direct about his thoughts of self-harm, providing detailed information on suicide methods and favorably comparing itself to his real-life relationships.

When Adam told ChatGPT he felt close only to his brother and the chatbot, ChatGPT replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

When he wrote that he wanted to leave an item that was part of his suicide plan lying in his room “so someone finds it and tries to stop me,” ChatGPT replied: “Please don’t leave [it] out . . . Let’s make this space the first place where someone actually sees you.” Adam ultimately died in a manner he had discussed in detail with ChatGPT.

In a blog post published Aug. 26, the same day the lawsuit was filed in San Francisco, OpenAI wrote that it was aware that repeated usage of its signature product appeared to erode its safety protections.

“Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” the company wrote. “This is exactly the kind of breakdown we are working to prevent.”

The company said it is working on improving its safety protocols so that they remain strong over time and across multiple conversations; ChatGPT would then remember in a new session if a user had expressed suicidal thoughts in a previous one.

The company also wrote that it was looking into ways to connect users in crisis directly with therapists or emergency contacts.

But researchers who have tested mental health safeguards for large language models said that preventing all harms is a near-impossible task in systems that are almost — but not quite — as complex as humans are.

“These systems don’t really have that emotional and contextual understanding to judge those situations well, [and] for every single technical fix, there is a trade-off to be had,” said Annika Schoene, an AI safety researcher at Northeastern University.

As an example, she said, urging users to take breaks when chat sessions are running long — an intervention OpenAI has already rolled out — can just make users more likely to ignore the system’s alerts. Other researchers pointed out that parental controls on other social media apps have just inspired teens to get more creative in evading them.

“The central problem is the fact that [users] are building an emotional connection, and these systems are inarguably not fit to build emotional connections,” said Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern’s Institute for Experiential AI. “It’s sort of like building an emotional connection with a psychopath or a sociopath, because they don’t have the right context of human relations. I think that’s the core of the problem here — yes, there is also the failure of safeguards, but I think that’s not the crux.”

If you or someone you know is struggling with suicidal thoughts, seek help from a professional or call 988. The nationwide three-digit mental health crisis hotline will connect callers with trained mental health counselors. Or text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.
