Edinburg Post
California lawmakers tackle potential dangers of AI chatbots after parents raise safety concerns

by Edinburg Post Report
April 9, 2025

When her 14-year-old son took his own life after interacting with artificial intelligence chatbots, Megan Garcia turned her grief into action.

Last year, the Florida mom sued Character.AI, a platform where people can create and interact with digital characters that mimic real and fictional people.

Garcia alleged in a federal lawsuit that the platform’s chatbots harmed the mental health of her son Sewell Setzer III and the Menlo Park, Calif., company failed to notify her or offer help when he expressed suicidal thoughts to these virtual characters.

Now Garcia is backing state legislation that aims to safeguard young people from “companion” chatbots she says “are designed to engage vulnerable users in inappropriate romantic and sexual conversations” and “encourage self-harm.”

“Over time, we will need a comprehensive regulatory framework to address all the harms, but right now, I am grateful that California is at the forefront of laying this ground,” Garcia said at a news conference on Tuesday ahead of a hearing in Sacramento to review the bill.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.


As companies move fast to advance chatbots, parents, lawmakers and child advocacy groups are worried there are not enough safeguards in place to protect young people from technology’s potential dangers.

To address the problem, state lawmakers introduced a bill that would require operators of companion chatbot platforms to remind users at least every three hours that the virtual characters aren’t human. Platforms would also need to take other steps such as implementing a protocol for addressing suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources.

Under Senate Bill 243, operators of these platforms would also have to report the number of times a companion chatbot brought up suicidal ideation or actions with a user, along with meeting other requirements.

The legislation, which cleared the Senate Judiciary Committee, is just one way state lawmakers are trying to tackle potential risks posed by artificial intelligence as chatbots surge in popularity among young people. More than 20 million people use Character.AI every month and users have created millions of chatbots.

Lawmakers say the bill could become a national model for AI protections. Its supporters include the children’s advocacy group Common Sense Media and the American Academy of Pediatrics, California.

“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of the products. The stakes are high,” said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, at the event attended by Garcia.

But tech industry and business groups including TechNet and the California Chamber of Commerce oppose the legislation, telling lawmakers that it would impose “unnecessary and burdensome requirements on general purpose AI models.” The Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, says the legislation raises 1st Amendment issues.

“The government likely has a compelling interest in preventing suicide. But this regulation is not narrowly tailored or precise,” EFF wrote to lawmakers.

Character.AI has also surfaced 1st Amendment concerns about Garcia’s lawsuit. Its attorneys asked a federal court in January to dismiss the case, stating that a finding in the parents’ favor would violate users’ constitutional right to free speech.

Chelsea Harrison, a spokeswoman for Character.AI, said in an email the company takes user safety seriously and its goal is to provide “a space that is engaging and safe.”

“We are always working toward achieving that balance, as are many companies using AI across the industry. We welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” she said in a statement.

She cited new safety features, including a tool that allows parents to see how much time their teens are spending on the platform. The company also cited its efforts to moderate potentially harmful content and direct certain users to the National Suicide and Crisis Lifeline.

Social media companies including Snap and Facebook’s parent company Meta have also released AI chatbots within their apps to compete with OpenAI’s ChatGPT, which people use to generate text and images. While some users have used ChatGPT to get advice or complete work, some have also turned to these chatbots to play the role of a virtual boyfriend or friend.

Lawmakers are also grappling with how to define “companion chatbot.” Certain apps such as Replika and Kindroid market their services as AI companions or digital friends. The bill doesn’t apply to chatbots designed for customer service.

Padilla said during the press conference that the legislation focuses on product design that is “inherently dangerous” and is meant to protect minors.


© 2025 Edinburg Post or its affiliated companies.
