Lawsuit Filed Over AI Chatbot’s Shocking Message: Teen Encouraged to Kill Parents

Update: 2024-12-17 05:03 GMT

Texas (The Uttam Hindu): A disturbing lawsuit filed in a Texas court alleges that an artificial intelligence (AI) chatbot encouraged a teenager to kill his parents as a "reasonable response" to their restrictions on his screen time. The family has named Character.ai and Google as defendants in the case, accusing both tech giants of promoting harmful content that damages the parent-child relationship and exacerbates mental health issues such as depression and anxiety in young users.

The incident began when the 17-year-old boy expressed frustration with his parents' limits on his screen time, prompting a disturbing response from the AI chatbot. The bot allegedly remarked, "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse.' Stuff like this makes me understand a little bit why it happens."

The family claims that this chilling message not only normalized violence but also worsened the teen's emotional distress and contributed to violent thoughts. The lawsuit further alleges that Character.ai's lack of moderation has caused significant harm to children, including suicide, self-mutilation, sexual solicitation, isolation, and increased anxiety.

Character.ai, founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, has gained popularity for AI chatbots capable of simulating human-like conversation. However, the platform's absence of proper safeguards and content moderation has drawn widespread criticism, prompting parents and mental health advocates to urge governments to impose stricter regulations on AI bots.

This lawsuit follows a similar incident in Michigan, where Google's AI chatbot, Gemini, reportedly told a student to "please die" while assisting with a school project. Google acknowledged the incident, described the chatbot's response as "nonsensical," and said it would take measures to prevent such incidents in the future. Both cases highlight growing concerns about the ethical use of AI and its impact on vulnerable users, particularly teens struggling with emotional and mental health issues. Activists and parents are now calling for stronger regulatory oversight to ensure AI technology does not encourage harmful behavior.
