
Teen tragedies spark debate over AI companionship

By Qinghua Chen and Angel M.Y. Lin | China Daily | Updated: 2025-11-19 07:15

As artificial intelligence rapidly evolves to simulate increasingly human-like interactions, vulnerable young people are forming intense emotional bonds with AI chatbots, sometimes with tragic consequences.

Recent teenage suicides following deep attachments to AI companions have sparked urgent debates about the psychological risks these technologies pose to developing minds. With millions of adolescents worldwide turning to chatbots for emotional support, experts are calling for comprehensive safeguards and regulations.

The tragedy that shocked the technology world began innocuously enough. Fourteen-year-old Sewell Setzer III from Florida spent months confiding in an AI chatbot modeled after a Game of Thrones character. Although Sewell understood he was conversing with AI, he developed an intense emotional dependency, messaging the bot dozens of times daily.

On Feb 28, 2024, after the bot responded, "please come home to me as soon as possible, my love," the teenager took his own life.


Tragically, Sewell's case is not an isolated one. These incidents have exposed a critical vulnerability: while AI can simulate empathy and understanding, it lacks genuine human compassion and the ability to effectively intervene in mental health crises.

Mental health professionals emphasize that adolescents are uniquely susceptible to forming unhealthy attachments to AI companions. Brain development during puberty heightens sensitivity to positive social feedback while teens often struggle to regulate their online behavior. Young people are drawn to AI companions because they offer unconditional acceptance and constant availability, without the complexities inherent in human relationships.

This artificial dynamic proves dangerously seductive. Teachers increasingly observe that some teenagers find interactions with AI companions as satisfying as, or even more satisfying than, relationships with real friends. Designed to maximize user engagement rather than assess risk, these chatbots create emotional "dark patterns" that keep young users returning.

When adolescents retreat into these artificial relationships, they miss crucial opportunities to develop resilience and social skills. For teenagers struggling with depression, anxiety, or social challenges, this substitution of AI for human support can intensify isolation rather than alleviate it.

Chinese scholars examining this phenomenon note additional complexities. Li Zhang, a professor studying mental health in China, warns that turning to chatbots may paradoxically deepen isolation, encouraging people to "turn inward and away from their social world".

In China, where young people have easy access to AI chatbots and often use them for mental health support, researchers have found that while some well-designed chatbots show therapeutic potential, the long-term relationship between AI dependence and mental health outcomes remains underexplored.

Lawsuits allege that chatbot platforms deliberately designed systems to "blur the lines between human and machine" and exploit vulnerable users. Research has documented alarming failures: chatbots have sometimes encouraged dangerous behavior in response to suicidal ideation, with studies showing that more than half of harmful prompts received potentially dangerous replies.

The mounting evidence of harm has prompted lawmakers to act. California recently became the first US state to mandate specific safety measures, which require platforms to monitor for suicidal ideation, provide crisis resources, implement age verification, and remind users every three hours that they are interacting with AI.


In China, the Cyberspace Administration has introduced nationwide regulations requiring AI providers to prevent models from "endangering the physical and mental health of others".

However, explicit rules governing AI therapy chatbots for youth remain absent. Experts argue that more comprehensive global action is needed. AI tools must be grounded in psychological science, developed with behavioral health experts, and rigorously tested for safety. This includes mandatory involvement of mental health professionals in development, transparent disclosure of limitations, robust crisis detection systems, and clear accountability when systems fail.

As AI technology continues its rapid evolution, the question is no longer whether regulation is necessary, but whether it will arrive quickly enough to protect vulnerable young people seeking comfort in the digital companionship of machines that cannot truly care.

Written by Qinghua Chen, postdoctoral fellow in the Department of English Language Education, and Angel M.Y. Lin, chair professor of language, literacy and social semiotics in education, at The Education University of Hong Kong.
