When 14-year-old Sewell Setzer III died by suicide in February, his mother, Megan Garcia, blamed the popular website Character AI for his death.
In October, the grieving mother filed a lawsuit against Character Technologies, the company behind the chatbot site, naming Google as a co-defendant. Released to the public in 2022, the website, according to statistics from January, is visited daily by approximately 3.5 million users, most between 16 and 30 years old.
The website has faced controversies before, but this story put the company under heavy scrutiny. The company now enforces its disclaimers that AI bots are fictional characters more prominently and has since tightened its filter system to block sensitive content. If a user alludes to suicidal ideation in a conversation with a bot, the bot's response is automatically replaced with a message directing the user to the National Suicide Prevention Lifeline.
The debate around the effects of technology on youth behavior and mental health is not a new one. For years, parents have targeted tech companies for the chokehold video games and social media have had on youth. This raises the question: Was AI solely to blame here, or are there more factors at play?
This tragic incident raises questions about parental involvement in their children’s usage of the internet.
According to AP, Garcia said she was unaware of the nature of the app her son was using, mistaking it for a video game of sorts, and did not realize he had fallen for a chatbot. When she noticed his increasingly withdrawn behavior, she responded by occasionally taking away his phone as punishment and restricting his screen time. His family also tried putting him in therapy, but he attended only five sessions. Sewell was diagnosed with an anxiety disorder and disruptive mood dysregulation disorder, and had previously been diagnosed with mild Asperger's syndrome. Even though Sewell knew the chatbot he talked to for hours daily was not a real person, he felt more comfortable talking to it than to an actual therapist. It is also worth noting that Sewell used his stepfather's handgun to end his life.
The situation is more nuanced than "AI is evil and ruining society." Real-world problems and mental health issues lead to escapism, and that escapism can take the form of books, TV shows and now AI. Parents must be more vigilant about the kind of content their children consume on a largely unregulated internet. Limiting screen time and taking away devices is a temporary Band-Aid on inappropriate internet usage. The line between internet safety and internet censorship has always been a matter of controversy. On a personal level, however, parents can and should exert their role as authority figures to monitor what their child watches in a way that does not breach privacy. It's a difficult balance to achieve, but not an impossible one.
A point of concern is that a 14-year-old had access to a handgun. This is not the first time a minor has gotten hold of a firearm, and the responsibility lies with gun owners to make sure their weapons are inaccessible to unauthorized users.
On the flip side, a company claiming that its AI chatbots can help people with loneliness and mental health issues is manipulative and predatory. Talking to lines of code cannot replace real, physical human interaction.
Adolescence is an extremely important stage for growth and social development. In a controlled environment with no consequences, teens can have conversations without the risk of awkward moments or judgment. However, rather than escaping these uncomfortable parts of reality, gaining the skills to navigate them is far more practical and healthier in the long term. Sewell was neurodivergent, and neurodivergent individuals often struggle more with social situations and tend to hyperfixate, which appears to have been the case for him. He hyperfixated on a character bot, and this unhealthy coping mechanism became addictive.
These chatbots should never have been marketed to a young audience in the first place. With a language model that nearly perfectly imitates human speech, the line between reality and fiction is blurred, and no tiny line of disclaimer text can break that immersion. A business model that banks on emotional attachment between users and bots is an unethical one, and given the company's billion-dollar valuation, it is clearly aware of its consumer base and its psychology.
Character AI and other companionship bot sites are unlikely to disappear. If anything, they will continue to advance and become ever more lifelike. Parents and lawmakers share the responsibility of regulating how teenagers interact with this technology.