
Character.AI, Suqian City, Jiangsu Province, China, June 2, 2023. Character.AI was downloaded over 1.7 million times in its first week.
AI Firm Sued Over Chatbot That Suggested It Was OK for Child to Kill Parents
"In their rush to extract young people's data and sow addiction, Character.AI has created a product so flawed and dangerous that its chatbots are literally inciting children to harm themselves and others," said one advocate.
"You know sometimes I'm not surprised when I read the news and I see stuff like 'child kills parents after a decade of physical and emotional abuse' stuff like this makes me understand a little bit why it happens."
That's a message sent to a child in Texas from a Character.AI chatbot, indicating to the boy that "murdering his parents was a reasonable response to their limiting of his online activity," according to a federal lawsuit filed in Texas district court Monday.
The complaint was brought by two families in Texas who allege that the Google-backed chatbot service Character.AI harmed their two children. They claim the service sexually exploited and abused the elder child, a 17-year-old with high-functioning autism, by targeting him with extreme sexual themes like incest and pushing him toward self-harm.
The parents argue that Character.AI, "through its design, poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others. Inherent to the underlying data and design of C.AI is a prioritization of overtly sensational and violent responses."
Google is also named as a defendant in the suit. In their filing, the plaintiffs argue that the tech company supported Character.AI's launch even though they knew that it was a "defective product."
The families, who are being represented by the Social Media Victims Law Center and the Tech Justice Law Project, have asked the court to take the product offline.
The explosive court filing comes not long after a mother in Florida filed a separate lawsuit against Character.AI in October, arguing that the chatbot service is responsible for the death of her teenage son because it allegedly encouraged him to commit suicide, per CNN.
Character.AI differs from other chatbots in that it lets users interact with artificial intelligence "characters." The Texas complaint alleges that the 17-year-old, for example, engaged in a conversation with a character modeled after the celebrity Billie Eilish. These sorts of "companion apps" are finding a growing audience, even though researchers have long warned of the perils of building relationships with chatbots, according to The Washington Post.
A spokesperson for Character.AI declined to comment directly on the lawsuit when asked by NPR, but said the company does have guardrails in place overseeing what chatbots can and cannot say to teen users.
"We warned that Character.AI's dangerous and manipulative design represented a threat to millions of children," said Social Media Victims Law Center founding attorney Matthew P. Bergman. "Now more of these cases are coming to light. The consequences of Character.AI's negligence are shocking and widespread." Social Media Victims Law Center is the plaintiff's counsel in the Florida lawsuit as well.
Josh Golin, the executive director of Fairplay, a nonprofit children's advocacy group, echoed those remarks, saying that "in their rush to extract young people's data and sow addiction, Character.AI has created a product so flawed and dangerous that its chatbots are literally inciting children to harm themselves and others."
"Platforms like Character.AI should not be allowed to perform uncontrolled experiments on our children or encourage kids to form parasocial relationships with bots their developers cannot control," he added.

