A growing number of experts are calling for a pause on advanced artificial intelligence development and deployment.
"Lawmakers and regulators must step up and confront this threat before it's too late," the report's author warns.
Tech companies are creating and deploying artificial intelligence systems "that deceptively mimic human behavior to aggressively sell their products and services, dispense dubious medical and mental health advice, and trap people in psychologically dependent, potentially toxic relationships with machines," according to a report published Tuesday by Public Citizen.
The report—entitled Chatbots Are Not People: Designed-In Dangers of Human-Like AI Systems—asserts that "conversational artificial intelligence (AI) is among the most striking technologies to emerge from the generative AI boom kicked off by the release of OpenAI's ChatGPT. It also has the potential to be among the most dangerous."
"The subtle and not-so-subtle design choices made by the businesses behind these technologies have produced chatbots that engage well enough in fluid, spontaneous back-and-forth conversations to pose as people and to deceptively present themselves as possessing uniquely human qualities they in fact lack," the publication warns.
The report continues:
Deceptive anthropomorphic design elements... are fooling people into falsely believing AI systems possess consciousness, understanding, and sentience. These features range from AI using first-person pronouns, such as "I" and "me," to expressions of emotion and opinion, to human-like avatars with faces, limbs, and bodies. Even worse, AI can be combined with emerging and frequently undisclosed technologies—such as facial and emotional recognition software—to hypercharge its manipulative and commercial capabilities.
This, the publication says, is happening "with little or no testing, oversight, and accountability—including in places no one expects them, like the drive-thru at fast food restaurants, sometimes without any disclosure to customers."
The report also lays out a series of policy recommendations for lawmakers and regulators.
"The tech sector is recklessly rolling out AI systems masquerading as people that can hijack our attention, exploit our trust, and manipulate our emotions," Public Citizen researcher and report author Rick Claypool said in a statement. "Already Big Businesses and bad actors can't resist using these fake humans to manipulate consumers."
"Lawmakers and regulators must step up and confront this threat before it's too late," he added.
In July, the Biden administration secured voluntary risk management commitments from seven leading AI companies, a move that was welcomed by experts—who also urged lawmakers and regulators to take further action.
A report on the dangers of AI published earlier this year by Claypool and tech accountability advocate Cheyenne Hunt urged a pause in the development of generative artificial intelligence systems "until meaningful government safeguards are in place to protect the public."