The Progressive NewsWire

A project of Common Dreams

For Immediate Release
Contact: David Rosen, drosen@citizen.org

Human-Like A.I. Is Deceptive and Dangerous

Tech companies are developing and deploying artificial intelligence (A.I.) systems that deceptively mimic human behavior to aggressively sell their products and services, dispense dubious medical and mental health advice, and trap people in psychologically dependent, potentially toxic relationships with machines, according to a new report from Public Citizen released today. A.I. that mimics human behavior poses a wide array of unprecedented risks that require immediate action from regulators as well as new laws and regulations, the report found.

“The tech sector is recklessly rolling out A.I. systems masquerading as people that can hijack our attention, exploit our trust, and manipulate our emotions,” said Rick Claypool, a researcher for Public Citizen and author of the report. “Already, big businesses and bad actors can’t resist using these fake humans to manipulate consumers. Lawmakers and regulators must step up and confront this threat before it’s too late.”

Deceptive anthropomorphic design elements highlighted in the report are fooling people into falsely believing A.I. systems possess consciousness, understanding, and sentience. These features range from A.I. using first-person pronouns, such as “I” and “me,” to expressions of emotion and opinion, to human-like avatars with faces, limbs, and bodies. Even worse, A.I. can be combined with emerging and frequently undisclosed technologies – such as facial and emotion recognition software – to supercharge its manipulative and commercial capabilities.

Companies are unleashing anthropomorphic A.I. on audiences of millions or billions of users with little or no testing, oversight, or accountability – including in places no one expects to encounter it, like the drive-thru at fast-food restaurants, sometimes without any disclosure to customers.

A.I. comes with potentially dangerous built-in advantages that put users at risk. These include an exaggerated sense of its trustworthiness and authoritativeness, its ability to extend user attention and engagement, its collection of sensitive personal information that can be exploited to influence the user, and its ability to psychologically entangle users by emulating emotions.

The many studies cited in the report – spanning marketing, technology, psychology, and legal research – show that when A.I. possesses anthropomorphic traits, it compounds all of these advantages, which businesses and bad actors are already exploiting.

These design features can be removed or minimized to discourage users from conflating A.I. systems with living, breathing people. For example, an A.I. chatbot can refer to itself in the third person (“this model”) rather than the first person (“I”). Instead, tech companies are deliberately maximizing all of these features to further their business goals and boost profits.

The report concludes with policy recommendations to address the dangers and risks, including:

  1. Banning counterfeit humans in commercial transactions, both online and offline;
  2. Restricting and regulating deceptive anthropomorphizing techniques;
  3. Banning anthropomorphic A.I. from marketing to, targeting, or collecting data on kids;
  4. Banning A.I. from exploiting psychological vulnerabilities and data on users;
  5. Requiring prominent, robust, repeated reminders, disclaimers, and watermarks indicating that consumers are engaging with an A.I., and requiring A.I. systems deployed for persuasive purposes to disclose their aims;
  6. Requiring monitoring and reporting of aggregate usage information;
  7. Establishing high data security standards;
  8. Requiring rigorous testing to meet strict safety standards;
  9. Applying special scrutiny and testing to all health-related A.I. systems – especially those intended for use by vulnerable populations, including children, older people, racial and ethnic minorities, psychologically vulnerable individuals, and LGBTQ+ individuals; and
  10. Imposing severe penalties on lawbreakers, including banning them from developing and deploying A.I. systems.



Public Citizen is a nonprofit consumer advocacy organization that champions the public interest in the halls of power. We defend democracy, resist corporate power, and work to ensure that government works for the people – not for big corporations. Founded in 1971, we now have 500,000 members and supporters throughout the country.

(202) 588-1000