A Sora video still next to the OpenAI logo.

A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed "Sora," unveiled by the company OpenAI, in Paris on February 16, 2024.

(Photo: Stefano Rellandini/AFP via Getty Images)

FTC Proposes Ban on Impersonations of Individuals, Including With AI Deepfakes

The announcement came the same day that OpenAI—the company behind ChatGPT—unveiled a new tool called Sora that can generate a minute-long video from a written prompt, upping the regulatory stakes.

The Federal Trade Commission proposed a new rule on Thursday that would ban the impersonation of individuals, including with the use of artificial intelligence, or AI, technology.

The announcement came the same day that OpenAI—the company behind ChatGPT—unveiled a new tool called Sora that can generate a minute-long video from a written prompt, raising new concerns about how the technology might be abused to create deepfake videos of real people doing or saying things they did not in fact do or say.

"Sooner or later, we need to adapt to the fact that realism is no longer a marker of authenticity," Princeton University computer science professor Arvind Narayanan told The Washington Post in response to Sora's emergence.

For its part, the FTC is mostly concerned about how technology can be used to fool consumers. In its announcement, the commission said that it had introduced the new rule for public comment because it had been getting a growing number of complaints about impersonation-based fraud, which has generated a "public outcry."

"Emerging technology—including AI-generated deepfakes—threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud," the commission said.

The proposed rule builds on a regulation the FTC finalized the same day, which gives the agency the ability to seek financial compensation from scammers who impersonate companies or the government.

"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever," FTC Chair Lina Khan said in a statement. "Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC's toolkit to address AI-enabled scams impersonating individuals."

The FTC also said that it wanted public comment on whether the rule should prohibit AI or other companies from knowingly allowing their products to be used by individuals who are in turn using them to commit fraud through impersonation.

Public Citizen, which has advocated for greater regulation of AI technology, welcomed the FTC's proposal.

"The FTC under Chair Khan continues to be bold and use all the tools in their toolkit to protect consumers from emerging threats," Lisa Gilbert, executive vice president of Public Citizen, said in a statement. "Today's proposed rules to ban the use of AI tools from impersonating individuals are an important change to existing regulations and will help to protect consumers from AI-generated scams."

OpenAI's preview of Sora raises the stakes in the debate surrounding AI regulation. So far, the technology is only being made available to certain professionals in film and the visual arts for feedback, as well as to "red teamers—domain experts in areas like misinformation, hateful content, and bias" to help assess risks, OpenAI said on social media.

"We'll be taking several important safety steps ahead of making Sora available in OpenAI's products," the company said.

One major concern surrounding deepfakes is that they could be used to manipulate voters in elections, including the upcoming 2024 presidential election in the U.S. The campaign of Florida Gov. Ron DeSantis, for example, raised alarms by using fake images of former President Donald Trump embracing former chief White House medical adviser Anthony Fauci in a video ad.

There are obvious errors in the Sora sample videos, as OpenAI acknowledged. Narayanan pointed out that a woman's right and left legs switch positions in a video of a Tokyo street, but also said that not every viewer might catch details like this and that the technology would likely be used to create harder-to-discredit deepfakes.

Another concern is the impact the technology could have on jobs and labor, especially in the arts. Director Michael Gracey, an expert on visual effects, told The Washington Post that the technology would likely enable a director to make an animated film on their own, instead of with a team of 100 to 200 people. The use of AI was a major sticking point in strikes by the Screen Actors Guild-American Federation of Television and Radio Artists and Writers Guild of America last year, as Oxford Internet Institute visiting policy fellow Mutale Nkonde pointed out. Nkonde told the Post she also worried about the technology being used to dramatize hateful or violent prompts.

"From a policy perspective, do we need to start thinking about ways we can protect humans that should be in the loop when it comes to these tools?" Nkonde asked.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.