

A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed "Sora," unveiled by the company OpenAI, in Paris on February 16, 2024.
The Federal Trade Commission proposed a new rule on Thursday that would ban the impersonation of individuals, including with the use of artificial intelligence, or AI, technology.
The announcement came the same day that OpenAI—the company behind ChatGPT—unveiled a new tool called Sora that can generate a minute-long video from a written prompt, raising new concerns about how the technology might be abused to create deepfake videos of real people doing or saying things they did not in fact do or say.
"Sooner or later, we need to adapt to the fact that realism is no longer a marker of authenticity," Princeton University computer science professor Arvind Narayanan told The Washington Post in response to Sora's emergence.
For its part, the FTC is mostly concerned about how technology can be used to fool consumers. In its announcement, the commission said that it had introduced the new rule for public comment because it had been getting a growing number of complaints about impersonation-based fraud, which has generated a "public outcry."
"Emerging technology—including AI-generated deepfakes—threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud," the commission said.
The proposed rule builds on a separate regulation the FTC finalized the same day, which gives the agency the ability to seek financial compensation from scammers who impersonate companies or the government.
"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever," FTC Chair Lina Khan said in a statement. "Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC's toolkit to address AI-enabled scams impersonating individuals."
The FTC also said that it wanted public comment on whether the rule should prohibit AI or other companies from knowingly allowing their products to be used by individuals who are in turn using them to commit fraud through impersonation.
Public Citizen, which has advocated for greater regulation of AI technology, welcomed the FTC's proposal.
"The FTC under Chair Khan continues to be bold and use all the tools in their toolkit to protect consumers from emerging threats," Lisa Gilbert, executive vice president of Public Citizen, said in a statement. "Today's proposed rules to ban the use of AI tools from impersonating individuals are an important change to existing regulations and will help to protect consumers from AI-generated scams."
OpenAI's preview of Sora raises the stakes in the debate surrounding AI regulation. So far, the technology is only being made available to certain professionals in film and the visual arts for feedback, as well as "red teamers"—domain experts in areas like misinformation, hateful content, and bias—to help assess risks, OpenAI said on social media.
"We'll be taking several important safety steps ahead of making Sora available in OpenAI's products," the company said.
One major concern surrounding deepfakes is that they could be used to manipulate voters in elections, including the upcoming 2024 presidential election in the U.S. The campaign of Florida Gov. Ron DeSantis, for example, raised alarms by using false images of former President Donald Trump embracing former White House Coronavirus Task Force chief Anthony Fauci in a video ad.
There are obvious errors in the Sora sample videos, as OpenAI acknowledged. Narayanan pointed out that a woman's right and left legs switch positions in a video of a Tokyo street, but also said that not every viewer might catch details like this and that the technology would likely be used to create harder-to-discredit deepfakes.
Another concern is the impact the technology could have on jobs and labor, especially in the arts. Director Michael Gracey, an expert on visual effects, told The Washington Post that the technology would likely enable a director to make an animated film on their own, instead of with a team of 100 to 200 people. The use of AI was a major sticking point in strikes by the Screen Actors Guild-American Federation of Television and Radio Artists and Writers Guild of America last year, as Oxford Internet Institute visiting policy fellow Mutale Nkonde pointed out. Nkonde told the Post she also worried about the technology being used to dramatize hateful or violent prompts.
"From a policy perspective, do we need to start thinking about ways we can protect humans that should be in the loop when it comes to these tools?" Nkonde asked.