Jun 20, 2019
Imagine, on the day before the 2020 presidential election, that someone posts a video of the Democratic candidate talking before a group of donors. The candidate admits to being ashamed to be an American, confesses that the United States is a malevolent force in the world, and promises to open borders, subordinate the country to the UN, and adopt a socialist economic system.
The video goes viral. It doesn't matter that it sounds a bit suspicious, a candidate saying such things just before the election. A very careful observer might note some discrepancies with the shadows in the background of the video or that the candidate makes some oddly uncharacteristic facial expressions.
For the average credulous viewer, however, the video reinforces some latent prejudices about Democratic Party candidates, that they never thought America was all that great to begin with and are not ultimately interested in making the country great again. And hey, didn't Mitt Romney make a similar mistake by dissing the 47 percent just before the 2012 elections?
The video spreads across social media even as the platforms try to take it down. The mainstream media publish careful proofs that the video is fabricated. It doesn't matter. Enough people in enough swing states believe the video and either switch their votes or stay home. It's not even clear where the video came from, whether it's a domestic dirty trick or a foreign agent following the Russian game plan from 2016.
Forget about October surprises. In this age of rapid dissemination of information, the most effective surprises happen in November, just before Election Day. In 2020, the election will take place on November 3. The video drops on November 2. The damage is done before damage control can even begin.
This particular surprise comes courtesy of artificial intelligence (AI). Sophisticated computer programs are now able to create "deepfake" videos that are becoming increasingly difficult to identify. In fact, as The Washington Post reports, the AI systems designed to root out such deepfake videos can't keep up with the evil geniuses who are employing other AI programs to produce them.
It's an arms race. And the bad guys are winning.
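To see why this race is so lopsided, consider how the two sides are built. Deepfake generators and deepfake detectors are, in essence, trained against each other, much like the two halves of a generative adversarial network (GAN). The following toy sketch (written in PyTorch, using made-up one-dimensional "data" rather than video, and standing in for no particular system mentioned in this article) shows the dynamic: the detector learns to separate real samples from fakes, the generator learns to produce fakes the detector accepts, and if the generator keeps improving, the detector's verdicts drift toward a coin flip.

```python
# Toy, hypothetical illustration of the generator-vs-detector arms race.
# Not a real deepfake system: the "data" here is a one-dimensional distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)

REAL_MEAN, REAL_STD = 4.0, 1.25      # the "real" data the generator tries to imitate
NOISE_DIM, BATCH, STEPS = 8, 32, 2000

# Generator: random noise in, a fake "sample" out.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 16), nn.ReLU(), nn.Linear(16, 1))
# Detector: a sample in, estimated probability that it is real out.
detector = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(STEPS):
    # Detector's turn: learn to label real samples 1 and the generator's fakes 0.
    real = torch.randn(BATCH, 1) * REAL_STD + REAL_MEAN
    fake = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = bce(detector(real), torch.ones(BATCH, 1)) + bce(detector(fake), torch.zeros(BATCH, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: produce fakes that the detector scores as real.
    fake = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = bce(detector(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# If training "works," the detector's scores on fresh fakes hover near 0.5:
# it can no longer reliably tell real from generated.
with torch.no_grad():
    print(detector(generator(torch.randn(5, NOISE_DIM))).squeeze())
```

Real deepfake tools layer face-swapping architectures and enormous training sets on top of this loop, but the underlying contest is the same, which is why every improvement in detection tends to be met by a generator trained to beat it.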
It's Already Happened Here (and There)
You've probably heard by now about the fake video of Nancy Pelosi appearing to slur her words during a speech.
On one particularly popular Facebook page, Politics Watchdog, the video received 2 million views and 45,000 shares. This video didn't require an AI program. The creator simply slowed Pelosi's speech and raised the pitch of her voice to disguise the manipulation. It wasn't much different from all those drunk Trump videos (also fake) that Jimmy Kimmel has broadcast on late night TV.
Or maybe you've seen the video of gun control activist Emma Gonzalez tearing up the Constitution (in reality, she was tearing up a shooting target). Or Jordan Peele's PSA of Barack Obama saying all sorts of odd things, concluding with "stay woke, bitches." Peele's video was meant to warn people to be skeptical of what they see on the Internet.
Elsewhere around the world, deepfakes are beginning to cause havoc.
In Gabon, the military launched an ultimately unsuccessful coup after the release of an apparently fake video of leader Ali Bongo, which suggested that the president was not in fact as healthy as his advisors claimed. In Malaysia, a video purporting to show the economic affairs minister having sex has generated considerable debate over whether it was faked or not. "If it's a deepfake, it's a very good one," a digital forensics expert has said.
So far, there's been more concern than actual product. The technology is available, but it hasn't been widely weaponized. At least when it comes to the United States, that might just be a matter of timing. Next year's presidential primaries might prove to be a testing ground. Or a troll might be keeping such a weapon in reserve for an even more opportune moment, like November 2.
The Deeper Problem
Fakes have been around for ages, from the poems of Ossian to the Protocols of the Elders of Zion.
In the age of photography, the Soviet Union notoriously airbrushed politically purged individuals out of photographs (and that, of course, was before Photoshop). In the video age, selective editing has fooled some of the people all of the time -- as in the case of Live Action's abortion clinic videos or the misleading way that Fox News edits its clips to emphasize its ideological points. "Reality" shows on TV dramatically alter the raw footage -- not to mention staging the action to begin with.
You might think that this history would make people increasingly skeptical of what they see and hear. But Americans believe in all sorts of crazy things. One in three doesn't think that climate change is happening (and about half of Republicans deny that climate change is real). About four in ten Americans are strict creationists. One in four believes that the truth of the Sandy Hook shooting has been suppressed. Nearly one in three believes that the Mueller report exonerated Donald Trump.
The ability of pollsters to find some significant percentage of Americans who believe in one crazy proposition or another prompted the following Onion headline: "Poll: One in Five Americans Believe Obama Is a Cactus."
In ordinary times, the president doesn't give an assist to fringe theories. But Donald Trump made a political name for himself with his false claims that Barack Obama was born outside the United States. As president, he has promoted the notion that the mainstream media -- CNN, The New York Times -- publishes "fake news." He has claimed that millions of illegal votes were cast in the 2016 election, that Russia didn't interfere in that election, that the National Park Service doctored photos of the inauguration crowd, that Vince Foster and Justice Antonin Scalia were murdered, that Democrats inflated the number of people killed in Hurricane Maria in Puerto Rico, and so on.
These aren't conspiracy theories, as Russell Muirhead and Nancy Rosenblum write in The Atlantic. They are simply assertions. Trump doesn't have the capability to develop an actual theory. He is not trying to explain a set of facts or data points. He is just throwing stuff out there. He is brainstorming without the benefit of a brain.
As a result of the relentless attacks on media, common sense, and reason more generally, Americans are losing the capacity to distinguish between the real and the fabricated. Case in point: nearly 63 million Americans voted for a presidential candidate in 2016 who lied repeatedly about himself, his record, and his opponent. In 2016, Americans elected a very artificial intelligence.
Adding AI
Computer scientists worry about the "singularity," the moment when artificial intelligence acquires consciousness. They are concerned that a super-intelligent entity might decide to take over the planet, enslave humans, colonize the known universe, and so on. In other words, they worry that such a creation might behave exactly like its creators.
I'm not sure why computer scientists are so anxious about a hypothetical when they should instead get riled up about the very real applications that humans are using AI for right now.
The Pentagon, for instance, developed its first AI strategy this year, saying that "it will take care to deploy the technology in accordance with the nation's values." Presumably, the Pentagon is talking about its own interpretation of the nation's values, which is far from reassuring.
Last year, the United States (and Russia) blocked a UN effort to ban "killer robots" -- weapons that, unlike today's drones, require no human intervention at all. Banning killer robots would seem to be a no-brainer. But the United States has said that it would be "premature" to regulate them. That's because the Pentagon's research arm and U.S. corporations are busy trying to establish technological hegemony by exploring ways to merge soldier and computer on the battlefield, fight the next generation of cyber-warfare, and ensure full-spectrum dominance.
Then there are the uses of AI to improve surveillance, create "predictive policing technologies," and steal your job.
Considering all these malign impacts, deepfake videos might be the least worrisome trend involving AI. Yet, in the short term, these deceptions further undermine any hope of returning to a pre-Trump moment when national conversations could be conducted on the basis of observable reality. As Jamie Bartlett writes in The Guardian, "the age of deep fakes might even succeed in making today's visceral and divided politics look like a golden age of reasonableness."
To understand this point, let's imagine a slightly different November surprise unveiled on the day before the 2020 elections.
On November 2, 2020, a video is released in which Donald Trump says that, regardless of the results of the election, he will declare himself president for life and throw anyone who disagrees into prison.
This, too, is a deepfake video created by an AI program. But Trump has said and done so many outrageous things that the public responds to this particular video with a collective shrug. #NeverTrumpers are confirmed in their assumptions about the president and vote as they intended. Trump's base dismisses the video (or secretly supports the message) and votes as planned. The few people left in the middle, inundated with four years of Trump's pronouncements, ignore the video. It's just another day in Trump's America.
AI can't be blamed for this scenario. The fault lies not in our bytes but in ourselves.
John Feffer
John Feffer is the author of the dystopian novel "Splinterlands" (2016) and the director of Foreign Policy In Focus at the Institute for Policy Studies. His novel "Frostlands" (2018) is book two of the Splinterlands trilogy; book three, "Songlands," was published in 2021. His podcast is available here.