(Photo: Justin Sullivan/Getty Images)
What's More Frightening Than Kari Lake? A Deepfake Kari Lake
When it comes to elections, it is becoming increasingly clear that the biggest new threat in 2024 comes from the impact of generative AI
There’s something scary online involving Kari Lake — and it’s not what you might expect.
The nonprofit journalism site Arizona Agenda posted a minute-long video of the TV news anchor turned GOP candidate praising the site’s work . . . and then, halfway through, revealing that it is all a deepfake. Watch it here — especially on a phone, where the glitches are less noticeable. This is new, and unnerving, and ominous.
It is now less than two years since ChatGPT was released, and the world began to debate how much change advances in generative artificial intelligence would bring. Are they like Gutenberg’s Bible, made possible by the new technology of the printing press? Or are they yet another techno-fad, more hype than impact? Over the coming years, all this will unfold with massive repercussions for our work, healthcare, and lives. (A guarantee: This column is written by a live person, and always will be!)
When it comes to elections, it is becoming increasingly clear that the biggest new threat in 2024 comes from the impact of generative AI on the information ecosystem, including through deepfakes like the one starring “Kari Lake.” (The real Lake, meanwhile, sent a cease-and-desist letter to the website.) That risk is especially high when it comes to audio, which can be easier to manipulate than visual imagery — and harder to detect as fake.
Last year the Slovak presidential election may have been tipped by fake audio of a leading candidate that went viral days before the vote. In New Hampshire, bogus robocalls from “Joe Biden” urged voters to sit out the primary. In Chicago’s mayoral election, a fake tape purported to feature a candidate musing, “In my day, no one would bat an eye” if a police officer killed 17 or 18 people. The risk of doctored audio and video makes it harder to know what is real. Donald Trump has taken to decrying any video that makes him look bad as fake.
At the Brennan Center, we worry especially about how all this might affect the nuts and bolts of election administration. Recently we held a “tabletop exercise” with Arizona Secretary of State Adrian Fontes, one of the country’s most effective public servants, and other election officials in the state. It featured a similar fake video starring Fontes, created for educational and training purposes. The verisimilitude was so unnerving that the recording was quickly locked away.
Here’s a scenario we tested out: You’re a local election official. It’s a hectic Election Day and you get a call from the secretary of state. “There’s been a court order,” she says urgently. “You need to shut down an hour early.” When you get a call like that, take a breath and call the secretary of state’s office back. You’ll find out quickly that the call was actually a deepfake. That’s the kind of simple process that could catch the fraud before it takes root.
Government can take other steps, too. We’ve laid out many of them in a series of essays with Georgetown’s Center for Security and Emerging Technology. Often, officials need to take steps that would already make sense to protect against cyberthreats and other challenges.
There is more that needs to be done. One good step is to watermark AI-generated content, labeling it to make clear that AI was used to create or alter an image. Meta (aka Facebook) proudly unveiled such a system to label all content created with AI tools from major vendors such as OpenAI, Google, and Adobe. But my colleague Larry Norden, working with a software expert, showed how easy it is to remove the watermarks from these images and circumvent Meta’s labeling scheme. It took less than 30 seconds.
So government will need to step up. Sen. Amy Klobuchar (D-Minn.), a leader on election security, is working with Sen. Lisa Murkowski (R-Alaska) and others to craft bills requiring that campaign ads making substantial use of generative AI be labeled. That requires finesse, since courts will be wary of First Amendment issues. But it can be done. Such reform can’t happen fast enough.
After all, as the deepfake Kari Lake put it so well, “By the time the November election rolls around, you’ll hardly be able to tell the difference between reality and artificial intelligence.” That’s . . . intelligent.