"The Israeli Lavender system, supported by artificial intelligence, identifies Palestinians by tracking their communications via WhatsApp or the groups they join," said a Palestinian digital rights group.
The Palestinian digital rights group Sada Social on Saturday called for an investigation into Israel's alleged use of WhatsApp user data to target Palestinians with its AI system, Lavender.
The group, which is affiliated with the Al Jazeera Media Institute and Access Now, accused Meta, which owns WhatsApp, of fueling "the 'Lavender' artificial intelligence system used by the Israeli military to kill Palestinian individuals within the Gaza enclave."
As Common Dreams reported in April, the Israel Defense Forces has relied on AI systems including Lavender to target people Israel believes to be Hamas members.
At +972 Magazine, Israeli journalist Yuval Abraham wrote that a current commander of an elite Israeli intelligence unit pushed for the use of AI to choose targets in Gaza. The commander wrote in a guidebook on creating such a system that "hundreds and thousands" of features can be used to select targets, "such as being in a WhatsApp group with a known militant, changing cell phone every few months, and changing addresses frequently."
Sada Social asserted that it had found the Lavender system uses WhatsApp data to select targets.
"The reports monitored by the Sada Social Center indicate that one of the inputs to the 'Lavender' system relies on data collected from WhatsApp groups containing names of Palestinians or activists who are wanted by 'Israel,'" said the group in a press release. "The Israeli Lavender system, supported by artificial intelligence, identifies Palestinians by tracking their communications via WhatsApp or the groups they join."
The mention of Israel's use of WhatsApp data in Abraham's reporting also caught the attention last month of Paul Biggar, founder of Tech for Palestine.
"There's a lot wrong with this—I'm in plenty of WhatsApp groups with strangers, neighbors, and in the carnage in Gaza you bet people are making groups to connect," wrote Biggar. "But the part I want to focus on is whether they get this information from Meta. Meta has been promoting WhatsApp as a 'private' social network, including 'end-to-end' encryption of messages."
"Providing this data as input for Lavender undermines their claim that WhatsApp is a private messaging app," he wrote. "It is beyond obscene and makes Meta complicit in Israel's killings of 'pre-crime' targets and their families, in violation of international humanitarian law and Meta's publicly stated commitment to human rights. No social network should be providing this sort of information about its users to countries engaging in 'pre-crime.'"
Others have pointed out that Israel may have acquired WhatsApp data through means other than a leak by Meta.
Journalist Marc Owen Jones said the question of "Meta's potential role in this is important," but noted that informants, captured devices, and spyware could be used by Israel to gain Palestinian users' WhatsApp data.
Bahraini activist Esra'a Al Shafei, founder of Majal.org, told the Middle East Monitor that the reports that WhatsApp user data has been used by the IDF's AI machine demonstrate why privacy advocates warn against the collection and storage of metadata, "particularly for apps like WhatsApp, which falsely advertise their product as fully private."
"Even though WhatsApp is end-to-end encrypted, and claims to not have any backdoors to any government, the metadata alone is sufficient to expose detailed information about users, especially if the user's phone number is attached to other Meta products and related activities," Al Shafei said. "This is why the IDF could plausibly utilize metadata to track and locate WhatsApp users."
While Meta and WhatsApp may not necessarily be collaborating with Israel, she said, "by the very act of collecting this information, they're making themselves vulnerable to abuse and intrusive external surveillance."
In turn, "by using WhatsApp, people are risking their lives," she added.
A WhatsApp spokesperson told Anadolu last month that "WhatsApp has no backdoors and we do not provide bulk information to any government," adding that "Meta has provided consistent transparency reports and those include the limited circumstances when WhatsApp information has been requested."
Al Shafei said Meta must "fully investigate" how WhatsApp's metadata may be used "to track, harm, or kill its users throughout Palestine."
"WhatsApp is used by billions of people and these users have a right to know what the dangers are in using the app," she said, "or what WhatsApp and Meta will do to proactively protect them from such misuse."
"The spread of misinformation and targeted intimidation of Black voters will continue without the proper safeguards," said Color of Change.
Racial justice defenders on Monday renewed calls for banning artificial intelligence in political advertisements after backers of former U.S. President Donald Trump published fake AI-generated images of the presumptive Republican nominee with Black "supporters."
The BBC highlighted numerous deepfakes, including one created by right-wing Florida radio host Mark Kaye showing a smiling Trump embracing happy Black women. On closer inspection, missing or malformed fingers and unintelligible lettering on attire expose the images as fake.
"I'm not claiming it's accurate," Kaye told the BBC. "I'm not a photojournalist. I'm not out there taking pictures of what's really happening. I'm a storyteller."
"If anybody's voting one way or another because of one photo they see on a Facebook page, that's a problem with that person, not with the post itself," Kaye added.
Another deepfake shows Trump on a porch surrounded by young Black men. The image earned a "community note" on X, the Elon Musk-owned social media platform formerly known as Twitter, identifying it as AI-generated. The owner of the account that published the image—which has been viewed more than 1.4 million times according to X—included the deceptive caption, "What do you think about Trump stopping his motorcade to take pictures with young men that waved him down?"
When asked about his image by the BBC, @MAGAShaggy1958 said his posts "have attracted thousands of wonderful kind-hearted Christian followers."
Responding to the new reporting, the racial justice group Color of Change led calls to ban AI in political ads.
"The spread of misinformation and targeted intimidation of Black voters will continue without the proper safeguards," the group said on social media.
"As the 2024 election approaches, Big Tech companies like Google and Meta are poised to once again play a pivotal role in the spread of misinformation meant to disenfranchise Black voters and justify violence in the name of right-wing candidates," Color of Change said in a petition urging Big Tech to "stop amplifying election lies."
"During the 2016 and 2020 presidential election cycles, social media platforms such as Twitter, Facebook, YouTube, and others consistently ignored the warning signs that they were helping to undermine our democracy," the group continued. "This dangerous trend doesn't seem to be changing."
"Despite their claims that they've learned their lesson and are shoring up protections against misinformation ahead of the 2024 election cycle, large tech companies are cutting key staff that moderate content and removing election protections from their policies that are supposed to safeguard platform users from misinformation," the petition warns.
Last September, Sens. Amy Klobuchar (D-Minn.), Chris Coons (D-Del.), Josh Hawley (R-Mo.), and Susan Collins (R-Maine) introduced bipartisan legislation to prohibit the use of AI-generated content that falsely depicts candidates in political ads.
In February, the Federal Communications Commission responded to AI-generated robocalls featuring President Joe Biden's fake voice telling New Hampshire voters to not vote in their state's primary election by prohibiting the use of voice cloning technology to create automated calls.
The Federal Election Commission, however, has been accused by advocacy groups including Public Citizen of foot-dragging in response to public demands to regulate deepfakes. Earlier this year, FEC Chair Sean Cooksey said the agency would "resolve the AI rulemaking by early summer"—after many state primaries are over.
At least 13 states have passed laws governing the use of AI in political ads, while tech companies have responded in various ways to the rise of deepfakes. Last September, Google announced that it would require the prominent disclosure of political ads using AI. Meta, the parent company of Facebook and Instagram, has banned political campaigns from using its generative AI tools. OpenAI, which makes the popular ChatGPT chatbot, said earlier this year that it won't let users create content for political campaigns and will embed watermarks on art made with its DALL-E image generator.
Cliff Albright, co-founder of the Black Voters Matter campaign, told the BBC that "there have been documented attempts to target disinformation to Black communities again, especially younger Black voters."
Albright said the deepfakes serve a "very strategic narrative" being pushed by a wide range of right-wing voices from the Trump campaign to social media accounts in a bid to woo African Americans.
Trump's support among Black voters increased from just 8% in 2016 to a still-meager 12% in 2020. Conversely, a recent New York Times/Siena College survey of voters in six key swing states found that Biden's support among African American voters has plummeted from 92% during the last election cycle to 71% today, while 22% of Black respondents said they would vote for Trump this year.
Trump's attempts to win Black votes have ranged from awkward to cringeworthy, including hawking $400 golden sneakers and suggesting his mugshot and 91 criminal indictments appeal to African Americans.
It's now the only option that makes any sense.
In the fall of 2021, Facebook whistleblower Frances Haugen shocked the world by exposing just how much harm the company has inflicted on young users—and the fact that the company knew every last detail about it. After years of calls across the aisle to rein in Big Tech, the revelations in the “Facebook Files” felt like the perfect catalyst to get the ball rolling on tech reform in Congress. Haugen’s bravery came just a year after the FTC launched its 2020 antitrust suit against Facebook, and coincided with a historic push in Congress to pass tech antitrust legislation. In an environment like this, it’s easy to see why Sen. Richard Blumenthal (D-CT) declared that ‘this time feels distinctly different’: that the time for Congress to clamp down on Big Tech had finally come.
Unfortunately, two years later, it’s self-evident that the company now known as Meta is as harmful and unaccountable as ever. Facebook’s status as a modern-day monopoly allowed the company to withstand public outcry, and the tech giants’ all-out war against antitrust legislation in 2022 killed the bills in the 117th Congress. Fantastical notions that markets would force Facebook to change, popular during Meta’s stock slump in 2022, look even more absurd amid Meta’s stock turnaround this year. Critical reporting on Meta’s harmful influence, such as The Wall Street Journal’s horrifying exposé this summer on Instagram’s role in enabling pedophiles, has received scant attention compared to Haugen’s revelations.
As Haugen acknowledged in a recent op-ed, Meta and the other tech giants are still wielding their lobbying might to crush accountability measures across the country. In other words, even as Meta feigns support for accountability measures, ‘self-regulation’ won’t and cannot stop the company’s corrosive impact. To stop Facebook from exploiting children, stealing users’ data, and destroying global democracy, Congress needs to cut to the central issue at hand: Facebook’s monopolistic dominance, which enables the company to commit harm with impunity.
Over the past year, lawmakers looking to rein in Big Tech have largely set their sights on specific policy areas, be it child online safety or artificial intelligence (AI). To be sure, there’s no doubt that these issues and other specific tech policy matters deserve proper attention in their own right. But it’s crucial that the heart of the problem—the fact that Meta and other Big Tech companies’ monopoly power gives them free rein to continue their destructive behavior—is not lost on Congress.
And make no mistake: Without action in Congress, Meta and the other tech giants’ ongoing war on accountability will continue. Two years ago, Meta demanded that FTC chair Lina Khan, a noted Big Tech critic, recuse herself from scrutiny of the company over frivolous conflict of interest accusations. Armed with virtually unlimited financial resources at their disposal, Meta and its team of lawyers have only intensified their war against the administrative state.
Amid a separate legal battle with the FTC over child privacy, Meta has gone as far as to target the FTC’s very constitutional authority. At a time when right-wing activists are working to weaponize the justice system in favor of corporate interests, this development should be welcomed with grave concern. As Sen. Elizabeth Warren (D-MA) noted, these ludicrous demands from Meta are akin to “Big Tobacco trying to gut the FDA because they didn't want to be held accountable for hooking kids onto nicotine.”
Contrary to naysayers, the movement to break up Big Tech monopolies is anything but dead. Rapid developments in AI over the past year have raised widespread concerns that Big Tech giants will leverage control of the technologies to entrench their monopolies. As Sen. Amy Klobuchar (D-MN), a top proponent of the tech antitrust bills last session, recently acknowledged, the rise of AI makes the cause of reining in Big Tech perhaps more relevant than ever. Recent polling has affirmed that Americans are still eager to rein in tech giants’ monopoly power, with a historic September survey finding strong support for AI anti-monopoly measures.
Between the Meta’s aggressive push into AI to its apparent hands-off approach to dangerous deepfake content ahead of the 2024 election, it’s more important than ever to rein in Facebook. Lawmakers should stand with the FTC as it pursues its historic antitrust case against Meta, and vigorously fight any efforts by tech-friendly members of Congress to gut the agency’s funding. Moreover, Congress should finish the work it started last session by passing the reintroduced American Innovation and Choice Act (AICO) to clamp down on Meta and other tech giants’ monopolistic abuses.