"AI tools have the potential to expand the NSA's surveillance dragnet more than ever before," the civil liberties group warned.
The ACLU on Thursday sued the National Security Agency in an effort to uncover how the federal body is integrating rapidly advancing artificial intelligence technology into its mass spying operations—information that the agency has kept under wraps despite the dire implications for civil liberties.
Filed in a federal court in New York, the lawsuit comes over a month after the ACLU submitted a Freedom of Information Act (FOIA) request seeking details on the kinds of AI tools the NSA is using and whether it is taking any steps to prevent large-scale privacy abuses of the kind the agency is notorious for.
The ACLU said in its new complaint that the NSA and other federal agencies have yet to release "any responsive records, notwithstanding the FOIA's requirement that agencies respond to requests within twenty working days."
"Timely disclosure of the requested records [is] vitally necessary to an informed debate about the NSA's rapid deployment of novel AI systems in its surveillance activities and the safeguards for privacy, civil rights, and civil liberties that should apply," the complaint states, asking the court for an injunction requiring the NSA to immediately process the ACLU's FOIA request.
In a blog post on Thursday, the ACLU's Shaiba Rather and Patrick Toomey noted that AI "has transformed many of the NSA's daily operations" in recent years, with the agency utilizing AI tools to "help gather information on foreign governments, augment human language processing, comb through networks for cybersecurity threats, and even monitor its own analysts as they do their jobs."
"Unfortunately, that's about all we know," the pair wrote. "As the NSA integrates AI into some of its most profound decisions, it's left us in the dark about how it uses AI and what safeguards, if any, are in place to protect everyday Americans and others around the globe whose privacy hangs in the balance."
"That's why we're suing to find out what the NSA is hiding," they added.
BREAKING: We just filed a FOIA lawsuit to find out how the NSA — one of America's biggest spy agencies — is using artificial intelligence.
These are dangerous, powerful tools and the public deserves to know how the government is using them.
— ACLU (@ACLU) April 25, 2024
The ACLU filed its lawsuit less than a week after Congress approved a massive expansion of Section 702 of the Foreign Intelligence Surveillance Act (FISA), a warrantless spying authority that the NSA has heavily abused to sweep up the communications of American journalists, activists, and lawmakers.
With their newly broadened authority, the NSA and other intelligence agencies will have the power to enlist a wide range of businesses and individuals to participate in their warrantless spying operations—a potential catastrophe for privacy rights.
Rather and Toomey warned Thursday that the growing, secretive use of artificial intelligence tools has "the potential to expand the NSA's surveillance dragnet more than ever before, expose private facts about our lives through vast data-mining activities, and automate decisions that once relied on human expertise and judgment."
"The government's lack of transparency is especially concerning given the dangers that AI systems pose for people's civil rights and civil liberties," Rather and Toomey wrote. "As we've already seen in areas like law enforcement and employment, using algorithmic systems to gather and analyze intelligence can compound privacy intrusions and perpetuate discrimination."
This marks the third straight time a federal court has dismissed a case targeting the U.S. Campaign for Palestinian Rights' support for the 2018-19 Great March of Return protests in Gaza.
Free speech defenders welcomed the U.S. Supreme Court's refusal to take up a lawsuit that outlandishly claimed a civil society group provided "material support" for terrorism by advocating for Palestinian human rights.
The Supreme Court's punting of Jewish National Fund v. U.S. Campaign for Palestinian Rights—which comes over three months into Israel's war on the Gaza Strip—marks the third consecutive time a federal court has dismissed the case, which USCPR said casts "collective activism and expression of solidarity as unlawful."
In the case's first dismissal in March 2021, a federal judge said that the plaintiffs' argument was "to say the least, not persuasive."
USCPR executive director Ahmad Abuznaid hailed Monday's move by the nation's highest court, reiterating that the group stands for "justice for all and an end to funding genocide."
"There's no lawsuit in the world that can stop us from pushing our demands for human rights," he added. "We will remain focused on opposing Israel's genocide of the Palestinian people and pursuing justice and freedom for the Palestinian people."
According to USCPR:
At issue were USCPR's fiscal sponsorship of the Boycott National Committee and expressions of support for the rights and demands of Palestinians participating in the Great Return March, when Palestinians protested to demand respect for their right to return to the villages from which Israeli settlers expelled them in 1948.
More than 230 Palestinians, including at least 46 children, were killed when Israeli forces responded to the largely peaceful demonstrations against Israel's ethnic cleansing and illegal occupation with live and "less-lethal" ammunition. Tens of thousands of Palestinians were wounded over the course of the protests, which continued for the better part of two years.
The Jewish National Fund (JNF)—which was established in 1901 to purchase land for Jewish settler colonists in Palestine, then part of the Ottoman Empire—claimed USCPR's advocacy during the demonstrations violated a provision of the Antiterrorism and Effective Death Penalty Act of 1996, a highly contentious law signed by then-President Bill Clinton which prohibits "material support" for activities the United States considers terrorism.
"The JNF's prolonged and egregious pursuit of a fishing expedition to silence and intimidate urgent advocacy for Palestinian rights has been definitively put to rest by the Supreme Court," said Diala Shamas, a senior staff attorney at the Center for Constitutional Rights, which supported the defendants.
"Now, as the government of Israel is carrying out an unfolding genocide against Palestinians in Gaza, it is more important than ever that activists be free to speak out without fear."
"The JNF's accusations were baseless, as recognized by the district court, the court of appeals, and now confirmed by the Supreme Court," Shamas added. "Now, as the government of Israel is carrying out an unfolding genocide against Palestinians in Gaza, it is more important than ever that activists be free to speak out without fear. This is an important victory, but USCPR shouldn't have been subjected to these smears in the first place."
According to Palestinian and United Nations officials, nearly 25,300 Palestinians have been killed and around 63,000 others wounded during Israel's 108-day, U.S.-backed assault on Gaza in retaliation for the Hamas-led attacks of October 7. Another 7,000 Gazans are missing and presumed dead and buried beneath rubble.
More than 1.9 million Palestinians, or over 85% of Gaza's population, have been forcibly displaced, while medical officials say babies and children in Gaza are starving to death due to Israel's self-described "complete siege" of the embattled enclave.
Israel's conduct in the war is the subject of a South African-led genocide case before the International Court of Justice.
Artificial intelligence could supercharge threats to civil liberties, civil rights, and privacy.
Your friends aren’t the only ones seeing your tweets on social media. The FBI and the Department of Homeland Security (DHS), as well as police departments around the country, are reviewing and analyzing people’s online activity. These programs are only likely to grow as generative artificial intelligence (AI) promises to remake our online world with better, faster, and more accurate analyses of data, as well as the ability to generate humanlike text, video, and audio.
While social media can help law enforcement investigate crimes, many of these monitoring efforts reach far more broadly even before bringing AI into the mix. Programs aimed at “situational awareness,” like those run by many parts of DHS or police departments preparing for public events, tend to have few safeguards. They often veer into monitoring social and political movements, particularly those involving minority communities. For instance, DHS’s National Operations Center issued multiple bulletins on the 2020 racial justice protests. The Boston Police Department tracked posts by Black Lives Matter protesters and labeled online speech related to Muslim religious and cultural practices as “extremist” without any evidence of violence or terrorism. Nor does law enforcement limit itself to scanning public posts. The Memphis police, for example, created a fake Facebook profile to befriend and gather information from Black Lives Matter activists.
Internal government assessments cast serious doubt on the usefulness of broad social media monitoring. In 2021, after extensive reports of the department’s overreach in monitoring racial justice protesters, the DHS General Counsel’s office reviewed the activities of agents collecting social media and other open-source information to try to identify emerging threats. It found that agents gathered material on “a broad range of general threats,” ultimately yielding “information of limited value.” The Biden administration ordered a review of the Trump-era policy requiring nearly all visa applicants to submit their social media handles to the State Department, affecting some 15 million people annually, to help in immigration vetting — a practice that the Brennan Center has sought to challenge. While the review’s results have not been made public, intelligence officials charged with conducting it concluded that collecting social media handles added “no value” to the screening process. This is consistent with earlier findings. According to a 2016 brief prepared by the Department of Homeland Security for the incoming administration, in similar programs to vet refugees, account information “did not yield clear, articulable links to national security concerns, even for those applicants who were found to pose a potential national security threat based on other security screening results.” The following year, the DHS Inspector General released an audit of these programs, finding that the department had not measured their effectiveness, rendering them an insufficient basis for future initiatives. Despite failing to prove that monitoring programs actually bolster national security, the government continues to collect, use, and retain social media data.
The pervasiveness — and problems — of social media surveillance are almost certain to be exacerbated by new AI tools, including generative models, which agencies are racing to adopt.
Generative AI will enable law enforcement to more easily use covert accounts. In the physical world, undercover informants have long raised issues, especially when they have been used to trawl communities rather than target specific criminal activities. Online undercover accounts are far easier and cheaper to create and can be used to trick people into interacting and inadvertently sharing personal information such as the names of their friends and associations. New AI tools could generate fake accounts with a sufficient range of interests and connections to look real and autonomously interact with people online, saving officer time and effort. This will supercharge the problem of effortless surveillance, which the Supreme Court has recognized may “alter the relationship between citizen and government in a way that is inimical to democratic society.” These concerns are compounded by the fact that few police departments impose restrictions on undercover account use, with many allowing officers to monitor people online without a clear rationale, documentation, or supervision. The same is true for federal agencies such as DHS.
Currently, despite the hype generated by their purveyors, social media surveillance tools seem to operate on a relatively rudimentary basis. While the companies that sell them tend to be secretive about how they work, the Brennan Center’s research suggests serious shortcomings. Some popular tools do not use scientific methods for identifying relevant datasets, much less test them for bias. They often use keywords and phrases to identify potential threats, which strips out the context necessary to understand whether something is in fact a threat and not, for example, someone discussing a video game. It is possible that large language models, such as ChatGPT, will advance this capability — or at least be perceived and sold as doing so — and incentivize greater use of these tools.
At the same time, any such improvements may be offset by the fact that AI is widely expected to further pollute the unreliable information environment, exacerbating problems of provenance and reliability. Social media is already suffused with inaccurate and misleading information. According to a 2018 MIT study, false political news is 70 percent more likely to be re-tweeted than truthful content on X (formerly Twitter). Bots and fake accounts — which can already mimic human behavior — are also a challenge; during the COVID-19 pandemic, bots were found to proliferate misinformation about the disease, and could just as easily spread fake information generated by AI, deceiving platform users. Generative AI makes creating false news and fake identities easier, negatively contributing to an already-polluted online information environment. Moreover, AI has a tendency to “hallucinate,” or make up information — a seemingly unfixable problem that is ubiquitous among generative AI systems.
Generative AI also exacerbates longstanding problems. The promise of better analysis does nothing to ease First Amendment issues raised by social media monitoring. Bias in algorithmic tools has long been a concern, ranging from predictive policing programs that treat Black people as suspect to content moderation practices disfavoring Muslim speech. For example, Instagram users recently found that the label “terrorist” was added to their English bios if their Arabic bios included the word “Palestinian,” the Palestinian flag emoji, and the common Arabic phrase “praise be to god.”
The need to address these risks is front and center in President Biden’s AI executive order and a draft memorandum from the Office of Management and Budget that sets out standards for federal agency use of AI. The OMB memo identifies social media monitoring as a use of AI that impacts individuals’ rights, and thus requires agencies using this technology to follow critical rules for transparency, testing efficacy and mitigating bias and other risks. Unfortunately, these sensible rules do not apply to national security and intelligence uses and do not affect police departments. But they should.