"This is the facial recognition technology nightmare scenario that we have been worried about," said one civil liberties campaigner.
Amid a Washington Post investigation and pushback from civil liberties defenders, New Orleans police recently paused their sweeping and apparently unlawful use, conducted without public oversight, of a private network of over 200 surveillance cameras and facial recognition technology to track and arrest criminal suspects.
On Monday, the Post published an exposé detailing how the New Orleans Police Department (NOPD) relied on real-time facial recognition technology provided by Project NOLA, a nonprofit organization operating out of the University of New Orleans, to locate and apprehend suspects.
"Facial recognition technology poses a direct threat to the fundamental rights of every individual and has no place in our cities."
Project NOLA's website says the group "operates the largest, most cost-efficient, and successful networked [high definition] crime camera program in America, which was created in 2009 by criminologist Bryan Lagarde to help reduce crime by dramatically increasing police efficiency and citizen awareness."
The Post's Douglas MacMillan and Aaron Schaffer described Project NOLA as "a surveillance method without a known precedent in any major American city that may violate municipal guardrails around use of the technology."
As MacMillan and Schaffer reported:
Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime. New Orleans police took this technology a step further, utilizing a private network of more than 200 facial recognition cameras to watch over the streets, constantly monitoring for wanted suspects and automatically pinging officers' mobile phones through an app to convey the names and current locations of possible matches.
This, despite a 2022 municipal law limiting police use of facial recognition. That ordinance reversed the city's earlier outright ban on the technology and was criticized by civil liberties advocates for dropping a provision that required permission from a judge or magistrate commissioner prior to use.
"This is the facial recognition technology nightmare scenario that we have been worried about," Nathan Freed Wessler, deputy director with the ACLU's Speech, Privacy, and Technology Project, told the Post. "This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public."
Since 2023, Project NOLA's alert system—which was paused last month amid the Post's investigation—has contributed to dozens of arrests. Proponents, including NOPD and city officials, credit the collaboration for a decrease in crime in a city that had the nation's highest homicide rate as recently as 2022. Project NOLA has even been featured in the true crime series "Real Time Crime."
New Orleans Police Commissioner Anne Kirkpatrick told Project NOLA last month that its automated alerts must be shut off until she is "sure that the use of the app meets all the requirements of the law and policies."
Critics point to racial bias in facial recognition algorithms, which disproportionately misidentify racial minorities, as a particular cause for concern. According to one landmark federal study published in 2019, Black, Asian, and Native American people were up to 100 times likelier to be misidentified by facial recognition algorithms than white people.
The ACLU said in a statement that Project NOLA "supercharges the risks":
Consider Randal Reid, for example. He was wrongfully arrested based on faulty Louisiana facial recognition technology, despite never having set foot in the state. The false match cost him his freedom, his dignity, and thousands of dollars in legal fees. That misidentification happened based on a still image run through a facial recognition search in an investigation.
"We cannot ignore the real possibility of this tool being weaponized against marginalized communities, especially immigrants, activists, and others whose only crime is speaking out or challenging government policies," ACLU of Louisiana executive director Alanah Odoms said. "These individuals could be added to Project NOLA's watchlist without the public's knowledge, and with no accountability or transparency on the part of the police departments."
"Facial recognition technology poses a direct threat to the fundamental rights of every individual and has no place in our cities," Odoms asserted. "We call on the New Orleans Police Department and the city of New Orleans to halt this program indefinitely and terminate all use of live-feed facial recognition technology."
"ICE's attempt to have eyes and ears in as many places as we exist both online and offline should ring an alarm for all of us," said one campaigner.
U.S. Immigration and Customs Enforcement is seeking to hire a contractor as part of an effort to expand the monitoring of negative social media posts about the agency, its personnel, and operations, according to a report published Monday.
According to The Intercept's Sam Biddle, ICE is citing "an increase in threats" to agents and leadership as the reason for seeking a contractor to keep tabs on the public's social media activity.
The agency said the contractor "shall provide all necessary personnel, supervision, management, equipment, materials, and services, except for those provided by the government, in support of ICE's desire to protect ICE senior leaders, personnel, and facilities via internet-based threat mitigation and monitoring services."
"These efforts include conducting vulnerability assessments and proactive threat monitoring," ICE added, explaining that the contractor will be required to provide daily and monthly status reports and immediately alert supervisors of "imminent threats."
Careful what you post: ICE is seeking private contractors to conduct social media surveillance including detection of merely "negative" sentiment about the agency's leadership, agents, and general operations theintercept.com/2025/02/11/i...
— Sam Biddle (@sambiddle.com) February 11, 2025 at 9:27 AM
ICE will require the monitor to identify and report "previous social media activity which would indicate any additional threats to ICE," as well as any information indicating that individuals or groups "making threats have a proclivity for violence" and anything "indicating a potential for carrying out a threat."
According to Biddle:
It's unclear how exactly any contractor might sniff out someone's "proclivity for violence." The ICE document states only that the contractor will use "social and behavioral sciences" and "psychological profiles" to accomplish its automated threat detection.
Once flagged, the system will further scour a target's internet history and attempt to reveal their real-world position and offline identity. In addition to compiling personal information—such as the Social Security numbers and addresses of those whose posts are flagged—the contractor will also provide ICE with a "photograph, partial legal name, partial date of birth, possible city, possible work affiliations, possible school or university affiliation, and any identified possible family members or associates."
The document also requests "facial recognition capabilities that could take a photograph of a subject and search the internet to find all relevant information associated with the subject." The contract contains specific directions for targets found in other countries, implying the program would scan the domestic speech of American citizens.
"Careful what you post," Biddle warned in a social media post promoting his article.
ICE is already monitoring social media posts via contractor Giant Oak, which was hired during the first Trump administration and former Democratic President Joe Biden's term. However, "the goal of this [new] contract, ostensibly, is focused more narrowly on threats to ICE leadership, agents, facilities, and operations," according to Biddle.
Cinthya Rodriguez, an organizer with the immigrant rights group Mijente, told Biddle that "the current administration's attempt to use this technology falls within the agency's larger history of mass surveillance, which includes gathering information from personal social media accounts and retaliating against immigrant activists."
"ICE's attempt to have eyes and ears in as many places as we exist both online and offline should ring an alarm for all of us," Rodriguez added.
The search for expanded ICE social media surveillance comes as President Donald Trump's administration is carrying out what the Republican leader has promised will be the biggest mass deportation campaign in U.S. history. The U.S. Department of Homeland Security has been deporting migrants on military flights, with some deportees imprisoned at Guantánamo Bay, the notorious offshore U.S. military prison in Cuba.
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm," said one advocate.
Privacy advocates on Saturday said the AI Act, a sweeping proposed law to regulate artificial intelligence in the European Union whose language was finalized Friday, appeared likely to fail at protecting the public from one of AI's greatest threats: live facial recognition.
Representatives of the European Commission spent 37 hours this week negotiating provisions in the AI Act with the European Council and European Parliament, running up against Council representatives from France, Germany, and Italy who sought to water down the bill in the late stages of talks.
Thierry Breton, the European commissioner for internal market and a key negotiator of the deal, said the final product would establish the E.U. as "a pioneer, understanding the importance of its role as global standard setter."
But Amnesty Tech, the branch of global human rights group Amnesty International that focuses on technology and surveillance, was among the groups that raised concerns about the bloc's failure to include "an unconditional ban on live facial recognition," which was in an earlier draft, in the legislation.
The three institutions, said Mher Hakobyan, Amnesty Tech's advocacy adviser on AI, "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning AI regulation."
"While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty's research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed," said Hakobyan. "Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space, and rule of law that are already under threat throughout the E.U."
The bill is focused on protecting Europeans against other significant risks of AI, including the automation of jobs, the spread of misinformation, and national security threats.
Tech companies would be required to complete rigorous testing on AI software before operating in the EU, particularly for applications like self-driving vehicles.
Tools that could pose risks to hiring practices would also need to be subjected to risk assessments, and human oversight would be required in deploying the software.
AI systems including chatbots would be subjected to new transparency rules to avoid the creation of manipulated images and videos—known as deepfakes—without the public knowing that the images were generated by AI.
The indiscriminate scraping of internet or security footage images to create facial recognition databases would also be outright banned.
But the proposed AI Act, which could be passed before the European Parliament session ends in May, includes exemptions to facial recognition provisions, allowing law enforcement agencies to use live facial recognition to search for human trafficking victims, prevent terrorist attacks, and arrest suspects of certain violent crimes.
Ella Jakubowska, a senior policy adviser at European Digital Rights, told The Washington Post that "some human rights safeguards have been won" in the AI Act.
"It's hard to be excited about a law which has, for the first time in the E.U., taken steps to legalize live public facial recognition across the bloc," Jakubowska told Reuters. "Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Hakobyan also noted that the bill did not include a ban on "the export of harmful AI technologies, including for social scoring, which would be illegal in the E.U."
"Allowing European companies to profit off from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard," said Hakobyan.
After passage, many AI Act provisions would not take effect for 12 to 24 months.
Andreas Liebl, managing director of the German company AppliedAI Initiative, acknowledged that the law would likely have an impact on tech companies' ability to operate in the European Union.
"There will be a couple of innovations that are just not possible or economically feasible anymore," Liebl told the Post.
But Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told The New York Times that the E.U. will have to prove its "regulatory prowess" after the law is passed.
"Without strong enforcement," said Shrishak, "this deal will have no meaning."