

The only thing that definitively clears suspicion for ICE is biometric identification. The presumption is that people may lie, documents may be forged, but biometric scans are objective and certain. People are guilty until an algorithm proves them innocent.
On December 9, Mubashir, a Minneapolis man who has chosen to disclose only his first name, was wrongly arrested by Immigration and Customs Enforcement for the crime of stepping “outside as a Somali American.” During his lunch break, masked men tackled him to the ground, dragged him across the road, choked him, and restrained him. Mubashir insisted that he was a US citizen. He repeatedly offered to show the men his digital passport, as well as to provide his name and date of birth, to prove his citizenship. The agents refused.
Instead, they forced him to undergo a facial recognition scan to prove his identity. After several failed attempts to scan his face, he was arrested and taken to a detention center. Mubashir was held for several hours without medical assistance or water until he was eventually given the opportunity to present his passport. He was released after being subjected to fingerprint scanning.
Mubashir’s case is horrifying, but it’s becoming a common occurrence in President Donald Trump’s America. In April, Juan Carlos Lopez-Gomez was arrested, detained, and threatened with deportation after “biometrics indicated he was not a citizen.” This happened despite his insistence that he was a US-born citizen and his offer of his Real ID as proof. Lopez-Gomez was eventually released once his story gained national news coverage.
Another example: two ICE agents stopped Jesus Gutiérrez after he exited a Chicago gym. He didn’t have any identification on him, but he told officers he was a US citizen. Agents took a facial scan using the app Mobile Fortify to determine his legal status. While Gutiérrez wasn’t arrested, the experience left him traumatized.
In each of these cases, a person of color is stopped without probable cause or justification, forced to undergo biometric scans, and has their freedom left to the discretion of an algorithm.
These technologies function to silence those whose rights are being violated. Mubashir, Lopez-Gomez, and Gutiérrez all insisted that they were citizens—they all told the truth. However, for those agents, their words, even their state and federal documentation, were insufficient. Under ICE’s technologically driven terrorism, the only thing that definitively clears suspicion is biometric identification. The presumption is that people may lie, documents may be forged, but biometric scans are objective and certain. People are guilty until an algorithm proves them innocent.
However, biometric scanners are far from precision tools. Several of the problems with these technologies are spelled out in the Biometric Technology Report jointly submitted by the Department of Homeland Security (DHS), the Department of Justice (DOJ), and the White House Office of Science and Technology Policy (OSTP). According to the report, factors such as “facial features, expressions, obstructions, exposure, and image quality” can all influence the results of biometric scanners. Moreover, a “key challenge” for facial recognition algorithms is that they are more likely to err “when comparing images of two people who look comparatively similar,” such as family members. These algorithms also “yield consistently higher false positive match rates when applied to racial minorities.” This is the algorithmic bias problem.
DHS, as a co-author of the report, is clearly aware of these problems. Yet it still chooses to prioritize these algorithms when confronting people it merely suspects of being undocumented, a status that is impossible to determine simply by looking at a person.
This choice, however, is strategic. DHS and ICE are using these algorithms to help minimize their own responsibility. If Mubashir is arrested, it’s because the biometric scan was inconclusive. If Lopez-Gomez is detained, it’s because the algorithm says so. If Gutiérrez is released, it’s because the algorithm cleared him. The responsibility for the arrests, threats, and psychological harms these people experience has now been offshored onto an algorithm that cannot be held accountable.
After all, if the algorithm incorrectly identifies you as being undocumented, who do you appeal to? Even if the system is wrong, it’s now the voice of the accused against a voiceless algorithm. Unless an actual person is finally willing to listen to you, your words and documents won’t matter. Unless the press—an institution that is constantly under attack by the Trump administration—raises the alarm on your behalf, you may find yourself detained for weeks.
Even if someone speaks out after they’re released, DHS simply denies any wrongdoing. Despite more than 170 confirmed cases of US citizens being kidnapped by ICE agents, Homeland Security Secretary Kristi Noem still claims that “we have never once detained or deported an American citizen. We have not held them or charged them. When we find their identity, then that is when they are released.”
What’s interesting here is this notion that “their identity” must be found, as if it’s some grand mystery that requires an entire array of surveillance and identification technologies. As if this problem hasn’t already been solved by the invention of identification documents. Somehow, for the Trump administration, a voter ID is enough to prove one’s citizenship at the ballot box, but a Real ID is not enough proof if masked men randomly assault and question you about your legal status on the street.
DHS claims that biometrics “help enable operational missions, both to support national security and public safety, and deliver benefits and services with greater efficiency and accuracy.” The reality is that these technologies widen the scope of who is vulnerable to ICE’s secret police. So long as the algorithm legitimizes the agent’s racial profiling, anyone can become a legitimate target of state violence. This violence has already been judicially legitimized by Supreme Court Justice Brett Kavanaugh’s absurd ruling that immigration agents can deliberately target people on the basis of race, language, employment, or location.
The threat of biometric and surveillance technologies is only growing larger. DHS is still heavily investing in more invasive technologies that target undocumented immigrants and citizens alike. This will be a different struggle, but there are things we can do right now. First, we need to support independent news organizations that work to keep the public informed. The extent to which we know about many of these technologies is due entirely to the incredible work being done by journalists.
Second, we need to build tools and networks to support each other. This includes developing our own technologies to warn people about ICE raids, such as the website “People over Papers” and the “ICEBlock” app. Recording and posting pictures of ICE’s cruelty to popular social media sites is also incredibly important. The people who recorded Mubashir’s illegal arrest helped his story become national news.
Third, we need to put more pressure on Democrats to curb this violence. Democratic candidates running in 2026 are already integrating calls to “Abolish ICE” into their platforms. There is also movement at the state and federal levels to stop ICE kidnappings. This includes bills like California’s SB 805 and SB 627 and Illinois’ HB 1312, as well as HR 4456 and HR 4843. Even the recent House Homeland Security Committee hearing saw Democrats holding Noem responsible for ICE’s abuses. These are positive steps, but more work is still needed.
While the road will be daunting, together, we can keep each other safe.
The veto, said one critic, "sends the devastating message that corporate landlords can keep using secret price-fixing algorithms to take extra rent from people who have the least."
Colorado Gov. Jared Polis, a Democrat seen as a potential 2028 presidential contender, used his veto pen on Thursday to block legislation aimed at banning rent-setting algorithms that corporate landlords have used to drive up housing costs across the country.
The bill, known as H.B. 1004, would have prohibited algorithmic software "sold or distributed with the intent that it will be used by two or more landlords in the same market or a related market to set or recommend the amount of rent, level of occupancy, or other commercial term associated with the occupancy of a residential premises."
A report issued late last year by the Biden White House estimated that algorithmic rent-setting cost U.S. renters a combined $3.8 billion in 2023. According to the Biden administration's analysis, Denver tenants have been paying an average of $1,600 more in rent each year because of rent-setting algorithms, roughly the monthly rent for a one-bedroom apartment in the city.
Pat Garofalo, director of state and local policy at the American Economic Liberties Project, called Polis' veto "a betrayal" that makes "his priorities clear."
"Governor Polis had a simple choice: stand with working Coloradans or side with corporate landlords using secretive algorithms to allegedly price-fix rents," said Garofalo. "The governor talks a big game about affordability and abundance, but when given the chance to take real action—at no cost to taxpayers—he protected profiteers and let families keep paying a 13th month of rent. It's a betrayal of the values he claims to champion, and Colorado renters won't soon forget it."
"Governor Polis vetoed the most meaningful legislation we had to lower costs for renters."
Sam Gilman, co-founder and president of the Denver-based Community Economic Defense Project, said that the governor's veto "sends the devastating message that corporate landlords can keep using secret price-fixing algorithms to take extra rent from people who have the least."
"At a time when costs keep rising for working people and Republicans in Washington are attacking the social safety net," Gilman added, "Governor Polis vetoed the most meaningful legislation we had to lower costs for renters."
In a letter explaining his veto, Polis voiced agreement with the bill's supporters that "collusion between landlords for purposes of artificially constraining rental supply and increasing costs on renters is wrong." But he warned the bill could have the unintended effect of banning software that helps "efficiently manage residential real estate."
The governor's reasoning did not assuage critics.
"It stood up to corporate power," Gilman said of the legislation. "It promised to bring apartments back online. And it took on economic abuse that steals $1,600 a year from renters."
State Rep. Steven Woodrow (D-2) said it is "unfortunate that someone who claims to care so deeply about saving people money has chosen the interests of large corporate landlords over those of hard-working Coloradans."
State and local legislative efforts to rein in algorithmic rent-setting have gained steam in recent years following an explosive ProPublica story in 2022 detailing RealPage's sale of "software that uses data analytics to suggest daily prices for open units."
"RealPage discourages bargaining with renters and has even recommended that landlords in some cases accept a lower occupancy rate in order to raise rents and make more money," the investigative outlet reported. "One of the algorithm's developers told ProPublica that leasing agents had 'too much empathy' compared to computer-generated pricing. Apartment managers can reject the software's suggestions, but as many as 90% are adopted, according to former RealPage employees."
The Denver Post reported Thursday that the vetoed bill "essentially targeted RealPage," which lobbied aggressively against a similar measure that died in the Colorado Legislature last year.
Polis also used his veto authority on Thursday to tank legislation that would have "limited how much ambulance services can charge for transporting patients and required health insurance companies to cover the cost, minus deductibles or copays," The Colorado Sun reported.
"Rather than learning from its reckless contributions to mass violence in countries including Myanmar and Ethiopia, Meta is instead stripping away important protections that were aimed at preventing any recurrence of such harms."
An expert on technology and human rights and a survivor of the Rohingya genocide warned Monday that new policies adopted by social-media giant Meta, which owns Facebook and Instagram, could incite genocidal violence in the future.
On January 7, Meta CEO Mark Zuckerberg announced changes to Meta policies that were widely interpreted as a bid to gain approval from the incoming Trump administration. These included replacing fact-checkers with a community notes system, relocating content moderators from California to Texas, and lifting bans on criticism of certain groups such as immigrants, women, and transgender individuals.
Zuckerberg touted the changes as an anti-censorship campaign, saying the company was trying to "get back to our roots around free expression" and arguing that "the recent elections also feel like a cultural tipping point toward, once again, prioritizing speech."
"With Zuckerberg and other tech CEOs lining up (literally, in the case of the recent inauguration) behind the new administration's wide-ranging attacks on human rights, Meta shareholders need to step up and hold the company's leadership to account to prevent Meta from yet again becoming a conduit for mass violence, or even genocide."
However, Pat de Brún, head of Big Tech Accountability at Amnesty International, and Maung Sawyeddollah, the founder and executive director of the Rohingya Students' Network who himself fled violence from the Myanmar military in 2017, said the change in policies would make it even more likely that Facebook or Instagram posts would inflame violence against marginalized communities around the world. While Zuckerberg's announcement initially only applied to the U.S., the company has suggested it could make similar changes internationally as well.
"Rather than learning from its reckless contributions to mass violence in countries including Myanmar and Ethiopia, Meta is instead stripping away important protections that were aimed at preventing any recurrence of such harms," de Brún and Sawyeddollah wrote on the Amnesty International website. "In enacting these changes, Meta has effectively declared an open season for hate and harassment targeting its most vulnerable and at-risk people, including trans people, migrants, and refugees."
Past research has shown that Facebook's algorithms can promote hateful, false, or racially provocative content in an attempt to increase the amount of time users spend on the site and therefore the company's profits, sometimes with devastating consequences.
One example is what happened to the Rohingya, as de Brún and Sawyeddollah explained:
We have seen the horrific consequences of Meta's recklessness before. In 2017, Myanmar security forces undertook a brutal campaign of ethnic cleansing against Rohingya Muslims. A United Nations Independent Fact-Finding Commission concluded in 2018 that Myanmar had committed genocide. In the years leading up to these attacks, Facebook had become an echo chamber of virulent anti-Rohingya hatred. The mass dissemination of dehumanizing anti-Rohingya content poured fuel on the fire of long-standing discrimination and helped to create an enabling environment for mass violence. In the absence of appropriate safeguards, Facebook's toxic algorithms intensified a storm of hatred against the Rohingya, which contributed to these atrocities. According to a report by the United Nations, Facebook was instrumental in the radicalization of local populations and the incitement of violence against the Rohingya.
In late January, Sawyeddollah—with the support of Amnesty International, the Open Society Justice Initiative, and Victim Advocates International—filed a whistleblower's complaint against Meta with the Securities and Exchange Commission (SEC) concerning Facebook's role in the Rohingya genocide.
The complaint argued that the company, then registered as Facebook, had known or at least "recklessly disregarded" since 2013 that its algorithm was encouraging the spread of anti-Rohingya hate speech and that its content moderation policies were not sufficient to address the issue. Despite this, it misrepresented the situation to both the SEC and investors in multiple filings.
Now, Sawyeddollah and de Brún are concerned that history could repeat itself unless shareholders and lawmakers take action to counter the power of the tech companies.
"With Zuckerberg and other tech CEOs lining up (literally, in the case of the recent inauguration) behind the new administration's wide-ranging attacks on human rights, Meta shareholders need to step up and hold the company's leadership to account to prevent Meta from yet again becoming a conduit for mass violence, or even genocide," they wrote. "Similarly, legislators and lawmakers in the U.S. must ensure that the SEC retains its neutrality, properly investigate legitimate complaints—such as the one we recently filed, and ensure those who abuse human rights face justice."
The human rights experts aren't the only ones concerned about Meta's new direction. Even employees are sounding the alarm.
"I really think this is a precursor for genocide," one former employee told Platformer when the new policies were first announced. "We've seen it happen. Real people's lives are actually going to be endangered. I'm just devastated."