The Border Patrol is engaging in "dragnet surveillance of Americans on the streets, on the highways, in their cities, in their communities," charged one critic.
The Associated Press has exposed what it describes as a "mass surveillance network" being run by the US Border Patrol that is increasingly ensnaring US drivers who have committed no crimes.
In a report published on Thursday, the AP revealed that the Border Patrol has been using a "predictive intelligence program" that surveils and flags drivers as suspicious based solely on "where they came from, where they were going, and which route they took."
The Border Patrol then passes this information on to local law enforcement officials, who pull over the targeted vehicles on flimsy pretexts such as minor speed-limit violations, tinted windows, or even "a dangling air freshener" that purportedly obstructs the driver's view.
From there, the drivers are subjected to aggressive questioning and vehicle searches that in some cases have resulted in arrests despite no evidence of criminal behavior on the part of the drivers.
To illustrate this, the AP told the story of Lorenzo Gutierrez Lugo, a truck driver whose work entails "transporting furniture, clothing, and other belongings to families in Mexico" across the US border.
After Gutierrez Lugo's driving routes got him flagged by the surveillance system, he was pulled over in southern Texas by local law enforcement officials, who proceeded to search his vehicle for contraband.
Although officials found no illicit goods in his truck, they nonetheless arrested him on suspicion of money laundering because he was in possession of thousands of dollars in cash. However, Luis Barrios, who owns the trucking company that employed Gutierrez Lugo, explained to the AP that customers who receive deliveries often pay drivers directly in cash.
Although no criminal charges were ultimately brought against Gutierrez Lugo, Barrios nonetheless said that his company had to spend $20,000 in legal fees to both clear his driver's name and to return company property that had been impounded by police.
The AP notes that operations such as this are symbolic of "the quiet transformation of [the US Border Patrol's] parent agency, US Customs and Border Protection, into something more akin to a domestic intelligence operation."
Former law enforcement officials also tell the AP that the Border Patrol has gone to great lengths to keep its mass surveillance program a secret by trying to ensure that it is never mentioned in court documents and police reports. In fact, the Border Patrol in some cases has even dropped criminal cases against suspects for fear that details about the mass surveillance program would be revealed at trial.
In a post on X, journalist Mike LaSusa remarked that this Border Patrol program represents "another example of powerful, invasive, mass surveillance tech being wielded by US immigration authorities." He added that "so much about these programs is hidden from the public, making it difficult to know whether they keep Americans safe or violate privacy protections."
The program has been steadily expanding from the border regions of the US into the interior of the country. The AP discovered that US Customs and Border Protection "has placed at least four cameras in the greater Phoenix area over the years, one of which was more than 120 miles (193 kilometers) from the Mexican frontier, beyond the agency’s usual jurisdiction of 100 miles (161 kilometers) from a land or sea border."
Additionally, the AP found that the program is "impacting residents of big metropolitan areas and people driving to and from large cities such as Chicago and Detroit, as well as from Los Angeles, San Antonio, and Houston to and from the Mexican border region."
Nicole Ozer, executive director of the Center for Constitutional Democracy at UC Law San Francisco, told the AP that US Customs and Border Protection is engaging in "dragnet surveillance of Americans on the streets, on the highways, in their cities, in their communities" while "collecting mass amounts of information about who people are, where they go, what they do, and who they know."
"These surveillance systems do not make communities safer," Ozer emphasized.
"An ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien," said the ranking member of the House Homeland Security Committee.
Immigration agents are using facial recognition software as "definitive" evidence to determine immigration status and are collecting data from US citizens without their consent. In some cases, agents may detain US citizens, including ones who can provide their birth certificates, if the app says they are in the country illegally.
These are a few of the findings from a series of articles published this past week by 404 Media, which has obtained documents and video evidence showing that Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) agents are using a smartphone app in the field during immigration stops, scanning the faces of people on the street to verify their citizenship.
The report found that agents frequently conduct stops that "seem to have little justification beyond the color of someone’s skin... then look up more information on that person, including their identity and potentially their immigration status."
While it is not clear what application the agencies are using, 404 previously reported that ICE is using an app called Mobile Fortify that allows ICE to simply point a camera at a person on the street. The photos are then compared with a bank of more than 200 million images and dozens of government databases to determine info about the person, including their name, date of birth, nationality, and information about their immigration status.
On Friday, 404 published an internal document from the Department of Homeland Security (DHS) which stated that "ICE does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data/photograph collection." The document also states that the image of any face that agents scan, including those of US citizens, will be stored for 15 years.
The outlet identified several videos that have been posted to social media of immigration officials using the technology.
In one, taken in Chicago, armed agents in sunglasses and face coverings are shown accosting a pair of Hispanic teenagers on bicycles, asking where they are from. The 16-year-old boy who filmed the encounter said he is "from here"—an American citizen—but that he only has a school ID on him. The officer tells the boy he'll be allowed to leave if he'll "do a facial." The other officer then snaps a photo of him with a phone camera and asks his name.
In another video, also in Chicago, agents are shown surrounding a driver, who declines to show his ID. Without asking, one officer points his phone at the man. "I’m an American citizen, so leave me alone," the driver says. "Alright, we just got to verify that," the officer responds.
Even if the people approached in these videos had produced identification proving their citizenship, there's no guarantee that agents would have accepted it, especially if the app gave them information to the contrary.
On Wednesday, Rep. Bennie Thompson (D-Miss.), the ranking member of the House Homeland Security Committee, told 404 that ICE agents will even trust the app's results over a person's government documents.
“ICE officials have told us that an apparent biometric match by Mobile Fortify is a ‘definitive’ determination of a person’s status and that an ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien,” he said.
This is despite the fact that, as Nathan Freed Wessler, deputy director of the ACLU's Speech, Privacy, and Technology Project, told 404, “face recognition technology is notoriously unreliable, frequently generating false matches and resulting in a number of known wrongful arrests across the country."
Thompson said: "ICE using a mobile biometrics app in ways its developers at CBP never intended or tested is a frightening, repugnant, and unconstitutional attack on Americans’ rights and freedoms.”
According to an investigation published in October by ProPublica, more than 170 US citizens have been detained by immigration agents, often in squalid conditions, since President Donald Trump returned to office in January. In many of these cases, agents wrongly claimed that the documents proving the individuals' citizenship were false.
During a press conference this week, Homeland Security Secretary Kristi Noem denied this reality, stating that "no American citizens have been arrested or detained" as part of Trump's "mass deportation" crusade.
"We focus on those who are here illegally," she said.
But as DHS's internal document explains, facial recognition software is necessary in the first place because "ICE agents do not know an individual's citizenship at the time of the initial encounter."
David Bier, the director of immigration studies at the Cato Institute, explains that the use of such technology suggests that ICE's operations are not the "highly targeted raids" the agency likes to portray, but instead "random fishing expeditions."
The feed has eyes. What you share to stay connected now feeds one of the world’s largest surveillance machines. This isn’t paranoia, it’s policy. You do not need to speak to be seen. Every word you read, every post you linger on, every silence you leave behind is measured and stored. The watchers need no warrant—only your attention.
Each post, like, and photograph you share enters a room you cannot see. The visible audience, friends and followers, is only the front row. Behind them sit analysts, contractors, and automated systems that harvest words at scale. Over the last decade, the federal security apparatus has turned public social media into a continuous stream of open-source intelligence. What began as episodic checks for imminent threats matured into standing watch floors, shared databases, and automated scoring systems that never sleep. The rationale is familiar: national security, fraud prevention, situational awareness. The reality is starker: Everyday conversation now runs through a mesh of government and corporate surveillance that treats public speech, and the behavior around it, as raw material.
You do not need to speak to be seen. The act of being online is enough. Every scroll, pause, and click is recorded, analyzed, and translated into behavioral data. Algorithms study not only what we share but what we read and ignore, and how long our eyes linger. Silence becomes signal, and absence becomes information. The watchers often need no warrant for public content or purchased metadata, only your connection. In this architecture of observation, even passivity is participation.
This did not happen all at once. It arrived through privacy impact assessments, procurement notices, and contracts that layered capability upon capability. The Department of Homeland Security (DHS) built watch centers to monitor incidents. Immigration and Customs Enforcement folded social content into investigative suites that already pull from commercial dossiers. Customs and Border Protection (CBP) linked open posts to location data bought from brokers. The FBI refined its triage flows for threats flagged by platforms. The Department of Defense and the National Security Agency fused foreign collection and information operations with real-time analytics.
Little of this resembles a traditional wiretap, yet the effect is broader because the systems harvest not just speech but the measurable traces of attention. Most of it rests on the claim that publicly available information is fair game. The law has not caught up with the scale or speed of the tools. The culture has not caught up either.
The next turn of the wheel is underway. Immigration and Customs Enforcement plans two round-the-clock social media hubs, one in Vermont and one in California, staffed by private contractors for continuous scanning and rapid referral to Enforcement and Removal Operations. The target turnaround for urgent leads is 30 minutes. That is not investigation after suspicion. That is suspicion manufactured at industrial speed. The new programs remain at the request-for-information stage, yet align with an unmistakable trend. Surveillance shifts from ad hoc to ambient, from a hand search to machine triage, from situational awareness to an enforcement pipeline that links a post to a doorstep.
Artificial intelligence makes the expansion feel inevitable. Algorithms digest millions of posts per hour. They perform sentiment analysis, entity extraction, facial matching, and network mapping. They learn from the telemetry that follows a user: time on page, scroll depth, replay of a clip, the cadence of a feed. They correlate a pseudonymous handle with a résumé, a family photo, and a travel record. Data brokers fill in addresses, vehicles, and associates. What once took weeks now takes minutes. Scale is the selling point. It is also the danger. Misclassification travels as fast as truth, and error at scale becomes a kind of policy.
George Orwell warned that “to see what is in front of one’s nose needs a constant struggle.” The struggle today is to see how platform design, optimized for engagement, creates the very data that fuels surveillance. Engagement generates signals, signals invite monitoring, and monitoring, once normalized, reshapes speech and behavior. A feed that measures both speech and engagement patterns maps our concerns as readily as our views.
Defenders of the current model say agencies only view public content. That reassurance misses the point. Public is not the same as harmless. Aggregation transforms meaning. When the government buys location histories from data brokers, then overlays them with social content, it tracks lives without ever crossing a courthouse threshold. CBP has done so with products like Venntel and Babel Street, as documented in privacy assessments and Freedom of Information Act releases. A phone that appears at a protest can be matched to a home, a workplace, a network of friends, and an online persona that vents frustration in a late-night post. Add behavioral traces from passive use, where someone lingers and what they never click, and the portrait grows intimate enough to feel like surveillance inside the mind.
The FBI’s posture has evolved as well, particularly after January 6. Government Accountability Office reviews describe changes to how the bureau receives and acts on platform tips, along with persistent questions about the balance between public safety and overreach. The lesson is not that monitoring never helps. The lesson is that systems built for crisis have a way of becoming permanent, especially when they are fed by constant behavioral data that never stops arriving. Permanence demands stronger rules than we currently have.
Meanwhile, the DHS Privacy Office continues to publish assessments for publicly available social media monitoring and situational awareness. These documents describe scope and mitigations, and they reveal how far the concept has stretched. As geospatial, behavioral, and predictive analytics enter the toolkit, awareness becomes analysis, and analysis becomes anticipation. The line between looking and profiling thins because the input is no longer just what we say but what our attention patterns imply.
The First Amendment restrains the state from punishing lawful speech. It does not prevent the state from watching speech at scale, nor does it account for the scoring of attention. That gap produces a chilling effect that is hard to measure yet easy to feel. People who believe they are watched temper their words and their reading. They avoid organizing, and they avoid reading what might be misunderstood. This is not melodrama. It is basic social psychology. Those who already live closer to the line feel the pressure first: immigrants, religious and ethnic minorities, journalists, activists. Because enforcement databases are not neutral, they reproduce historical biases unless aggressively corrected.
Error is not theoretical. Facial recognition has misidentified innocent people. Network analysis has flagged friends and relatives who shared nothing but proximity. A meme or a lyric, stripped of context, can be scored as a threat. Behavioral profiles amplify risk because passivity can be interpreted as intent when reduced to metrics. The human fail-safe does not always work because human judgment is shaped by the authority of data. When an algorithm says possible risk, the cost of ignoring it feels higher than the cost of quietly adding a name to a file. What begins as prudence ends as normalization. What begins as a passive trace ends as a profile.
Fourth Amendment doctrine still leans on the idea that what we expose to the public is unprotected. That formulation collapses when the observer is a system that never forgets and draws inferences from attention as well as expression. Carpenter v. United States recognized a version of this problem for cell-site records, yet the holding has not been extended to the government purchase of similar data from brokers or to the bulk ingestion of content that individuals intend for limited audiences. First Amendment jurisprudence condemns overt retaliation against speakers. It has little to say about surveillance programs that corrode participation, including the act of reading, without ever bringing a case to court. Due process requires notice and an opportunity to contest. There is no notice when the flag is silent and the consequences are dispersed across a dozen small harms, each one deniable. There is no docket for the weight assigned to your pauses.
Wendell Phillips wrote, “Eternal vigilance is the price of liberty.” The line is often used to defend surveillance. It reads differently from the other side of the glass. The public must be vigilant about those who claim vigilance as a mandate without bounds. A republic cannot outsource its conscience to machines and contractors.
You cannot solve a policy failure with personal hygiene, but you can buy time. Treat every post as a public record that might be copied, scraped, and stored. Remove precise locations from images. Turn off facial tagging and minimize connections between accounts. Separate roles. If you organize, separate that work from family and professional identities with different emails, phone numbers, and sign ins. Use two-factor authentication everywhere. Prefer end-to-end encrypted tools like Signal for sensitive conversations. Scrub photo metadata before upload. Search your own name and handles in a private browser, then request removal from data-broker sites. Build a small circle that helps one another keep settings tight and recognize phishing and social engineering. These habits are not retreat. They are discipline.
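One of those habits, scrubbing photo metadata before upload, needs no special software. As a minimal sketch (assuming JPEG input; the helper name `strip_jpeg_exif` is illustrative, and dedicated tools such as exiftool are more thorough), EXIF and GPS data live in a JPEG's APP1 segments, which can simply be dropped while copying the rest of the file:

```python
import struct

def strip_jpeg_exif(data: bytes) -> bytes:
    """Return a copy of JPEG `data` with APP1 (EXIF/GPS) segments removed."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")  # keep the Start of Image marker
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            # Malformed or entropy-coded bytes: copy the rest verbatim
            out += data[i:]
            break
        marker = data[i + 1]
        if marker == 0xDA:
            # Start of Scan: compressed image data follows; copy it all
            out += data[i:]
            break
        # Each segment: 2-byte marker, then a big-endian length that
        # counts itself plus the payload
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker != 0xE1:  # drop APP1 (EXIF/GPS); keep everything else
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The image pixels are untouched; only the metadata segments disappear, so the cleaned file opens normally in any viewer.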
Adopt the same care for reading as for posting. Log out when you can, block third-party trackers, limit platform time, and assume that dwell time and scroll depth are being recorded. Adjust feed settings to avoid autoplay and personalized tracking where possible. Use privacy-respecting browsers and extensions that reduce passive telemetry. Small frictions slow the flow of behavioral data that feeds automated suspicion.
Push outward as well. Read the transparency reports that platforms publish. They reveal how often governments request data and how often companies comply. Support groups that litigate and legislate for restraint, including the Electronic Frontier Foundation, the Brennan Center for Justice, and the Center for Democracy and Technology. Demand specific reforms: warrant requirements for government purchase of location and browsing data, public inventories of social media monitoring contracts and tools, independent audits of watch centers with accuracy and bias metrics, and accessible avenues for redress when the system gets it wrong. Insist on disclosure of passive telemetry collection and retention, not only subpoenas for content.
The digital commons was built on a promise of connection. Surveillance bends that commons toward control. It does so quietly, through dashboards and metrics that reward extraction of both speech and attention. The remedy begins with naming what has happened, then insisting that the rules match the power of the tools. A healthy public sphere allows risk. It tolerates anger and error. It places human judgment above automated suspicion. It restores the burden of proof to the state. It recognizes that attention is speech by another name, and that freedom requires privacy in attention as well as privacy in voice.
You do not need to disappear to stay free. You need clarity, patience, and a stubborn loyalty to truth in a time that rewards distraction. The watchers will say the threat leaves no choice, that vigilance demands vision turned outward. History says freedom depends on the courage to look inward first. The digital world was built as a commons, a place to connect and create, yet it is becoming a hall of mirrors where every glance becomes a record and every silence a signal. Freedom will not survive by accident. It must be practiced—one mindful post, one untracked thought, one refusal to mistake visibility for worth. The right to be unobserved is not a luxury. It is the quiet foundation of every other liberty. Guard even the silence, for in the end it may be the only voice that still belongs to you.