If Democrats want to regain trust ahead of the 2026 elections, they need to show they are willing to take on Big Tech with the urgency that everyday Americans are demanding.
One year ago, Mark Zuckerberg, Elon Musk, and Jeff Bezos got front-row seats at President Donald Trump’s inauguration. The images of CEOs enjoying better seats than congressional leaders foreshadowed exactly how much access and influence Big Tech would wield in the Trump White House.
Since entering office, Trump has repeatedly signaled deference to a small group of powerful technology executives, aided by advisors like AI czar David Sacks who have spent their careers profiting from the industry. With Trump’s blessing, companies like NVIDIA are now poised to profit from sales of advanced chips to China, America’s foremost strategic competitor. That choice exposes a fundamental contradiction at the heart of the administration’s AI policy: prioritizing short-term corporate gains over long-term public interests.
In December, Trump signed an executive order threatening states for enacting AI safety laws without offering a credible federal framework to replace them. It was yet another misuse of executive power—and an industry giveaway disguised as a competitiveness strategy. By threatening states for acting while offering no federal safeguards in return, the order attempts to clear the field for companies that have spent years lobbying against meaningful accountability.
Supporters argue that preemption is necessary to help the United States compete with China. But if that’s true, why is the president offering the Chinese Communist Party access to superior American technology and a clear path to win the AI race?
That contradiction hasn’t gone unnoticed, even inside Trump’s own coalition. Indeed, most Americans continue to express deep concern about Trump’s growing alignment with Silicon Valley.
Still, Trump has only doubled down, pushing a vision of global “tech dominance” with little regard for the real-world consequences of unprecedented AI investment. Even Republicans who were once vocal critics of Big Tech are now taking money from Meta and other companies to accelerate AI on industry-friendly terms.
For Democrats, this should be a moment of clarity—and a moment to lead. While many lawmakers have raised legitimate concerns about AI’s risks, the party’s response has too often leaned on commissions, task forces, and studies when the public is asking for clear rules and accountability.
Democrats must ask themselves: if Big Tech is already working overtime to block meaningful safeguards, why not meet the moment by standing clearly on the side of consumers, parents, and workers? Voters are asking for real leadership, but all they are seeing is a familiar pattern: billion-dollar companies consolidating power, writing the rules, and dodging accountability, leaving children, workers, and democratic institutions to deal with the consequences.
The 2024 election underscored a deeper challenge for Democrats than economic uncertainty or flawed candidates. Many voters struggled to see a coherent vision for the future under Democratic leadership. That vacuum has allowed Republicans to posture as pro-consumer and pro-family while quietly shielding powerful companies from accountability.
The debate over AI offers Democrats a chance to do better. While Republicans move to shield companies from accountability and block reasonable state action without offering meaningful protections, Democrats can articulate a smarter approach: clear expectations for safety; real liability when technology causes harm; serious preparation for economic disruption; and responsible planning for AI’s massive energy demands.
AI is no longer an abstract idea; its impacts are already being felt. But without clear rules, it risks reshaping our economy, labor markets, and democratic institutions in ways that undermine security, opportunity, and trust. When elected leaders prioritize the agendas of their corporate executives over the long-term public interest, trust erodes—not just in institutions, but in innovation itself.
That erosion of trust is already visible. Workers worry about job displacement, recent graduates struggle to enter a rapidly changing workforce, and parents fear how algorithmic manipulation and AI-generated deepfakes will shape their children’s reality. These concerns aren’t partisan. This shared national anxiety goes to the heart of the American experiment.
If Democrats want to regain trust ahead of the 2026 elections, they need to show they are willing to take on Big Tech with the urgency that everyday Americans are demanding. That means recognizing that AI isn’t just another talking point, and pursuing strong, enforceable standards now—so its extraordinary potential strengthens the middle class, improves our children’s future, and reinforces democratic institutions rather than undermining them.
"During a disaster... Waymos would be blocking evacuation routes. Hard to believe no one asked these questions, until you realize that good governance is suspended when billionaires knock on the door," said one observer.
A citywide Pacific Gas & Electric power outage Saturday in San Francisco paralyzed Waymo autonomous taxis, exacerbating traffic chaos and prompting a fleet-wide shutdown—and calls for more robust robotaxi regulation.
Around 130,000 San Francisco homes and businesses went dark due to an afternoon fire at a PG&E substation in the city's South of Market neighborhood. While most PG&E customers had their electricity restored by around 9:00 pm, more than 20,000 ratepayers remained without power on Sunday morning, according to the San Francisco Standard.
The blackout left traffic lights inoperable and rendered much of Waymo's fleet of around 300 robotaxis "stuck and confused," as one local resident put it, with cascading failures leaving clusters of as many as half a dozen robotaxis immobile. In some cases, the stopped vehicles nearly caused collisions.
On a walk across San Francisco on Saturday night, before the fleet was grounded at around 7:00 pm, this reporter saw numerous Waymos stuck on streets or in intersections, while others seemed to surrender, pulling over or even backing out of intersections and parking themselves where they could.
Bad look for Waymo. Lots of reports out of SF where the power outage caused its robotaxis to stop in traffic, causing jams.
On the other side, the Tesla robotaxi fleet (& personal FSD users) continued the service without hiccups.
Not clear if Waymo vehicles themselves are… pic.twitter.com/DexuAh0Bpt
— Jaan of the EVwire.com ⚡ (@TheEVuniverse) December 21, 2025
"There are a lot of unique road scenarios on the roads I can see being hard to anticipate and you just hope your software can manage it. 'What if we lose contact with all our cars due to a power outage' is something you should have a meeting and a plan about ahead of time," Fast Company digital editor Morgan Clendaniel—a self-described "big Waymo guy"—said Sunday on Bluesky.
Clendaniel called the blackout "a predictable scenario [Waymo] should have planned for, when clearly they had no plan, because 'they all just stop' is not a plan and is not viable for city roads in an emergency."
Waymo—which is owned by Alphabet, the parent company of Google—said it is "focused on keeping our riders safe and ensuring emergency personnel have the clear access they need to do their work.”
Oakland Observer founder and publisher Jaime Omar Yassin said on X, "as others have noted, during a disaster with a consequent power outage, Waymos would be blocking evacuation routes. Hard to believe no one asked these questions, until you realize that good governance is suspended when billionaires knock on the door."
"Waymo's problems are known to anyone paying attention," he added. "At a recent anti-[Department of Homeland Security] protest that occurred coincidentally not far from a Waymo depot, vehicles simply left [the] depot and jammed [the] street behind a police van far from [the] protest that wasn't blocking traffic."
Waymo came to dominate the San Francisco robotaxi market after the California Public Utilities Commission (CPUC) suspended the permit of its leading competitor, Cruise, to operate driverless taxis over public safety concerns. In an October 2023 incident, a pedestrian was critically injured when a Cruise car dragged her 20 feet after she was struck by a human-driven vehicle; the CPUC accused Cruise of covering up the details of the accident.
Some California officials have called for more robust regulation of robotaxis like Waymo. But last year, a bill introduced by state Sen. Dave Cortese (D-15) that would have empowered county and municipal governments "to protect the public through local governance of autonomous vehicles" failed to pass after it was watered down amid pressure from industry lobbyists.
In San Francisco, progressive District 9 Supervisor Jackie Fielder addressed the issue at a press conference last month, held after a Waymo ran over and killed KitKat, a beloved Mission District bodega cat. While Waymo "may treat our communities as laboratories and human beings and our animals as data points," she said, "we in the Mission do not."
Waymo claimed that KitKat "darted" under its car, but security camera footage corroborated witness accounts to Mission Local that the cat had been sitting in front of the vehicle for as long as eight seconds before it was crushed.
Fielder lamented that "the fate of autonomous vehicles has been decided behind closed doors in Sacramento, largely by politicians in the pocket of big tech and tech billionaires."
The first-term supervisor—San Francisco's title for city council members—is circulating a petition "calling on the California State Legislature and [Gov. Gavin Newsom] to give counties the right to vote on whether autonomous vehicles can operate in their areas."
"This would let local communities make decisions that reflect their needs and safety concerns, while also addressing state worries about intercity consistency," Fielder wrote.
Other local progressives pointed to the citywide blackout as more proof that PG&E—whose reputation has been battered by incidents like the 2018 Camp Fire, which killed 85 people in Butte County and led to the company pleading guilty to 84 counts of involuntary manslaughter—should be publicly run, as progressive advocacy groups have urged for years.
The San Francisco power outage is absolutely unacceptable. There are still people & businesses in SF that don’t have power. I can’t imagine what this is like for the elderly & people with disabilities. PG&E should not be a private company.
— Nadia Rahman 駱雯 (@nadiarahman.bsky.social) December 21, 2025 at 10:35 AM
"Sacramento and Palo Alto don’t have PG&E, they have public power," progressive Democratic congressional candidate Saikat Chakrabarti said Sunday on X. "They pay about half as much as us in utility bills and do not have weekend-long power outages. We could have that in San Francisco."
The only thing that definitively clears suspicion for ICE is biometric identification. The presumption is that people may lie, documents may be forged, but biometric scans are objective and certain. People are guilty until an algorithm proves them innocent.
On December 9, Mubashir, a Minneapolis man who has chosen to disclose only his first name, was wrongly arrested by Immigration and Customs Enforcement for the crime of stepping “outside as a Somali American.” During his lunch break, masked men tackled him to the ground, dragged him across the road, choked him, and restrained him. Mubashir insisted that he was a US citizen. He repeatedly offered to show the men his digital passport, as well as to provide his name and date of birth to prove his citizenship. The agents refused.
Instead, they forced him to undergo a facial recognition scan to prove his identity. After several failed attempts to scan his face, he was arrested and taken to a detention center. Mubashir was held for several hours without medical assistance or water, until eventually he was given the opportunity to present his passport. He was released after being subjected to fingerprint scanning.
Mubashir’s case is horrifying, but it’s becoming a common occurrence in President Donald Trump’s America. In April, Juan Carlos Lopez-Gomez was arrested, detained, and threatened with deportation after “biometrics indicated he was not a citizen,” despite his insistence that he was a US-born citizen and his offer of his Real ID as proof. Lopez-Gomez was eventually released once his story gained national news coverage.
Another example: two ICE agents stopped Jesus Gutiérrez after he exited a Chicago gym. He didn’t have any identification on him, but he told officers he was a US citizen. Agents took a facial scan using the app Mobile Fortify to determine his legal status. While Gutiérrez wasn’t arrested, the experience left him traumatized.
In each of these cases, a person of color is stopped without probable cause or justification, forced to undergo biometric scans, and has their freedom left to the discretion of an algorithm.
These technologies function to silence those whose rights are being violated. Mubashir, Lopez-Gomez, and Gutiérrez all insisted that they were citizens—they all told the truth. However, for those agents, their words, even their state and federal documentation, were insufficient. Under ICE’s technologically driven terrorism, the only thing that definitively clears suspicion is biometric identification. The presumption is that people may lie, documents may be forged, but biometric scans are objective and certain. People are guilty until an algorithm proves them innocent.
However, biometric scanners are far from precision tools. Several of the problems with these technologies are spelled out in the Biometric Technology Report jointly submitted by the Department of Homeland Security (DHS), the Department of Justice (DOJ), and the White House Office of Science and Technology Policy (OSTP). According to the report, factors such as “facial features, expressions, obstructions, exposure, and image quality” can all influence the results of biometric scanners. Moreover, a “key challenge” for facial recognition algorithms is that they are more likely to err “when comparing images of two people who look comparatively similar,” such as family members. These algorithms also “yield consistently higher false positive match rates when applied to racial minorities.” This is the algorithmic bias problem.
DHS, as a co-author of the report, is clearly aware of these problems. Yet the agency still chooses to prioritize these algorithms when confronting people it merely suspects of being undocumented—a status that is impossible to determine simply by looking at a person.
This choice, however, is strategic. DHS and ICE are using these algorithms to help minimize their own responsibility. If Mubashir is arrested, it’s because the biometric scan was inconclusive. If Lopez-Gomez is detained, it’s because the algorithm says so. If Gutiérrez is released, it’s because the algorithm cleared him. The responsibility for the arrests, threats, and psychological harms these people experience has been offloaded onto an algorithm that cannot be held accountable.
After all, if the algorithm incorrectly identifies you as being undocumented, who do you appeal to? Even if the system is wrong, it’s now the voice of the accused against a voiceless algorithm. Unless an actual person is finally willing to listen to you, your words and documents won’t matter. Unless the press—an institution that is constantly under attack by the Trump administration—raises the alarm on your behalf, you may find yourself detained for weeks.
Even if someone speaks out after they’re released, DHS simply denies any wrongdoing. Despite more than 170 confirmed cases of US citizens being kidnapped by ICE agents, Homeland Security Secretary Kristi Noem still claims that “we have never once detained or deported an American citizen. We have not held them or charged them. When we find their identity, then that is when they are released.”
What’s interesting here is this notion that “their identity” must be found, as if it’s some grand mystery that requires an entire array of surveillance and identification technologies. As if this problem hasn’t already been solved by the invention of identification documents. Somehow, for the Trump administration, a voter ID is enough to prove one’s citizenship at the ballot box, but a Real ID is not enough proof if masked men randomly assault and question you about your legal status on the street.
DHS claims that biometrics “help enable operational missions, both to support national security and public safety, and deliver benefits and services with greater efficiency and accuracy.” The reality is that these technologies widen the scope of who is vulnerable to ICE’s secret police. So long as the algorithm legitimizes the agent’s racial profiling, anyone can become a legitimate target of state violence. This violence has already been judicially legitimized by Supreme Court Justice Brett Kavanaugh’s absurd ruling that immigration agents can deliberately target people on the basis of race, language, employment, or location.
The threat of biometric and surveillance technologies is only growing larger. DHS is still heavily investing in more invasive technologies that target undocumented immigrants and citizens alike. This will be a difficult struggle, but there are things we can do right now. First, we need to support independent news organizations that work to keep the public informed. The extent to which we know about many of these technologies is due entirely to the incredible work being done by journalists.
Second, we need to build tools and networks to support each other. This includes developing our own technologies to warn people about ICE raids, such as the website “People over Papers” and the “ICEBlock” app. Recording and posting pictures of ICE’s cruelty to popular social media sites is also incredibly important. The people who recorded Mubashir’s illegal arrest helped his story become national news.
Third, we need to put more pressure on Democrats to curb this violence. Democratic candidates running in 2026 are already integrating calls to “Abolish ICE” into their platforms. There is also movement at the state and federal level to stop ICE kidnappings. This includes bills like California’s SB 805 and SB 627 and Illinois’ HB1312, as well as HR 4456 and HR 4843. Even a recent House Homeland Security Committee hearing saw Democrats holding Noem responsible for ICE’s abuses. These are positive steps, but more work is still needed.
While the road ahead will be daunting, together we can keep each other safe.