The buildout of lots and lots of power-gobbling data centers is not as inevitable as it appears.
Caveat—this post was written entirely with my own intelligence, so who knows. Maybe it’s wrong.
But the despairing question I get asked most often is: “What’s the use? However much clean energy we produce, AI data centers will simply soak it all up.” It’s too early in the course of this technology to know anything for sure, but there are a few important answers to that.
The first comes from Amory Lovins, the long-time energy guru who wrote a paper some months ago pointing out that energy demand from AI was highly speculative, an idea he based on… history:
In 1999, the US coal industry claimed that information technology would need half the nation’s electricity by 2020, so a strong economy required far more coal-fired power stations. Such claims were spectacularly wrong but widely believed, even by top officials. Hundreds of unneeded power plants were built, hurting investors. Despite that costly lesson, similar dynamics are now unfolding again.
As Debra Kahn pointed out in Politico a few weeks ago:
So far, data centers have only increased total US power demand by a tiny amount (they make up roughly 4.4 percent of electricity use, which rose 2 percent overall last year).
And it’s possible that even if AI expands as its proponents expect, it will grow steadily more efficient, meaning it would need much less energy than predicted. Lovins again:
For example, NVIDIA’s head of data center product marketing said in September 2024 that in the past decade, “we’ve seen the efficiency of doing inferences in certain language models has increased effectively by 100,000 times. Do we expect that to continue? I think so: There’s lots of room for optimization.” Another NVIDIA comment reckons to have made AI inference (across more models) 45,000× more efficient since 2016, and expects orders-of-magnitude further gains. Indeed, in 2020, NVIDIA’s Ampere chips needed 150 joules of energy per inference; in 2022, their Hopper successors needed just 15; and in 2024, their Blackwell successors needed 24 but also quintupled performance, thus using 31× less energy than Ampere per unit of performance. (Such comparisons depend on complex and wide-ranging assumptions, creating big discrepancies, so another expert interprets the data as up to 25× less energy and 30× better performance, multiplying to 750×.)
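For readers who want to check the arithmetic behind those headline multipliers, here is a quick sketch using only the figures quoted above (the variable names are mine, not NVIDIA's; the 5× performance factor is the "quintupled performance" claim as stated):

```python
# Sanity-check of the per-inference energy figures quoted above.
ampere_joules = 150     # joules per inference, Ampere (2020), as quoted
blackwell_joules = 24   # joules per inference, Blackwell (2024), as quoted
blackwell_perf = 5      # Blackwell also "quintupled performance"

# Blackwell's 24 J buys 5x the work, so its effective cost is
# 24 / 5 = 4.8 J per unit of performance.
effective_blackwell = blackwell_joules / blackwell_perf
improvement = ampere_joules / effective_blackwell
print(round(improvement))  # 31 -- matching the quoted "31x"

# The alternative reading cited in the parenthetical: 25x less energy
# times 30x better performance multiplies to 750x.
print(25 * 30)  # 750
```

Either way one reads the assumptions, the direction is the same: energy per unit of inference work has fallen by orders of magnitude in a few chip generations.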
But that doesn’t mean that the AI industry, and its utility and government partners, won’t try to build ever more generating capacity to supply whatever power needs they project may be coming. In some places they already are: Internet Alley in Virginia has more than 150 large data centers, using a quarter of the state’s power. This is becoming an intense political issue in the Old Dominion. As Dave Weigel reported yesterday, the issue has begun to roil Virginia politics—the GOP candidate for governor follows Gov. Glenn Youngkin in somehow blaming solar energy for rising electricity prices (“the sun goes down”), while the Democratic nominee, Abigail Spanberger, is trying to figure out a response:
Neither nominee has gone as far in curbing growth as many suburban DC legislators and activists want. They see some of the world’s wealthiest companies getting plugged into the grid without locals reaping the benefits. Some Virginia elections have turned into battles over which candidate will be toughest on data centers; other elections have already been lost over them.
“My advice to Abigail has been: Look at where the citizens of Virginia are on the data centers,” said state Sen. Danica Roem, a Democrat who represents part of Prince William County in DC’s growing suburbs. “There are a lot of people willing to be single-issue, split-ticket voters based on this.”
Indeed, it’s shaping up to be the mother of all political issues as the midterms loom—pretty much everyone pays electric rates, and under President Donald Trump they’re starting to skyrocket. The reason isn’t hard to figure out: He’s simultaneously accelerating demand with his support for data center buildout and constricting supply by shutting down cheap solar and wind. In fact, one way of looking at AI is that its main use is as a vehicle to give the fossil fuel industry one last reason to expand.
If this sounds conspiratorial, consider this story from yesterday: John McCarrick, newly hired by industry colossus OpenAI to find energy sources for ChatGPT, is:
an official from the first Trump administration who is a dedicated champion of natural gas.
John McCarrick, the company’s new head of Global Energy Policy, was a senior energy policy advisor in the first Trump administration’s Bureau of Energy Resources in the Department of State while under former Secretaries of State Rex Tillerson and Mike Pompeo.
As deputy assistant secretary for Energy Transformation and the special envoy for International Energy Affairs, McCarrick promoted exports of American liquefied natural gas to Europe in the wake of the Russian invasion of Ukraine, and advocated for Asian countries to invest in natural gas.
The choice to hire McCarrick matches the intentions of OpenAI’s Trump-dominating CEO Sam Altman, who said in a U.S. Senate hearing in May that “in the short term, I think [the future of powering AI] probably looks like more natural gas.”
Sam Altman himself is an acolyte of Peter Thiel, the famous climate denier who recently suggested Greta Thunberg might be the Antichrist. But it’s all of them. In the rush to keep their valuations high, the big AI players are increasingly relying not just on fracked gas but on the very worst version of it. As Bloomberg reported early in the summer:
The trend has sparked an unlikely comeback for a type of gas turbine that long ago fell out of favor for being inefficient and polluting… a technology that’s largely been relegated to the sidelines of power production: small, single cycle natural gas turbines.
In fact, the big suppliers are now companies like Caterpillar, not known for cutting-edge turbine technology; these are small and comparatively dirty units.
(The ultimate example of this is Elon Musk’s Colossus supercomputer in Memphis, a superpolluter, which I wrote about for the New Yorker.) Oh, and it’s not just air pollution. A new threat emerged in the last few weeks, according to Tom Perkins in the Guardian:
Advocates are particularly concerned over the facilities’ use of Pfas gas, or f-gas, which can be potent greenhouse gases, and may mean datacenters’ climate impact is worse than previously thought. Other f-gases turn into a type of dangerous compound that is rapidly accumulating across the globe.
No testing for Pfas air or water pollution has yet been done, and companies are not required to report the volume of chemicals they use or discharge. But some environmental groups are starting to push for state legislation that would require more reporting.
Look, here’s one bottom line: If we actually had to build enormous networks of AI data centers, the obvious, cheap, and clean way to do it would be with lots of solar energy. It goes up fast. As an industry study found as long ago as December of 2024 (an eon in AI time):
Off-grid solar microgrids offer a fast path to power AI datacenters at enormous scale. The tech is mature, the suitable parcels of land in the US Southwest are known, and this solution is likely faster than most, if not all, alternatives.
As one of the country’s leading energy executives said in April:
“Renewables and battery storage are the lowest-cost form of power generation and capacity,” according to NextEra chief executive John Ketchum. “We can build these projects and get new electrons on the grid in 12 to 18 months.”
But we can’t do that because the Trump administration has a corrupt ideological bias against clean energy, the latest example of which came last week when a giant Nevada solar project was cancelled. As Jael Holzman was the first to report:
Esmeralda 7 was supposed to produce a gargantuan 6.2 gigawatts of power, equal to nearly all the power supplied to southern Nevada by the state’s primary public utility. It would do so with a sprawling web of solar panels and batteries across the western Nevada desert. Backed by NextEra Energy, Invenergy, ConnectGen, and other renewables developers, the project was moving forward at a relatively smooth pace under the Biden administration.
But now it’s dead. One result will be higher prices for consumers. Despite everything the administration does, renewables are so cheap and easy that markets just keep choosing them. Beating them back requires policy as perverse as what we’re seeing—jury-rigging tired gas turbines and refitting ancient coal plants. All to power a technology that… seems increasingly like a bubble?
Here we need to get away from energy implications a bit, and just think about the underlying case for AI, and specifically the large language models that are the thing we’re spending so much money and power on. The AI industry is, increasingly, the American economy—it accounts for almost half of US economic growth this year, and an incredible 80% of the expansion of the stock market. As Ruchir Sharma wrote in the FT last week, the US economy is “one big bet” on AI:
The main reason AI is regarded as a magic fix for so many different threats is that it is expected to deliver a significant boost to productivity growth, especially in the US. Higher output per worker would lower the burden of debt by boosting GDP. It would reduce demand for labour, immigrant or domestic. And it would ease inflation risks, including the threat from tariffs, by enabling companies to raise wages without raising prices.
But for this happy picture to come to pass, AI has to actually work, which is to say, do more than help kids cheat on their homework. And there’s been a growing sense in recent months that all is not right on that front. I’ve been following two AI skeptics for a year or so, both on Substack (where, increasingly, in-depth and unorthodox reporting goes to thrive).
The first is Gary Marcus, an AI researcher who has concluded that large language models like ChatGPT are going down a blind alley. If you like to watch video, here is an encapsulation of his main points, published over the weekend. If you prefer that old-fashioned technology of reading (call me a Luddite, but it seems faster and more efficient, and much easier to excerpt), here’s his recent account from the Times explaining why businesses are having trouble finding reasons to pay money for this technology:
Large language models have had their uses, especially for coding, writing, and brainstorming, in which humans are still directly involved. But no matter how large we have made them, they have never been worthy of our trust.
Indeed, an MIT study this year found that 95% of businesses reported no measurable increase in productivity from using AI; the Harvard Business Review, a couple of weeks ago, said AI “workslop” was cratering productivity.
And what that means, in turn, is that there’s no real way to imagine recovering the hundreds of billions, even trillions, of dollars currently being invested in the technology. The keeper of the spreadsheets is the other Substacker, Ed Zitron, who writes extremely long and increasingly exasperated essays looking at the financial lunacy of these “investments” which, remember, underpin the stock market at the moment. Here’s last week’s:
In fact, let me put it a little simpler: All of those data center deals you’ve seen announced are basically bullshit. Even if they get the permits and the money, there are massive physical challenges that cannot be resolved by simply throwing money at them.
Today I’m going to tell you a story of chaos, hubris and fantastical thinking. I want you to come away from this with a full picture of how ridiculous the promises are, and that’s before you get to the cold hard reality that AI fucking sucks.
I’m not pretending this is the final word on this subject. No one knows how it’s all going to work out, but my guess is: badly. Already it’s sending electricity prices soaring and increasing fossil fuel emissions.
But maybe it’s also running into other kinds of walls that will eventually reduce demand. Maybe human beings will decide to be… human. The new Sora “service” launched by OpenAI that allows your AI to generate fake videos, for instance, threatens to undermine the entire business of looking at videos because… what’s the point? If you can’t tell if the guy eating a ridiculously hot chili pepper is real or not, why would you watch? In a broader sense, as John Burn-Murdoch wrote in the FT (and again, how lucky Europe is to have a reputable business newspaper), we may be reaching “peak social media”:
It has gone largely unnoticed that time spent on social media peaked in 2022 and has since gone into steady decline, according to an analysis of the online habits of 250,000 adults in more than 50 countries carried out for the FT by the digital audience insights company GWI.
And this is not just the unwinding of a bump in screen time during pandemic lockdowns—usage has traced a smooth curve up and down over the past decade-plus. Across the developed world, adults aged 16 and older spent an average of two hours and 20 minutes per day on social platforms at the end of 2024, down by almost 10 per cent since 2022. Notably, the decline is most pronounced among the erstwhile heaviest users—teens and 20-somethings.
Which is to say: Perhaps at some point we’ll begin to come to our senses and start using our brains and bodies for the things they were built for: contact with each other, and with the world around us. That’s a lot to ask, but the world can turn in good directions as well as bad. As a final word, there’s this last week from Pope Leo, speaking to a bunch of news executives around the world, and imploring them to cool it with the junk they’re putting out:
Communication must be freed from the misguided thinking that corrupts it, from unfair competition, and the degrading practice of so-called clickbait.
Stay tuned. This story will have a lot to do with how the world turns out.
America's biggest tech firms are facing an increasing backlash over the energy-devouring data centers they are building to power artificial intelligence.
Semafor reported on Monday that opposition to data center construction has been bubbling up in communities across the US, as both Republican and Democratic local officials have been campaigning on promises to clamp down on Silicon Valley's most expensive and ambitious projects.
In Virginia's 30th House of Delegates district, for example, both Republican incumbent Geary Higgins and Democratic challenger John McAuliff have been battling over which one of them is most opposed to AI data center construction in their region.
In an interview with Semafor, McAuliff said that opposition to data centers in the district has swelled up organically, as voters recoil at both the massive amount of resources they consume and the impact that consumption is having on both the environment and their electric bills.
"We’re dealing with the biggest companies on the planet,” he explained. “So we need to make sure Virginians are benefiting off of what they do here, not just paying for it.”
NPR on Tuesday similarly reported that fights over data center construction are happening nationwide, as residents who live near proposed construction sites have expressed concerns about the amount of water and electricity they will consume at the expense of local communities.
"A typical AI data center uses as much electricity as 100,000 households, and the largest under development will consume 20 times more," NPR explained, citing a report from the International Energy Agency. "They also suck up billions of gallons of water for systems to keep all that computer hardware cool."
Data centers' massive water use has been a consistent concern across the US. The Philadelphia Inquirer reported on Monday that residents of the township of East Vincent, Pennsylvania, have seen their wells dry up recently, and they are worried that a proposed data center would significantly exacerbate water shortages.
This is what has been happening in Mansfield, Georgia, a community that has experienced problems with its water supply ever since tech giant Meta began building a data center there in 2018.
As the BBC reported back in August, residents in Mansfield have resorted to buying bottled water because their wells have been delivering murky water, which they said wasn't a problem before the Meta data center came online. Although Meta has commissioned a study that claims to show its data center hasn't affected local groundwater quality, Mansfield resident Beverly Morris told the BBC she isn't buying the company's findings.
"My everyday life, everything has been affected," she said, in reference to the presence of the data center. "I've lived through this for eight years. This is not just today, but it is affecting me from now on."
Anxieties about massive power consumption are also spurring the backlash against data centers, and recent research shows these fears could be well founded.
Mike Jacobs, a senior energy manager at the Union of Concerned Scientists, last month released an analysis estimating that data centers had added billions of dollars to Americans' electric bills across seven different states in recent years. In Virginia alone, for instance, Jacobs found that household electric bills had subsidized data center transmission costs to the tune of $1.9 billion in 2024.
"The big tech companies rushing to build out massive data centers are worth trillions of dollars, yet they’re successfully exploiting an outdated regulatory process to pawn billions of dollars of costs off on families who may never even use their products," Jacobs explained. "People deserve to understand the full extent of how data centers in their communities may affect their lives and wallets. This is a clear case of the public unknowingly subsidizing private companies' profits."
While the backlash to data centers hasn't yet become a national issue, Faiz Shakir, a longtime adviser to US Sen. Bernie Sanders (I-Vt.), predicted in an interview with Semafor that opposition to their construction would be a winning political issue for any politician savvy enough to get ahead of it.
“For any Democrat who wants to think politically, what an opportunity,” he said. “The people are way ahead of the politicians.”
This new AI-centric military-industrial complex threatens to become an unaccountable superpower wielding new levels of control at home and abroad.
President Donald Trump has recently been touring around with an entourage of Big Tech CEOs, touting their massive profits and the prospect of further gains via advances in AI. At a recent event at the White House, First Lady Melania Trump, who is chairing the Artificial Intelligence Education task force, claimed that “[t]he robots are here. Our future is no longer science fiction.”
While much focus has been given to AI in education and the workplace, less has found its way to militarized AI, despite its widespread usage. When thinking of military AI, it’s easy to conjure up images of Terminator, The Matrix, HAL from 2001: A Space Odyssey, or the “Entity” from the newest Mission: Impossible. Doomsday scenarios where AI goes rogue or becomes the driver of a machine-driven war against humanity are common Hollywood stories. As Melania claimed, it’s easy to imagine that “the robots are here.”
But for now, these situations are far from reality. Despite the US military and Big Tech hyping up militarized AI—funding and promising autonomous weapons, drone swarms, precision warfare, and battles at hyperspeed—the truth is that the vision is way beyond the capabilities of current systems.
But that does not mean that militarized AI is not dangerous—quite the opposite. The present danger is that the US government is employing unregulated and untested AI systems to conduct mass surveillance, mass deportations, and targeted crackdowns on dissent. All the while, Big Tech is profiting enormously off of fantasy projects, sold on visions of autonomous warfare, and a desire for authoritarian control. The new AI-centered military-industrial complex is indeed a tremendous threat to democratic society.
US military plans for the modern AI wave go back to 2018 with the Department of Defense (DOD) Artificial Intelligence Strategy. This document set the tone for militarized AI strategy for the years that followed, as well as the foundations for how to pursue it. The 2018 AI Strategy prioritizes a few key points: (1) AI supremacy is essential for national security, (2) AI supremacy is essential for preserving US market supremacy, (3) China and Russia are the main AI competitors threatening US AI supremacy, and (4) the US government must rapidly pursue strategic partnerships with industry and academia to develop and push AI to achieve the prior three goals.
Big Tech companies are gaining tremendous power, both financially and politically, as a result of their partnerships with war-waging states.
In the years that followed, the Army released a 2019 Army Modernization Strategy, similarly casting Russia and China as the main threats. Yet this report went further than the 2018 Strategy, arguing that China and Russia are developing AI-based armed forces, hypersonic missiles, robotics, and swarming technologies. In 2021, one final, albeit massive, AI document was published by the US government: the National Security Commission on AI (NSCAI) report. This temporary commission was headed by Eric Schmidt, the former CEO of Google, who has been deeply involved in AI and military projects since leaving the company. The NSCAI report introduced a new lens to the AI and military equation: focusing on AI enabling informational advantages on the battlefield, including enhanced decision-making, cyber operations, information warfare, and constant monitoring of the battlefield.
True to the goals of the 2018 AI Strategy, the Pentagon has built lasting partnerships with Big Tech to research and develop militarized AI tools. Domestically, major technology companies like Google, Microsoft, Amazon, and Palantir have taken on a host of projects for the government to the tune of hundreds of millions, or sometimes billions, of dollars in contract fees. Crescendo, a research project jointly conducted by the Action Center on Race and the Economy (ACRE), MPower Change, and Little Sis, has calculated that Amazon has netted over $1 billion in DOD contracts and $78 million in Department of Homeland Security (DHS) contracts; Microsoft, $42 billion (DOD) and $226 million (DHS); and Google, $16 million (DOD) and $2 million (DHS).
Moreover, Big Tech has also profited enormously from militarized AI developed for foreign nations, especially Israel. In 2021, Google came under fire for its new $1.2 billion Project Nimbus, a system developed for Israel that uses AI capabilities like object detection and emotion detection to enhance Israeli military operations in the Occupied Territories. Google and Amazon have continued work on Project Nimbus, despite continued protests. Recently, Microsoft also came under fire for reports that its Azure cloud service has been used to store data and surveil Palestinians.
These relationships have fundamentally changed the landscape of the military-industrial complex, adding in a new dimension of AI-powered systems. Big Tech companies are gaining tremendous power, both financially and politically, as a result of their partnerships with war-waging states. Without even considering the actual systems themselves, this dynamic is a dangerous escalation in the domination of tech companies over democratic society.
Despite the enormous funding given to Big Tech to develop militarized AI, the systems in reality are not in line with the most ambitious visions of the government. By and large, the systems developed for domestic use include projects to develop and store massive biometric databases of individuals living in the US, or strengthen immigration policy and deportation enforcement. Police departments across the US have been adopting facial recognition technologies for use in ordinary cases. AI systems have been deployed to surveil social media of international students to deport pro-Palestine activists. It was recently reported that Immigration and Customs Enforcement will be using Israeli spyware to enhance its deportation agenda.
For projects used abroad, the dominant systems appear to be ones that process information for surveillance. Both Maven and Nimbus were designed to use AI for battlefield advantage through information, via mapping social networks or identifying objects that could be potential targets. Palantir has also been in the spotlight for working on surveillance tools.
There is a significant mismatch between the hype featured in US AI plans and Big Tech rhetoric, and the actual uses we observe. In fact, dissatisfaction with this discrepancy appears to be simmering inside of the military itself. In October of 2024, Paul Lushenko, a US Army lieutenant colonel and instructor at the US Army War College, and Keith Carter, an associate professor at the US Naval War College, wrote a piece for the Bulletin of the Atomic Scientists critiquing AI hype in the military. They argue that “tech industry figures have little to no operational experience… they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not the nature, of war.” They contest visions of autonomous weapons and AI-driven warfare, claiming that “the current debate on military AI is largely driven by ‘tech bros’ and other entrepreneurs who stand to profit immensely from the military’s uptake of AI-enabled capabilities.”
Yet even if military applications of AI are not panning out, it does not mean that AI technologies are not being used for control and domination in dangerous ways. This point becomes especially clear if we move from analyzing military AI to militarization through AI. Jessica Katzenstein, in a report for Brown University’s Costs of War project, warns of increases in militarism broadly as a threat that is potentially more pervasive than weapons themselves. She defines militarism as “the use of military language, counterinsurgency tactics, the spread of police paramilitary units, and military-derived ideologies about legitimate and moral uses of violence.”
AI technologies that assist in surveillance, targeting protesters, and deporting immigrants are indeed escalations in US militarism. It seems that government and Big Tech have figured out that these applications are possible and also extremely profitable—a worrying development in the fight for democratic society. Every militarized AI project Big Tech develops contributes to justifications of violence and oppression, especially for those sympathetic to technology and AI culture.
As militarized AI continues to be funded and developed within strengthening government-Big Tech partnerships, we should focus dissent on the AI systems currently terrorizing society while keeping a vigilant eye on likely future escalations. Current militarized AI being used for policing is a consequence of earlier systems developed for use in the wars in Iraq and Afghanistan. Notably, they are modeled after Project Maven (famous for its partial cancellation in the wake of Google employee protests in 2018), which was designed to map “terrorist” networks through surveillance and social network mapping and use older AI technologies to detect military targets of interest through video surveillance.
Adding tech into the equation simply supercharges the capability of the government to police with impunity, as well as enriches and entrenches Big Tech in the process.
The most recent escalation in military AI has been through the Israeli military’s use of automated systems in the Occupied Palestinian Territories. In 2023, Amnesty International reported on the Israeli military implementing automated apartheid in the Occupied Territories via a system called Red Wolf (formerly Blue Wolf). This system used CCTV cameras and soldiers carrying smart devices to build massive biometric and facial scan databases on every Palestinian, subsequently feeding data into a program of movement and rights restrictions. In late 2023 and early 2024, +972 released reports of AI systems used by the Israeli military to target civilians and their families during the early months of the genocide.
The Israeli military attempted to cloak these systems in rhetoric of “precision” and “intelligence,” as well as hunting “Hamas terrorist[s]” who “conduct combat from within ostensibly civilian buildings.” They insisted that these systems allowed them to find and target Hamas terrorists and distinguish them from civilians, a point they have stuck to even after being brought before the highest courts in the world on claims of genocidal intent. Yet the same +972 reports detail, via statements from Israeli soldiers and engineers, that these systems were in fact incapable of distinguishing enemy combatants from civilians (or were ignored in cases where they did somewhat distinguish them) and led to mass death among the noncombatant population.
As Big Tech and military partnerships continue, and the Trump administration increases its authoritarian projects at home, it is prudent to worry about development and deployment of systems similar to Red and Blue Wolf for control of the population. AI systems are already being used for policing universities, immigrants, and those speaking out against the genocide in Palestine. It would not be far-fetched to imagine the biometric databases being developed by Big Tech being used for policing, with police and paramilitaries even surveilling via smart devices, as Israeli soldiers do, and using AI models to engage in mass surveillance and generate targets for repression.
It is also likely that the Trump administration would use a similar logic of precision and smart-targeting while engaging in these authoritarian acts. We must be clear that even in the best case, AI models are deeply biased (as in the examples of facial recognition systems used in policing generating false suspects and failing to detect faces of those with dark skin) and imprecise. Taking a more realistic view, it is likely that systems would be used in a far worse manner, intentionally generating targets for repression with purposely flawed definitions of “security threats” or “domestic terrorists.”
The fundamental projects of US militarism and repression of dissent are illegitimate even before considering the AI dimension. Adding tech into the equation simply supercharges the capability of the government to police with impunity, as well as enriches and entrenches Big Tech in the process. This new AI-centric military-industrial complex threatens to become an unaccountable superpower wielding new levels of control at home and abroad. We must redouble efforts to rein in Big Tech, through building worker power and disrupting recruitment pipelines, before AI-powered militarization becomes too entrenched.