This new AI-centric military-industrial complex threatens to become an unaccountable superpower wielding new levels of control at home and abroad.
President Donald Trump has recently been touring with an entourage of Big Tech CEOs, trumpeting their massive profits and the prospect of further gains from advances in AI. At a recent White House event, First Lady Melania Trump, who chairs the Artificial Intelligence Education Task Force, claimed that “[t]he robots are here. Our future is no longer science fiction.”
While much attention has been paid to AI in education and the workplace, less has been given to militarized AI, despite its widespread use. When thinking of military AI, it’s easy to conjure up images of Terminator, The Matrix, HAL from 2001: A Space Odyssey, or the “Entity” from the newest Mission: Impossible. Doomsday scenarios in which AI goes rogue or drives a machine war against humanity are common Hollywood stories. As Melania Trump claimed, it’s easy to imagine that “the robots are here.”
But for now, these scenarios are far from reality. Despite the US military and Big Tech hyping militarized AI—funding and promising autonomous weapons, drone swarms, precision warfare, and battles at hyperspeed—that vision lies far beyond the capabilities of current systems.
But that does not mean militarized AI is not dangerous—quite the opposite. The present danger is that the US government is employing unregulated and untested AI systems to conduct mass surveillance, mass deportations, and targeted crackdowns on dissent. All the while, Big Tech is profiting enormously from fantasy projects sold on visions of autonomous warfare and a desire for authoritarian control. The new AI-centered military-industrial complex is indeed a tremendous threat to democratic society.
US military plans for the modern AI wave go back to the 2018 Department of Defense (DOD) Artificial Intelligence Strategy. This document set the tone for militarized AI strategy in the years that followed, as well as the foundations for how to pursue it. The 2018 AI Strategy prioritizes a few key points: (1) AI supremacy is essential for national security, (2) AI supremacy is essential for preserving US market supremacy, (3) China and Russia are the main AI competitors threatening US AI supremacy, and (4) the US government must rapidly pursue strategic partnerships with industry and academia to develop AI in service of the first three goals.
Big Tech companies are gaining tremendous power, both financially and politically, as a result of their partnerships with war-waging states.
In the years following, the Army followed suit with its 2019 Army Modernization Strategy, similarly identifying Russia and China as the main threats. Yet this report went further than the 2018 Strategy, arguing that China and Russia are developing AI-based armed forces, hypersonic missiles, robotics, and swarming technologies. In 2021, one final, albeit massive, AI document was published by the US government: the report of the National Security Commission on AI (NSCAI). This temporary commission was headed by Eric Schmidt, the former CEO of Google, who has been deeply involved in AI and military projects since leaving the company. The NSCAI report introduced a new lens on the military AI equation: AI as an enabler of informational advantage, including enhanced decision-making, cyber operations, information warfare, and constant monitoring of the battlefield.
True to the goals of the 2018 AI Strategy, the Pentagon has built lasting partnerships with Big Tech to research and develop militarized AI tools. Domestically, major technology companies like Google, Microsoft, Amazon, and Palantir have taken on a host of government projects to the tune of hundreds of millions, and sometimes billions, of dollars in contract fees. Crescendo, a research project jointly conducted by the Action Center on Race and the Economy (ACRE), MPower Change, and LittleSis, has calculated that Amazon has netted over $1 billion in DOD and $78 million in Department of Homeland Security (DHS) contracts; Microsoft, $42 billion (DOD) and $226 million (DHS); and Google, $16 million (DOD) and $2 million (DHS).
Moreover, Big Tech has also profited enormously from militarized AI developed for foreign nations, especially Israel. In 2021, Google came under fire for its $1.2 billion Project Nimbus, a contract to supply Israel with AI capabilities like object detection and emotion detection to enhance military operations in the Occupied Territories. Google and Amazon have continued work on Project Nimbus despite ongoing protests. Microsoft, too, recently came under fire over reports that its Azure cloud service has been used to store data on and surveil Palestinians.
These relationships have fundamentally changed the landscape of the military-industrial complex, adding a new dimension of AI-powered systems. Big Tech companies are gaining tremendous power, both financially and politically, as a result of their partnerships with war-waging states. Even without considering the systems themselves, this dynamic is a dangerous escalation in tech companies’ domination of democratic society.
Despite the enormous funding given to Big Tech to develop militarized AI, the systems that exist fall short of the government’s most ambitious visions. By and large, the systems developed for domestic use are projects to build and store massive biometric databases of people living in the US, or to strengthen immigration and deportation enforcement. Police departments across the US have been adopting facial recognition technologies for routine cases. AI systems have been deployed to surveil the social media of international students in order to deport pro-Palestine activists. It was recently reported that Immigration and Customs Enforcement will be using Israeli spyware to advance its deportation agenda.
For projects used abroad, the dominant systems appear to be those that process information for surveillance. Both Maven and Nimbus were designed to use AI for informational advantage on the battlefield, whether by mapping social networks or identifying objects as potential targets. Microsoft’s Azure cloud service, as noted above, has reportedly been used to store data on and surveil Palestinians, and Palantir has also been in the spotlight for its work on surveillance tools.
There is a significant mismatch between the hype featured in US AI plans and Big Tech rhetoric and the actual uses we observe. In fact, dissatisfaction with this discrepancy appears to be simmering inside the military itself. In October 2024, Paul Lushenko, a US Army lieutenant colonel and instructor at the US Army War College, and Keith Carter, an associate professor at the US Naval War College, wrote a piece for the Bulletin of the Atomic Scientists critiquing AI hype in the military. They argue that “tech industry figures have little to no operational experience… they cannot draw from first-hand accounts of combat to further justify arguments that AI is changing the character, if not the nature, of war.” They contest visions of autonomous weapons and AI-driven warfare, claiming that “the current debate on military AI is largely driven by ‘tech bros’ and other entrepreneurs who stand to profit immensely from the military’s uptake of AI-enabled capabilities.”
Yet even if military applications of AI are not panning out, that does not mean AI technologies are not being used for control and domination in dangerous ways. The point becomes especially clear if we move from analyzing military AI to militarization through AI. Jessica Katzenstein, in a report for Brown University’s Costs of War project, warns that broader militarism is a threat potentially more pervasive than weapons themselves. She defines militarism as “the use of military language, counterinsurgency tactics, the spread of police paramilitary units, and military-derived ideologies about legitimate and moral uses of violence.”
AI technologies that assist in surveillance, targeting protesters, and deporting immigrants are indeed escalations in US militarism. The government and Big Tech appear to have discovered that these applications are both feasible and extremely profitable—a worrying development in the fight for democratic society. Every militarized AI project Big Tech develops adds to the justifications for violence and oppression, especially among those sympathetic to technology and AI culture.
As militarized AI continues to be funded and developed within strengthening government-Big Tech partnerships, we should focus dissent on the AI systems currently terrorizing society while keeping a vigilant eye on likely future escalations. The militarized AI now used for policing descends from systems developed for the wars in Iraq and Afghanistan, most notably Project Maven (famous for its partial cancellation in the wake of Google employee protests in 2018), which was designed to map “terrorist” networks through surveillance and social network analysis and to use older AI technologies to detect military targets of interest through video surveillance.
Adding tech into the equation simply supercharges the government’s capability to police with impunity, while enriching and entrenching Big Tech in the process.
The most recent escalation in military AI has come through the Israeli military’s use of automated systems in the Occupied Palestinian Territories. In 2023, Amnesty International reported on the Israeli military implementing automated apartheid in the Occupied Territories via a system called Red Wolf, an outgrowth of the earlier Blue Wolf. This system used CCTV cameras and soldiers carrying smart devices to build massive biometric and facial-scan databases on every Palestinian, subsequently feeding the data into a program of movement and rights restrictions. In late 2023 and early 2024, +972 Magazine released reports on AI systems used by the Israeli military to target civilians and their families during the early months of the genocide.
The Israeli military attempted to cloak these systems in rhetoric of “precision” and “intelligence,” and of hunting “Hamas terrorist[s]” who “conduct combat from within ostensibly civilian buildings.” They insisted that these systems allowed them to find and target Hamas fighters while distinguishing them from civilians, a point they have maintained even while facing claims of genocidal intent before the highest courts in the world. Yet the same +972 reports detail, via statements from Israeli soldiers and engineers, that these systems were in fact incapable of distinguishing combatants from civilians (or were simply ignored in the cases where they did) and led to mass death among the noncombatant population.
As Big Tech-military partnerships continue and the Trump administration expands its authoritarian projects at home, it is prudent to worry about the development and deployment of systems similar to Red and Blue Wolf for control of the population. AI systems are already being used to police universities, immigrants, and those speaking out against the genocide in Palestine. It would not be far-fetched to imagine the biometric databases Big Tech is developing being used for policing, with police and paramilitaries surveilling via smart devices, as Israeli soldiers do, and using AI models to conduct mass surveillance and generate targets for repression.
It is also likely that the Trump administration would invoke a similar logic of precision and smart targeting while engaging in these authoritarian acts. We must be clear that even in the best case, AI models are deeply biased and imprecise (facial recognition systems used in policing, for example, have generated false suspects and failed to detect the faces of people with dark skin). Taking a more realistic view, such systems would likely be used in a far worse manner, intentionally generating targets for repression under purposely flawed definitions of “security threats” or “domestic terrorists.”
The fundamental projects of US militarism and repression of dissent are illegitimate even before considering the AI dimension. Adding tech into the equation simply supercharges the government’s capability to police with impunity, while enriching and entrenching Big Tech in the process. This new AI-centric military-industrial complex threatens to become an unaccountable superpower wielding new levels of control at home and abroad. We must redouble efforts to rein in Big Tech, by building worker power and disrupting recruitment pipelines, before AI-powered militarization becomes too entrenched.