SUBSCRIBE TO OUR FREE NEWSLETTER
Daily news & progressive opinion—funded by the people, not the corporations—delivered straight to your inbox.
5
#000000
#FFFFFF
To donate by check, phone, or other method, see our More Ways to Give page.
Daily news & progressive opinion—funded by the people, not the corporations—delivered straight to your inbox.
A screen shows a demonstration of the Anduril Lattice battlefield sofware during the Security Equipment International at London Excel on September 10, 2025 in London, England.
As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.
AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in the Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case World War III.
The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those who we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action in terms of loudly sounding the necessary alarms. Lately, however, and offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.
To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”
More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.
No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded.
The acceleration of frenzied AI development has now been green-lit by the Trump administration with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made their position crystal clear about not attempting in any way to slow down progress by developing AI guardrails and regulation even to the point of attempting to preclude states from enacting their own regulation as part of the so called “Big Beautiful Bill.”
If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:
Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”
Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development was a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative act of 2020. The COO of Open AI, the company that created ChatGPT, openly admitted to Time magazine that government funding has been the main driver of AI development for many years.
This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.
The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:
The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.
Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.
Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:
In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…
Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.
The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.
That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.
No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.
Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?
Dear Common Dreams reader, The U.S. is on a fast track to authoritarianism like nothing I've ever seen. Meanwhile, corporate news outlets are utterly capitulating to Trump, twisting their coverage to avoid drawing his ire while lining up to stuff cash in his pockets. That's why I believe that Common Dreams is doing the best and most consequential reporting that we've ever done. Our small but mighty team is a progressive reporting powerhouse, covering the news every day that the corporate media never will. Our mission has always been simple: To inform. To inspire. And to ignite change for the common good. Now here's the key piece that I want all our readers to understand: None of this would be possible without your financial support. That's not just some fundraising cliche. It's the absolute and literal truth. We don't accept corporate advertising and never will. We don't have a paywall because we don't think people should be blocked from critical news based on their ability to pay. Everything we do is funded by the donations of readers like you. Will you donate now to help power the nonprofit, independent reporting of Common Dreams? Thank you for being a vital member of our community. Together, we can keep independent journalism alive when it’s needed most. - Craig Brown, Co-founder |
AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in the Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case World War III.
The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those who we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action in terms of loudly sounding the necessary alarms. Lately, however, and offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.
To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”
More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.
No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded.
The acceleration of frenzied AI development has now been green-lit by the Trump administration with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made their position crystal clear about not attempting in any way to slow down progress by developing AI guardrails and regulation even to the point of attempting to preclude states from enacting their own regulation as part of the so called “Big Beautiful Bill.”
If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:
Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”
Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development was a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative act of 2020. The COO of Open AI, the company that created ChatGPT, openly admitted to Time magazine that government funding has been the main driver of AI development for many years.
This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.
The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:
The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.
Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.
Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:
In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…
Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.
The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.
That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.
No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.
Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?
AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in the Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case World War III.
The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those who we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action in terms of loudly sounding the necessary alarms. Lately, however, and offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.
To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”
More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.
No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded.
The acceleration of frenzied AI development has now been green-lit by the Trump administration with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made their position crystal clear about not attempting in any way to slow down progress by developing AI guardrails and regulation even to the point of attempting to preclude states from enacting their own regulation as part of the so called “Big Beautiful Bill.”
If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:
Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”
Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development was a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative act of 2020. The COO of Open AI, the company that created ChatGPT, openly admitted to Time magazine that government funding has been the main driver of AI development for many years.
This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.
The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:
The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.
Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.
Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:
In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…
Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.
The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.
That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.
No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.
Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?