

A number of human rights and humanitarian organizations have even launched the Campaign to Stop Killer Robots with the goal of adopting an international ban on the development and deployment of fully autonomous weapons systems. (Photo: Sharron Ward/Campaign to Stop Killer Robots/Facebook)
There could be no more consequential decision than launching atomic weapons and possibly triggering a nuclear holocaust. President John F. Kennedy faced just such a moment during the Cuban Missile Crisis of 1962 and, after envisioning the catastrophic outcome of a U.S.-Soviet nuclear exchange, he came to the conclusion that the atomic powers should impose tough barriers on the precipitous use of such weaponry. Among the measures he and other global leaders adopted were guidelines requiring that senior officials, not just military personnel, have a role in any nuclear-launch decision.
That was then, of course, and this is now. And what a now it is! With artificial intelligence, or AI, soon to play an ever-increasing role in military affairs, as in virtually everything else in our lives, the role of humans, even in nuclear decision-making, is likely to be progressively diminished. In fact, in some future AI-saturated world, it could disappear entirely, leaving machines to determine humanity's fate.
This isn't idle conjecture based on science fiction movies or dystopian novels. It's all too real, all too here and now, or at least here and soon to be. As the Pentagon and the military commands of the other great powers look to the future, what they see is a highly contested battlefield -- some have called it a "hyperwar" environment -- where vast swarms of AI-guided robotic weapons will fight each other at speeds far exceeding the ability of human commanders to follow the course of a battle. At such a time, it is thought, commanders might increasingly be forced to rely on ever more intelligent machines to make decisions on what weaponry to employ when and where. At first, this may not extend to nuclear weapons, but as the speed of battle increases and the "firebreak" between them and conventional weaponry shrinks, it may prove impossible to prevent the creeping automatization of even nuclear-launch decision-making.
Such an outcome can only grow more likely as the U.S. military completes a top-to-bottom realignment intended to transform it from a fundamentally small-war, counter-terrorist organization back into one focused on peer-against-peer combat with China and Russia. This shift was mandated by the Trump administration's December 2017 National Security Strategy. Rather than focusing mainly on weaponry and tactics aimed at combating poorly armed insurgents in never-ending small-scale conflicts, the American military is now being redesigned to fight increasingly well-equipped Chinese and Russian forces in multi-dimensional (air, sea, land, space, cyberspace) engagements involving multiple attack systems (tanks, planes, missiles, rockets) operating with minimal human oversight.
"The major effect/result of all these capabilities coming together will be an innovation warfare has never seen before: the minimization of human decision-making in the vast majority of processes traditionally required to wage war," observed retired Marine General John Allen and AI entrepreneur Amir Hussain. "In this coming age of hyperwar, we will see humans providing broad, high-level inputs while machines do the planning, executing, and adapting to the reality of the mission and take on the burden of thousands of individual decisions with no additional input."
That "minimization of human decision-making" will have profound implications for the future of combat. Ordinarily, national leaders seek to control the pace and direction of battle to ensure the best possible outcome, even if that means halting the fighting to avoid greater losses or prevent humanitarian disaster. Machines, even very smart machines, are unlikely to be capable of assessing the social and political context of combat, so activating them might well lead to situations of uncontrolled escalation.
It may be years, possibly decades, before machines replace humans in critical military decision-making roles, but that time is on the horizon. When it comes to controlling AI-enabled weapons systems, as Secretary of Defense Jim Mattis put it in a recent interview, "For the near future, there's going to be a significant human element. Maybe for 10 years, maybe for 15. But not for 100."
Why AI?
Even five years ago, there were few in the military establishment who gave much thought to the role of AI or robotics when it came to major combat operations. Yes, remotely piloted aircraft (RPA), or drones, have been widely used in Africa and the Greater Middle East to hunt down enemy combatants, but those are largely ancillary (and sometimes CIA) operations, intended to relieve pressure on U.S. commandos and allied forces facing scattered bands of violent extremists. In addition, today's RPAs are still controlled by human operators, even if from remote locations, and make little use, as yet, of AI-powered target-identification and attack systems. In the future, however, such systems are expected to populate much of any battlespace, replacing humans in many or even most combat functions.
To speed this transformation, the Department of Defense is already spending hundreds of millions of dollars on AI-related research. "We cannot expect success fighting tomorrow's conflicts with yesterday's thinking, weapons, or equipment," Mattis told Congress in April. To ensure continued military supremacy, he added, the Pentagon would have to focus more "investment in technological innovation to increase lethality, including research into advanced autonomous systems, artificial intelligence, and hypersonics."
Why the sudden emphasis on AI and robotics? It begins, of course, with the astonishing progress made by the tech community -- much of it based in Silicon Valley, California -- in enhancing AI and applying it to a multitude of functions, including image identification and voice recognition. One of those applications, the Alexa Voice Service, is the computer system behind Amazon's smart speakers, which can not only interpret your commands but use the Internet to do your bidding. ("Alexa, play classical music." "Alexa, tell me today's weather." "Alexa, turn the lights on.") Another is the kind of self-driving vehicle technology that is expected to revolutionize transportation.
Artificial intelligence is an "omni-use" technology, explain analysts at the Congressional Research Service, a non-partisan information agency, "as it has the potential to be integrated into virtually everything." It's also a "dual-use" technology in that it can be applied as aptly to military as civilian purposes. Self-driving cars, for instance, rely on specialized algorithms to process data from an array of sensors monitoring traffic conditions and so decide which routes to take, when to change lanes, and so on. The same technology and reconfigured versions of the same algorithms will one day be applied to self-driving tanks set loose on future battlefields. Similarly, someday drone aircraft -- without human operators in distant locales -- will be capable of scouring a battlefield for designated targets (tanks, radar systems, combatants), determining that something they "see" is indeed on their target list, and "deciding" to launch a missile at it.
It doesn't take a particularly nimble brain to realize why Pentagon officials would seek to harness such technology: they think it will give them a significant advantage in future wars. Any full-scale conflict between the U.S. and China or Russia (or both) would, to say the least, be extraordinarily violent, with possibly hundreds of warships and many thousands of aircraft and armored vehicles all focused in densely packed battlespaces. In such an environment, speed in decision-making, deployment, and engagement will undoubtedly prove a critical asset. Given future super-smart, precision-guided weaponry, whoever fires first will have a better chance of success, or even survival, than a slower-firing adversary. Humans can move swiftly in such situations when forced to do so, but future machines will act far more swiftly, while keeping track of more battlefield variables.
As General Paul Selva, vice chairman of the Joint Chiefs of Staff, told Congress in 2017,
"It is very compelling when one looks at the capabilities that artificial intelligence can bring to the speed and accuracy of command and control and the capabilities that advanced robotics might bring to a complex battlespace, particularly machine-to-machine interaction in space and cyberspace, where speed is of the essence."
Aside from aiming to exploit AI in the development of its own weaponry, U.S. military officials are intensely aware that their principal adversaries are also pushing ahead in the weaponization of AI and robotics, seeking novel ways to overcome America's advantages in conventional weaponry. According to the Congressional Research Service, for instance, China is investing heavily in the development of artificial intelligence and its application to military purposes. Though lacking the tech base of either China or the United States, Russia is similarly rushing to develop military AI and robotics. Any significant Chinese or Russian lead in such emerging technologies that might threaten this country's military superiority would be intolerable to the Pentagon.
Not surprisingly then, in the fashion of past arms races (from the pre-World War I development of battleships to Cold War nuclear weaponry), an "arms race in AI" is now underway, with the U.S., China, Russia, and other nations (including Britain, Israel, and South Korea) seeking to gain a critical advantage in the weaponization of artificial intelligence and robotics. Pentagon officials regularly cite Chinese advances in AI when seeking congressional funding for their projects, just as Chinese and Russian military officials undoubtedly cite American ones to fund their own pet projects. In true arms race fashion, this dynamic is already accelerating the pace of development and deployment of AI-empowered systems and ensuring their future prominence in warfare.
Command and Control
As this arms race unfolds, artificial intelligence will be applied to every aspect of warfare, from logistics and surveillance to target identification and battle management. Robotic vehicles will accompany troops on the battlefield, carrying supplies and firing on enemy positions; swarms of armed drones will attack enemy tanks, radars, and command centers; unmanned undersea vehicles, or UUVs, will pursue both enemy submarines and surface ships. At the outset of combat, all these instruments of war will undoubtedly be controlled by humans. As the fighting intensifies, however, communications between headquarters and the front lines may well be lost and such systems will, according to military scenarios already being written, be on their own, empowered to take lethal action without further human intervention.
Most of the debate over the application of AI and its future battlefield autonomy has been focused on the morality of empowering fully autonomous weapons -- sometimes called "killer robots" -- with a capacity to make life-and-death decisions on their own, or on whether the use of such systems would violate the laws of war and international humanitarian law. Such statutes require that war-makers be able to distinguish between combatants and civilians on the battlefield and spare the latter from harm to the greatest extent possible. Advocates of the new technology claim that machines will indeed become smart enough to sort out such distinctions for themselves, while opponents insist that they will never prove capable of making critical distinctions of that sort in the heat of battle and would be unable to show compassion when appropriate. A number of human rights and humanitarian organizations have even launched the Campaign to Stop Killer Robots with the goal of adopting an international ban on the development and deployment of fully autonomous weapons systems.
In the meantime, a perhaps even more consequential debate is emerging in the military realm over the application of AI to command-and-control (C2) systems -- that is, to ways senior officers will communicate key orders to their troops. Generals and admirals always seek to maximize the reliability of C2 systems to ensure that their strategic intentions will be fulfilled as thoroughly as possible. In the current era, such systems are deeply reliant on secure radio and satellite communications systems that extend from headquarters to the front lines. However, strategists worry that, in a future hyperwar environment, such systems could be jammed or degraded just as the speed of the fighting begins to exceed the ability of commanders to receive battlefield reports, process the data, and dispatch timely orders. Consider this a functional definition of the infamous fog of war multiplied by artificial intelligence -- with defeat a likely outcome. The answer to such a dilemma for many military officials: let the machines take over these systems, too. As a report from the Congressional Research Service puts it, in the future "AI algorithms may provide commanders with viable courses of action based on real-time analysis of the battle-space, which would enable faster adaptation to unfolding events."
And someday, of course, it's possible to imagine that the minds behind such decision-making would cease to be human ones. Incoming data from battlefield information systems would instead be channeled to AI processors focused on assessing imminent threats and, given the time constraints involved, executing what they deemed the best options without human instructions.
Pentagon officials deny that any of this is the intent of their AI-related research. They acknowledge, however, that they can at least imagine a future in which other countries delegate decision-making to machines and the U.S. sees no choice but to follow suit, lest it lose the strategic high ground. "We will not delegate lethal authority for a machine to make a decision," then-Deputy Secretary of Defense Robert Work told Paul Scharre of the Center for a New American Security in a 2016 interview. But he added the usual caveat: in the future, "we might be going up against a competitor that is more willing to delegate authority to machines than we are and as that competition unfolds, we'll have to make decisions about how to compete."
The Doomsday Decision
The assumption in most of these scenarios is that the U.S. and its allies will be engaged in a conventional war with China and/or Russia. Keep in mind, then, that the very nature of such a future AI-driven hyperwar will only increase the risk that conventional conflicts could cross a threshold that's never been crossed before: an actual nuclear war between two nuclear states. And should that happen, those AI-empowered C2 systems could, sooner or later, find themselves in a position to launch atomic weapons.
Such a danger arises from the convergence of multiple advances in technology: not just AI and robotics, but the development of conventional strike capabilities like hypersonic missiles capable of flying at five or more times the speed of sound, electromagnetic rail guns, and high-energy lasers. Such weaponry, though non-nuclear, when combined with AI surveillance and target-identification systems, could even attack an enemy's mobile retaliatory weapons and so threaten to eliminate its ability to launch a response to any nuclear attack. Given such a "use 'em or lose 'em" scenario, any power might be inclined not to wait but to launch its nukes at the first sign of possible attack, or even, fearing loss of control in an uncertain, fast-paced engagement, delegate launch authority to its machines. And once that occurred, it could prove almost impossible to prevent further escalation.
The question then arises: Would machines make better decisions than humans in such a situation? They certainly are capable of processing vast amounts of information over brief periods of time and weighing the pros and cons of alternative actions in a thoroughly unemotional manner. But machines also make military mistakes and, above all, they lack the ability to reflect on a situation and conclude: Stop this madness. No battle advantage is worth global human annihilation.
As Paul Scharre put it in Army of None, a new book on AI and warfare, "Humans are not perfect, but they can empathize with their opponents and see the bigger picture. Unlike humans, autonomous weapons would have no ability to understand the consequences of their actions, no ability to step back from the brink of war."
So maybe we should think twice about giving some future militarized version of Alexa the power to launch a machine-made Armageddon.
The 16 groups urge the agency "to uphold its obligation to promote competition, localism, and diversity in the U.S. media."
A coalition of 16 civil liberties, press freedom, and labor groups this week urged U.S. President Donald Trump's administration to abandon any plans to loosen media ownership restrictions and warned against opening the floodgates to further corporate consolidation.
Public comments on the National Television Multiple Ownership Rule were due to the Federal Communications Commission by Monday, when the coalition wrote to the FCC about the 39% national audience reach cap for U.S. broadcast media conglomerates and warned that further mergers could harm "the independence of the nation's press and the vitality of its local journalism."
"In our experience, the past 30 years of media consolidation have not fostered a better environment for local news and information. The Telecommunications Act of 1996 radically changed the radio and television broadcasting marketplace, causing rapid consolidation of radio station ownership," the coalition detailed. "Since the 1996 act, lawmakers and regulators have further relaxed television ownership limits, spurring further waves of station consolidation, the full harms of which are being felt by local newsrooms and the communities they serve."
The coalition highlighted how this consolidation has spread "across the entire news media ecosystem, including newspapers, online news outlets, and even online platforms," and led to "newsroom layoffs and closures, and the related spread of 'news deserts' across the country."
"Over a similar period, the economic model for news production has been undercut by technology platforms owned by the likes of Alphabet, Amazon, and Meta, which have offered an advertising model for better targeting readers, listeners, and viewers, and attracted much of the advertising revenue that once funded local journalism," the coalition noted.
While "lobbyists working for large news media companies argue that further consolidation is the economic answer, giving them the size necessary to compete with Big Tech," the letter argues, "in fact, the opposite appears to be true."
We object."Handing even more control of the public airwaves to a handful of capitulating broadcast conglomerates undermines press freedom." - S. Derek TurnerOur statement: https://www.freepress.net/news/free-press-slams-trump-fccs-broadcast-ownership-proceeding-wildly-dangerous-democracy
[image or embed]
— Free Press (@freepress.bsky.social) August 5, 2025 at 12:58 PM
The letter points out that a recent analysis from Free Press—one of the groups that signed the letter—found a "pervasive pattern of editorial compromise and capitulation" at 35 of the largest media and tech companies in the United States, "as owners of massive media conglomerates seek to curry favor with political leadership."
That analysis—released last week alongside a Media Capitulation Index—makes clear that "the interests of wealthy media owners have become so inextricably entangled with government officials that they've limited their news operations' ability to act as checks against abuses of political power," according to the coalition.
In addition to warning about further consolidation and urging the FCC "to uphold its obligation to promote competition, localism, and diversity in the U.S. media," the coalition argued that the agency actually "lacks the authority to change the national audience reach cap," citing congressional action in 2004.
Along with Free Press co-CEO Craig Aaron, the letter is signed by leaders at Fairness and Accuracy in Reporting, National Association of Broadcast Employees and Technicians - Communications Workers of America, National Coalition Against Censorship, Local Independent Online News Publishers, Media Freedom Foundation, NewsGuild-CWA, Open Markets Institute, Park Center for Independent Media, Project Censored, Reporters Without Borders USA, Society of Professional Journalists, Tully Center for Free Speech, Whistleblower and Source Protection Program at ExposeFacts, and Writers Guild of America East and West.
Free Press also filed its own comments. In a related Tuesday statement, senior economic and policy adviser S. Derek Turner, who co-authored the filing, accused FCC Chair Brendan Carr of "placing a for-sale sign on the public airwaves and inviting media companies to monopolize the local news markets as long as they agree to display political fealty to Donald Trump and the MAGA movement."
"The price broadcast companies have to pay for consolidating further is bending the knee, and the line starts outside of the FCC chairman's office," said Turner. "Trump's autocratic demands seemingly have no bounds, and Carr apparently has no qualms about satisfying them. Carr's grossly partisan and deeply hypocritical water-carrying for Trump has already stained the agency, making it clear that this FCC is no longer independent, impartial, or fair."
"The war in Gaza is contrary to international law and is causing terrible suffering," said Norway's finance minister.
The Norwegian government may seek to divest its state investment fund from Israeli companies participating in the illegal occupation of the West Bank or the genocide in Gaza.
Norway's Government Pension Fund Global is worth $2 trillion and is considered the largest sovereign wealth fund in the world.
On Tuesday, following the latest reports on the "worsened situation" in Gaza—which includes mass starvation as a result of Israel's blockade of humanitarian aid—Norway's finance minister, Jens Stoltenberg, ordered the fund's ethics council to review the fund's investments in Israeli companies.
The fund came under renewed scrutiny from activists and trade unions this week after the Norwegian newspaper Aftenposten reported on the fund's investments in the Israeli company Bet Shemesh Engines Holdings, which maintains the engines of fighter jets and attack helicopters that have been used to carry out devastating attacks on Gaza.
Although Norway's center-left government had determined in November 2023 that Israel's warfare in the Gaza Strip was violating international law, the fund nevertheless continued to increase its shares in Bet Shemesh throughout 2024, building a stake of more than $15 million, or 2.1% of the company.
Norwegian Prime Minister Jonas Gahr Støre said he was "very concerned" by the report and ordered Stoltenberg to contact the country's central bank to investigate.
"The war in Gaza is contrary to international law and is causing terrible suffering, so it is understandable that questions are being raised about the fund's investments in Bet Shemesh Engines," Stoltenberg said.
Norway's sovereign wealth fund has been described by Amnesty International as "an international leader in the environmental, social, and governance investment field."
Its ethics policy has strict guidelines against investing in companies that cause "serious violations of fundamental ethical norms," including "systematic human rights violations" and "violations of the rights of individuals in situations of war or conflict."
Following these guidelines, it has divested from some companies involved in the illegal Israeli occupation of Palestine.
In 2009, it dropped Israel's largest arms company, Elbit Systems, for supplying surveillance technology used to patrol the separation wall, commonly called the "apartheid wall," that fences off the West Bank from Israel proper.
And in 2024, following the International Court of Justice's advisory opinion that Israel was committing the crime of apartheid, it also cut off Bezeq, Israel's largest telecommunications company, which supplies telecommunications equipment to illegal West Bank settlements. It later did the same for the Israeli energy company Paz Retail and Energy Ltd.
However, as Amnesty described in May, the fund remains "invested in several companies listed in the U.N. database of businesses involved in the unlawful occupation of Palestine."
Last month, a report by Francesca Albanese, the U.N. special rapporteur on human rights in the occupied Palestinian territories, revealed that Norway's sovereign wealth fund had increased its investments in Israeli companies by 32% since October 2023.
Albanese found that 6.9% of the fund's total value was invested in companies "involved in supporting or enabling egregious violations of international law in the occupied Palestinian territory."
In a letter to the Norwegian government sent in April, she listed dozens of investments, including Caterpillar, whose bulldozers have been used to demolish homes in the West Bank and attack Palestinians in Gaza; several Israeli banks that finance illegal settlements; and other military and technology firms, such as Hewlett-Packard and Motorola, whose technologies have been used for surveillance and torture.
"I found Norwegian politicians, trade unions, media, and civil society to be generally more educated, aware, and principled about Palestine-Israel than many of their peers in Europe," Albanese wrote on X earlier this year. "That is why I can't believe the Norwegian Oil Fund and Pension Fund is still so involved in Israel's unlawful occupation. This must end, totally and unconditionally, like Israel's occupation itself—no more excuses."
"The immediate economic losses projected here are just the tip of the iceberg," explained the CEO of the NAFSA: Association of International Educators.
The number of international students enrolling at U.S. colleges looks set to plummet this fall, according to scenario modeling by an organization that advocates on behalf of academic exchange worldwide.
Inside Higher Ed reported on Tuesday that new data from the group, NAFSA: Association of International Educators, found that American colleges could lose up to 150,000 international students in the coming academic year, reflecting a decline of up to 40% in new foreign enrollment. The projected drop is so large that it could reduce overall international enrollment by 15%.
NAFSA cited multiple factors behind the projected decline: a three-week period between late May and mid-June during which student visa interviews were suspended altogether; limited appointments available for students in countries such as India, China, Nigeria, and Japan; and new visa restrictions on 19 countries stemming from an executive order U.S. President Donald Trump signed in early June.
NAFSA projected that the consequences of losing 150,000 international students this fall would be grim not just for universities but also for the American economy as a whole. In all, the association found that a drop in students of that magnitude "would deprive local economies of $7 billion in spending and more than 60,000 jobs."
Fanta Aw, NAFSA's executive director and CEO, emphasized that the United States would suffer even greater long-term damage from its policies discouraging the enrollment of international students.
"The immediate economic losses projected here are just the tip of the iceberg," Aw explained. "International students drive innovation, advance America's global competitiveness, and create research and academic opportunities in our local colleges that will benefit our country for generations. For the United States to succeed in the global economy, we must keep our doors open to students from around the world."
Trump and his administration have been going to war with the American higher education system by withholding federal research funding from universities unless they agree to a list of demands such as eliminating diversity, equity, and inclusion programs, and reviewing their policies for accepting international students.
The administration has also cracked down on international students who are already in the U.S. and has detained them and threatened them with deportation for a wide range of purported offenses such as writing student newspaper editorials critical of the Israeli government, entering the country with undeclared frog embryos, and having a single decade-old marijuana possession charge.