In the dark, in the silence, in a blink, the age
of the autonomous killer robot has arrived. It is happening. They are
deployed. And - at their current rate of acceleration - they will
become the dominant method of war for rich countries in the 21st
century. These facts sound, at first, preposterous. The idea of
machines that are designed to whirr out into the world and make their
own decisions to kill is an old sci-fi fantasy: picture a mechanical
Arnold Schwarzenegger blasting a truck and muttering: "Hasta la vista,
baby." But we live in a world of such whooshing technological
transformation that the concept has leaped in just five years from the
cinema screen to the battlefield - with barely anyone back home
noticing.
When the US invaded Iraq in 2003, they had no robots as part of their force.
By the end of 2005, they had 2,400. Today, they have 12,000, carrying out
33,000 missions a year. A report by the US Joint Forces Command says
autonomous robots will be the norm on the battlefield within 20 years.
The Nato forces now depend on a range of killer robots, largely designed by
the British Ministry of Defence labs privatised by Tony Blair in 2001. Every
time you hear about a "drone attack" against Afghanistan or
Pakistan, that's an unmanned robot dropping bombs on human beings. Push a
button and it flies away, kills, and comes home. Its robot-cousin on the
battlefields below is called SWORDS: a human-sized robot that can see 360
degrees around it and fire its machine-guns at any target it "chooses".
Fox News proudly calls it "the GI of the 21st century." And
billions are being spent on the next generation of warbots, which will leave
these models looking like the bulky box on which you used to play Pong.
At the moment, most are controlled by a soldier - often 7,500 miles away -
with a control panel. But insurgents are always inventing new ways to block
the signal from the control centre, which causes the robot to shut down and "die".
So the military is building "autonomy" into the robots: if they
lose contact, they start to make their own decisions, in line with a
pre-determined code.
This is "one of the most fundamental changes in the history of human
warfare," according to PW Singer, a former analyst for the Pentagon and
the CIA, in his must-read book, Wired for War: The Robotics Revolution and
Conflict in the 21st Century. Humans have been developing weapons
that enabled us to kill at ever-greater distances and in ever-greater
numbers for millennia, from the longbow to the cannon to the machine-gun to
the nuclear bomb. But these robots mark a different stage.
The earlier technologies made it possible for humans to decide to kill in more "sophisticated"
ways - but once you programme and unleash an autonomous robot, the war isn't
fought by you any more: it's fought by the machine. The subject of warfare
shifts.
The military claim this is a safer model of combat. Gordon Johnson of the
Pentagon's Joint Forces Command says of the warbots: "They're not
afraid. They don't forget their orders. They don't care if the guy next to
them has been shot. Will they do a better job than humans? Yes." Why
take a risk with your soldier's life, if he can stay in Arlington and kill
in Kandahar? Think of it as War 4.0.
But the evidence punctures this techno-optimism. We know the programming of
robots will regularly go wrong - because all technological programming
regularly goes wrong. Look at the place where robots are used most
frequently today: factories. Some 4 per cent of US factories have "major
robotics accidents" every year - a man having molten aluminium poured
over him, or a woman picked up and placed on a conveyor belt to be smashed
into the shape of a car. The former Japanese Prime Minister Junichiro
Koizumi was nearly killed a few years ago after a robot attacked him on a
tour of a factory. And remember: these are robots that aren't designed to
kill.
Think about how maddening it is to deal with a robot on the telephone when you
want to pay your phone bill. Now imagine that robot had a machine-gun
pointed at your chest.
Robots find it almost impossible to distinguish an apple from a tomato: how
will they distinguish a combatant from a civilian? You can't appeal to a
robot for mercy; you can't activate its empathy. And afterwards, who do you
punish? Marc Garlasco, of Human Rights Watch, says: "War crimes need a
violation and an intent. A machine has no capacity to want to kill
civilians.... If they are incapable of intent, are they incapable of war
crimes?"
Robots do make war much easier - for the aggressor. You are taking much less
physical risk with your people, even as you kill more of theirs. One US
report recently claimed they will turn war into "an essentially
frictionless engineering exercise". As Larry Korb, Ronald Reagan's
assistant secretary of defence, put it: "It will make people think,
'Gee, warfare is easy.'"
If virtually no American forces had died in Vietnam, would the war have
stopped when it did - or would the systematic slaughter of the Vietnamese
people have continued for many more years? If "we" weren't losing
anyone in Afghanistan or Iraq, would the call for an end to the killing be
as loud? I'd like to think we are motivated primarily by compassion for
civilians on the other side, but I doubt it. Take "us" safely out
of the picture and we will be more willing to kill "them".
There is some evidence that warbots will also make us less inhibited in our
killing. When another human being is standing in front of you, when you can
stare into their eyes, it's hard to kill them. When they are half the world
away and little more than an avatar, it's easy. A young air force lieutenant
who fought through a warbot told Singer: "It's like a video game [with]
the ability to kill. It's like ... freaking cool."
When the US First Marine Expeditionary Force in Iraq was asked in 2006 what
kind of robotic support it needed, they said they had an "urgent
operational need" for a laser mounted on to an unmanned drone that
could cause "instantaneous burst-combustion of insurgent clothing, a
rapid death through violent trauma, and more probably a morbid combination
of both". The request said it should be like "long-range blow
torches or precision flame-throwers". They wanted to do with robots
things they would find almost unthinkable face-to-face.
While "we" will lose fewer people at first by fighting with warbots,
this way of fighting may well catalyse greater attacks on us in the long
run. US army staff sergeant Scott Smith boasts warbots create "an
almost helpless feeling.... It's total shock and awe." But while terror
makes some people shut up, it makes many more furious and determined to
strike back.
Imagine if the beaches at Dover and the skies over Westminster were filled
with robots controlled from Tora Bora, or Beijing, and could shoot us at
any time. Some would scuttle away - and many would be determined to kill "their"
people in revenge. The Lebanese editor Rami Khouri says that when Lebanon
was bombarded by largely unmanned Israeli drones in 2006, it only "enhanced
the spirit of defiance" and made more people back Hezbollah.
Is this a rational way to harness our genius for science and spend tens of
billions of pounds? The scientists who were essential to developing the
nuclear bomb - including Albert Einstein, Robert Oppenheimer, and Andrei
Sakharov - turned on their own creations in horror and begged for them to be
outlawed. Some distinguished robotics scientists, like Illah Nourbakhsh, are
getting in early, and saying the development of autonomous military robots
should be outlawed now.
There are some technologies that are so abhorrent to human beings that we
forbid them outright. We have banned poison gas, along with battlefield
lasers that permanently blind people. The conveyor belt dragging us ever closer to a
world of robot wars can be stopped - if we choose to.
All this money and all this effort can be directed towards saving life, not
ever-madder ways of taking it. But we have to decide to do it. We have to
make the choice to look the warbot in the eye and say, firmly and forever, "Hasta
la vista, baby."