Jul 19, 2017
Urgent warnings about the danger of autonomous weapons systems have come from two very different segments of American life this week. On Tuesday, the second highest-ranking general in the U.S. military testified at a Senate hearing that the use of robots during warfare could endanger human lives--echoing concerns brought up by inventor Elon Musk the previous weekend.
Gen. Paul Selva spoke about automation at his confirmation hearing before the Senate Armed Services Committee, saying that the "ethical rules of war" should be kept in place even as artificial intelligence (AI) and drone technology advance, "lest we unleash on humanity a set of robots that we don't know how to control."
The Defense Department currently mandates that a human must control all actions taken by a drone. But at the hearing, Sen. Gary Peters (D-Mich.) suggested that by enforcing that requirement, which is set to expire this year, the U.S. could fall behind other countries including Russia. Peters cited recent reports of Russia's "ambition to employ AI-directed weapons equipped with a neural network capable of identifying and engaging targets," and to sell those weapons to other countries.
"Our adversaries often do not to consider the same moral and ethical issues that we consider each and every day," Peters said.
Selva firmly stated his view that humans should retain decision-making power in the U.S. military.
"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," Selva told the committee.
In an open letter in 2015, Tesla and SpaceX CEO Elon Musk joined with scientist Stephen Hawking to warn against competing with other countries to develop AI for military purposes.
"Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control," the letter said.
Musk has previously called the development of robots that can make their own decisions "summoning the demon." Days before Gen. Selva's hearing, Musk spoke at the National Governors Association about the potential for an uncontrollable contingent of robots in the future.
The inventor acknowledged the risks AI poses for American workers, but added that the concerns go beyond employment. "AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk said.
He urged governors throughout the U.S. to start thinking seriously now about how to regulate robotics--before AI becomes an issue that's out of humans' control.
"Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late," warned Musk.
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.