A developer demonstrates an autonomous tank at the Eurosatory 2018 Show, on June 10, 2018, in Villepinte, France. (Photo: Christophe Morin/IP3/Getty Images)
Thousands of artificial intelligence (AI) experts and developers have signed a pledge vowing to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," and imploring governments worldwide to work together to "create a future with strong international norms, regulations, and laws" barring so-called killer robots.
"We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody."
--Anthony Aguirre,
UC-Santa Cruz
More than 160 companies and groups from three dozen countries and 2,400 individuals from 90 countries are backing the pledge, which was developed by the Boston-based Future of Life Institute (FLI) and unveiled Wednesday during the annual International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden.
"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," declared FLI president and MIT professor Max Tegmark. "AI has huge potential to help the world--if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."
As Anthony Aguirre, a professor at the University of California-Santa Cruz and pledge signatory, told CNN, "We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody."
Signatory Yoshua Bengio, an AI expert at the Montreal Institute for Learning Algorithms, explained that the pledge has the potential to sway public opinion by shaming developers of killer robots, also referred to as lethal autonomous weapons systems.
"This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning land mines," Bengio pointed out in an interview with the Guardian. "American companies have stopped building land mines."
Lucy Suchman, a professor at England's Lancaster University, emphasized the importance of AI researchers staying involved with how their inventions are used, noting that as a developer she would "first, commit to tracking the subsequent uses of my technologies and speaking out against their application to automating target recognition and, second, refuse to participate in either advising or directly helping to incorporate the technology into an autonomous weapon system."
Other high-profile supporters of the pledge include SpaceX and Tesla Motors CEO Elon Musk; Skype founder Jaan Tallinn; Jeffrey Dean, Google's lead of research and machine intelligence; and Demis Hassabis, Shane Legg, and Mustafa Suleyman, the co-founders of DeepMind.
As AI technology has continued to advance, the United Nations has convened a group of governmental experts to address mounting concerns raised by human rights organizations, advocacy groups, military leaders, lawmakers, and tech experts--many of whom have, for years, demanded a global ban on killer robots.
In recent years, tech experts have used IJCAI as an opportunity to press world leaders to outlaw autonomous weapons, which, as the new pledge warns, "could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems." Without a ban on such weaponry, the pledge cautions, these weapons "could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage."