Apr 05, 2019
A pioneer in the field of artificial intelligence warned that "dangers of abuse" of AI "are very real."
The warning from Canadian computer science professor and leading AI researcher Yoshua Bengio came in a Q&A with the journal Nature published Thursday.
The interview was conducted back in January, before Bengio and two others were named the latest recipients of the Turing Award, a prize dubbed the "Nobel Prize of Computing."
Bengio told Nature that "we have to raise flags before bad things happen" in terms of irresponsible use of AI.
Unfortunately, "what is most concerning is not happening in broad daylight" but "in military labs, in security organizations, in private companies providing services to governments or the police," he said.
In particular, Bengio said, he is concerned with so-called killer drones--lethal autonomous weapons--and surveillance, which can be abused by authoritarian governments.
AI "can be used by those in power to keep that power, and to increase it," said Bengio, and be used "to worsen gender or racial discrimination."
Some sort of government or international regulatory framework needs to be in place to put a check on AI, added Bengio: "Self-regulation is not going to work."
It's not the first time Bengio has expressed concerns about possible abuse of AI. This week he joined over two dozen other AI researchers in calling on Amazon to stop selling its facial-recognition software Rekognition to police departments.
The group of "concerned researchers" wrote in a post on Medium that "legislation and safeguards to prevent misuse are not in place."
Bengio also served on the steering committee for the recently unveiled Montreal Declaration for Responsible Development of Artificial Intelligence. Among the declaration's 10 principles is that the "development and use of [artificial intelligence systems] AIS must contribute to the creation of a just and equitable society."
"Generally speaking," he wrote at The Conversation highlighting the need for the declaration, "scientists tend to avoid getting too involved in politics. But when there are issues that concern them and that will have a major impact on society, they must assume their responsibility and become part of the debate."
"And in this debate, I have come to realize that society has given me a voice--that governments and the media were interested in what I had to say on these topics because of my role as a pioneer in the scientific development of AI."
"So, for me, it is now more than a responsibility," he said. "It is my duty. I have no choice."
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.