The European Union reached an agreement on December 8, 2023 regarding the use of artificial intelligence.
"The only thing that can force those big companies to do more research on safety is government regulation."
Warning that the pace of development of artificial intelligence is "much faster" than he anticipated and is taking place in the absence of far-reaching regulations, the computer scientist often called the "Godfather of AI" said Friday that he believes the chances are growing that AI could wipe out humanity.
Speaking to BBC Radio 4's "Today" program, Geoffrey Hinton said there is a "10% to 20%" chance AI could lead to human extinction in the next three decades.
Previously Hinton had said he saw a 10% chance of that happening.
"We've never had to deal with things more intelligent than ourselves before," Hinton explained. "And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There's a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that's about the only example I know of."
Hinton, who was awarded the Nobel Prize in physics this year for his research into machine learning and AI, left his job at Google last year, saying he wanted to be able to speak out more about the dangers of unregulated AI.
"Just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely."
He has warned that AI chatbots could be used by authoritarian leaders to manipulate the public, and said last year that "the kind of intelligence we're developing is very different from the intelligence we have."
On Friday, Hinton said he is particularly worried that "the invisible hand" of the market will not keep humans safe from a technology that surpasses their intelligence, and called for strict regulation of AI.
"Just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely," said Hinton.
More than 120 bills have been proposed in the U.S. Congress to regulate AI robocalls, the technology's role in national security, and other issues, while the Biden administration has taken some action to rein in AI development.
An executive order calling for "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" said that "harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks." President-elect Donald Trump is expected to rescind the order.
The White House Blueprint for an AI Bill of Rights calls for safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation when AI is used, and the ability to opt out of automated systems.
But the European Union's Artificial Intelligence Act was deemed a "failure" by rights advocates this year, after industry lobbying helped ensure the law included numerous loopholes and exemptions for law enforcement and migration authorities.
"The only thing that can force those big companies to do more research on safety," said Hinton on Friday, "is government regulation."