(Photo: Olivier Douliery/AFP via Getty Images)
Experts Fear Future AI Could Cause 'Nuclear-Level Catastrophe'
Asked about the chances of the technology "wiping out humanity," AI pioneer Geoffrey Hinton warned that "it's not inconceivable."
While nearly three-quarters of researchers believe artificial intelligence "could soon lead to revolutionary social change," 36% worry that AI decisions "could cause nuclear-level catastrophe."
Those survey findings are included in the 2023 AI Index Report, an annual assessment of the fast-growing industry assembled by the Stanford Institute for Human-Centered Artificial Intelligence and published earlier this month.
"These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," says the report. "However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment."
As Al Jazeera reported Friday, the analysis "comes amid growing calls for regulation of AI following controversies ranging from a chatbot-linked suicide to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to invading Russian forces."
Notably, the survey measured the opinions of 327 experts in natural language processing—a branch of computer science essential to the development of chatbots—last May and June, months before the November release of OpenAI's ChatGPT "took the tech world by storm," the news outlet reported.
"A misaligned superintelligent AGI could cause grievous harm to the world."
Just three weeks ago, Geoffrey Hinton, considered the "godfather of artificial intelligence," told CBS News' Brook Silva-Braga that the rapidly advancing technology's potential impacts are comparable to "the Industrial Revolution, or electricity, or maybe the wheel."
Asked about the chances of the technology "wiping out humanity," Hinton warned that "it's not inconceivable."
That alarming potential doesn't necessarily lie with existing AI tools such as ChatGPT, but rather with what is called "artificial general intelligence" (AGI), meaning systems that could develop and act on their own ideas.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI," Hinton told CBS News. "Now I think it may be 20 years or less."
Pressed by Silva-Braga on whether it could happen sooner, Hinton conceded that he wouldn't rule out the possibility of AGI arriving within five years, a significant change from a few years ago, when he "would have said, 'No way.'"
"We have to think hard about how to control that," said Hinton. Asked if that's possible, Hinton said, "We don't know, we haven't been there yet, but we can try."
The AI pioneer is far from alone. According to the survey of computer scientists conducted last year, 57% said that "recent progress is moving us toward AGI," and 58% agreed that "AGI is an important concern."
In February, OpenAI CEO Sam Altman wrote in a company blog post: "The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world."
More than 25,000 people have signed an open letter published two weeks ago that calls for a six-month moratorium on training AI systems beyond the level of OpenAI's latest chatbot, GPT-4, although Altman is not among them.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," says the letter.
The Financial Times reported Friday that Tesla and Twitter CEO Elon Musk, who signed the letter calling for a pause, is "developing plans to launch a new artificial intelligence start-up to compete with" OpenAI.
"It's very reasonable for people to be worrying about those issues now."
Regarding AGI, Hinton said: "It's very reasonable for people to be worrying about those issues now, even though it's not going to happen in the next year or two. People should be thinking about those issues."
While AGI may still be a few years away, fears are already mounting that existing AI tools—including chatbots spouting lies, face-swapping apps generating fake videos, and cloned voices committing fraud—are poised to turbocharge the spread of misinformation.
According to a 2022 IPSOS poll of the general public included in the new Stanford report, people in the U.S. are particularly wary of AI, with just 35% agreeing that "products and services using AI had more benefits than drawbacks," compared with 78% of people in China, 76% in Saudi Arabia, and 71% in India.
Amid "growing regulatory interest" in an AI "accountability mechanism," the Biden administration announced this week that it is seeking public input on measures that could be implemented to ensure that "AI systems are legal, effective, ethical, safe, and otherwise trustworthy."
Axios reported Thursday that Senate Majority Leader Chuck Schumer (D-N.Y.) is "taking early steps toward legislation to regulate artificial intelligence technology."