

A screen displays HTML, the code used to build webpages.
The report says the U.S. government must move "quickly and decisively" to address the threat of artificial intelligence.
A report released on Monday that was commissioned by the U.S. State Department warns that artificial intelligence could pose an "extinction-level threat."
"Given the growing risk to national security posed by rapidly expanding AI capabilities from weaponization and loss of control—and particularly, the fact that the ongoing proliferation of these capabilities serves to amplify both risks—there is a clear and urgent need for the U.S. government to intervene," the report states.
The report compares the development of AI to the development of nuclear weapons and claims it might "destabilize global security" if it's not properly regulated. The report says the U.S. government must move "quickly and decisively" to address the threat of AI.
🚨 A new report commissioned by the U.S. government has identified "urgent and growing" national security risks "reminiscent of the introduction of nuclear weapons" - including "extinction-level threat to the human species" - from the development of advanced AI & artificial… pic.twitter.com/SvLrdEzz9e
— Future of Life Institute (@FLI_org) March 11, 2024
"The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic, and Meta—as part of their research," Time reports. "Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision making by the executives who control their companies."
The report recommends that the U.S. create a new federal agency to regulate the companies developing new AI tools and limit the growth of AI. Experts say such a move does not seem likely.
“I think that this recommendation is extremely unlikely to be adopted by the United States government,” Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), told Time.
AI is a rapidly developing field, and experts have warned that many of the companies creating new AI tools are not acting responsibly. A report from earlier this month also noted how generative AI is increasing the spread of climate disinformation and using up valuable resources.
The U.S. was one of 18 countries that joined an agreement in November to keep AI systems "secure by design," but further action will be needed to accomplish that goal.