
A screen displays HTML, the markup language used to build webpages.

(Photo: Yaroslav Kushta via Getty Creative)

State Department-Commissioned Report Warns AI Could Be an 'Extinction-Level' Threat

The report says the U.S. government must move "quickly and decisively" to address the threat of artificial intelligence.

A report released on Monday that was commissioned by the U.S. State Department warns that artificial intelligence could pose an "extinction-level threat."

"Given the growing risk to national security posed by rapidly expanding AI capabilities from weaponization and loss of control—and particularly, the fact that the ongoing proliferation of these capabilities serves to amplify both risks—there is a clear and urgent need for the U.S. government to intervene," the report states.

The report compares the development of AI to the development of nuclear weapons and claims it might "destabilize global security" if it's not properly regulated. The report says the U.S. government must move "quickly and decisively" to address the threat of AI.


"The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies—like OpenAI, Google DeepMind, Anthropic, and Meta—as part of their research," Time reports. "Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision making by the executives who control their companies."

The report recommends that the U.S. create a new federal agency to regulate the companies developing new AI tools and limit the growth of AI. Experts say such a move does not seem likely.

"I think that this recommendation is extremely unlikely to be adopted by the United States government," Greg Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS), told Time.

AI is rapidly developing, and experts have warned that many of the companies creating new AI tools are not acting responsibly. A report from earlier this month also noted how generative AI is increasing the spread of climate disinformation and using up valuable resources.

The U.S. was one of 18 countries that joined an agreement in November to keep AI systems "secure by design," but further action will be needed to accomplish that goal.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.