Senator Chris Murphy speaks at the rally to Say NO to Tax Breaks for Billionaires & Corporations at US Capitol on April 10, 2025 in Washington, DC.
"The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they’ll continue to do so," said AI company Anthropic.
A Democratic senator on Thursday sounded the alarm on the dangers of unregulated artificial intelligence after AI company Anthropic revealed it had thwarted what it described as "the first documented case of a large-scale cyberattack executed without substantial human intervention."
According to Anthropic, it is highly likely that the attack was carried out by a Chinese state-sponsored group, and it targeted "large tech companies, financial institutions, chemical manufacturing companies, and government agencies."
After a lengthy technical explanation describing how the attack occurred and how it was ultimately thwarted, Anthropic then discussed the security implications for AI that can execute mass cyberattacks with minimal direction from humans.
"The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they’ll continue to do so," the firm said. "With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers."
Anthropic went on to say that hackers could now use AI to carry out tasks such as "analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator," which could open the door to "less experienced and resourced groups" carrying out some of the most sophisticated attack operations.
The company concluded by warning that "the techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical."
This cybersecurity strategy wasn't sufficient for Sen. Chris Murphy (D-Conn.), who said government intervention would be needed to mitigate the potential harms caused by AI.
"Guys wake the f up," he wrote in a social media post. "This is going to destroy us—sooner than we think—if we don’t make AI regulation a national priority tomorrow."
Democratic California state Sen. Scott Wiener noted that many big tech firms have continuously fought against government oversight of AI despite threats that are growing stronger by the day.
"For two years, we advanced legislation to require large AI labs to evaluate their models for catastrophic risk or at least disclose their safety practices," he explained. "We got it done, but industry (not Anthropic) continues to push for federal ban on state AI rules, with no federal substitute."
Some researchers who spoke with Ars Technica, however, expressed skepticism that the AI-driven hack was as sophisticated as Anthropic had claimed, arguing that current AI technology is not yet capable of executing an operation of that caliber.
Dan Tentler, executive founder of Phobos Group, told the publication that the efficiency with which the hackers purportedly got the AI to carry out their commands was wildly different from his own experience using the technology.
"I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can," he said. "Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?"