The Google logo is displayed on a smartphone screen placed on a reflective surface onto which the Department of War emblem is projected, in Creteil, France, on May 4, 2026.
"It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward," said one expert.
Watchdog group Public Citizen is raising alarms after tech giant Google on Monday revealed that a group of criminal hackers used artificial intelligence to detect a previously unidentified software vulnerability.
As reported by The New York Times, Google said that it had "high confidence" that the hackers used AI to discover and exploit the vulnerability.
While Google said that the attack had been thwarted, the Times noted that the company "did not say precisely when the thwarted attack happened, whom it was targeting, or which AI platform the hackers used."
While the discovery of so-called "zero-day vulnerabilities" was once a rare occurrence, the proliferation of AI models has made them much easier for hackers to detect. In fact, AI software vendor Anthropic earlier this year said that it had developed a model that was so good at exploiting these vulnerabilities that it would not be releasing it publicly.
John Hultquist, chief analyst at Google Threat Intelligence Group, said in an interview with Cyberscoop that this kind of AI-assisted attack "is probably the tip of the iceberg and it’s certainly not going to be the last" to occur.
“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist explained. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”
JB Branch, AI governance and technology policy counsel at Public Citizen, said the attempted AI exploit once again showed how reckless Big Tech has been in aggressively pushing this technology out the door.
"Cybersecurity experts are sounding the alarm, yet AI companies continue racing to release increasingly powerful models with little regard for the societal consequences," Branch said. "It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward."
Branch also said it was well past time for Congress to step in and slap strict guardrails on the development of AI.
"We need enforceable AI regulations that require rigorous safety testing, independent review, and meaningful oversight before these systems ever reach the public," he said. "Regulators cannot remain in a perpetual game of catch-up while Big Tech gambles with the safety and stability of modern society."
While calls for more AI regulation have grown in recent months, Silicon Valley elites are planning to spend massive sums of money in this year's midterm elections to prevent candidates who support AI regulation from winning public office.
Leading the Future—a super political action committee (PAC) backed by venture capital firm Andreessen Horowitz, Palantir co-founder Joe Lonsdale, and other AI heavyweights—is spending at least $100 million to elect lawmakers who aim to pass legislation that would set a single set of AI regulations across the US, overriding any restrictions placed on the technology by state governments.