"It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward," said one expert.
Watchdog group Public Citizen is raising alarms after tech giant Google on Monday revealed that a group of criminal hackers used artificial intelligence to detect a previously unidentified software vulnerability.
As reported by The New York Times, Google said that it had "high confidence" that the hackers used AI to discover and exploit the vulnerability.
While Google said that the attack had been thwarted, the Times noted that the company "did not say precisely when the thwarted attack happened, whom it was targeting, or which AI platform the hackers used."
While the discovery of so-called "zero-day vulnerabilities" was once a rare occurrence, the proliferation of AI models has made them much easier for hackers to detect. In fact, AI software vendor Anthropic earlier this year said that it had developed a model that was so good at exploiting these vulnerabilities that it would not be releasing it publicly.
John Hultquist, chief analyst at Google Threat Intelligence Group, said in an interview with Cyberscoop that this kind of AI-assisted attack "is probably the tip of the iceberg and it’s certainly not going to be the last" to occur.
“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist explained. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”
JB Branch, AI governance and technology policy counsel at Public Citizen, said the attempted AI exploit once again showed how reckless Big Tech has been in aggressively pushing this technology out the door.
"Cybersecurity experts are sounding the alarm, yet AI companies continue racing to release increasingly powerful models with little regard for the societal consequences," Branch said. "It is unthinkable and irresponsible to release technologies capable of destabilizing critical systems and then worry about the fallout afterward."
Branch also said it was well past time for Congress to step in and slap strict guardrails on the development of AI.
"We need enforceable AI regulations that require rigorous safety testing, independent review, and meaningful oversight before these systems ever reach the public," he said. "Regulators cannot remain in a perpetual game of catch-up while Big Tech gambles with the safety and stability of modern society."
While calls for more AI regulation have grown in recent months, Silicon Valley elites are planning to spend massive sums of money in this year's midterm elections to prevent candidates who support AI regulation from winning public office.
Leading the Future—a super political action committee (PAC) backed by venture capital firm Andreessen Horowitz, Palantir co-founder Joe Lonsdale, and other AI heavyweights—is spending at least $100 million to elect lawmakers who aim to pass legislation that would set a single set of AI regulations across the US, overriding any restrictions placed on the technology by state governments.
"AI is the most far-reaching and pivotal technological revolution in the history of humanity," notes the Sanders Institute. "The choices we make now will determine whether those changes make the world better or worse."
“You know you're in trouble when you can't describe reality without sounding crazy.”
That's how renowned author and activist Naomi Klein described society's relationship with rapidly—some say dangerously—evolving artificial intelligence technology during a Tuesday livestreamed panel discussion with Sen. Bernie Sanders (I-Vt.) and Rep. Ro Khanna (D-Calif.) hosted by the Sanders Institute.
Khanna and Klein are both fellows at the institute, cofounded by Sanders' wife and son, Jane O'Meara Sanders and David Driscoll. The Sanders Institute over recent years has convened an array of conferences and events focused on bringing together the best minds, top experts, and policy advocates on a host of issues.
“This AI and robotics revolution is the most sweeping technological change that the world has ever seen,” said Sanders. “People talk about the changes that the Industrial Revolution brought, which were profound. This is going to move a lot faster, with a lot more impact.”
“This revolution is being pushed by the wealthiest people in the world,” Sanders continued. “We’re talking about Elon Musk, Mark Zuckerberg, Jeff Bezos, Peter Thiel, and other multi-multi-billionaires who are spending hundreds and hundreds if not trillions of dollars combined trying to do the research and the implementation for these technologies.”
Turning to Khanna and Klein, the senator asked: “What are the motives of these guys? Do the American people think that Jeff Bezos and Elon Musk are sitting up nights saying, ‘Wow, we got this technology, we're going to improve life for working people?’”
Klein contended that “their motives are exactly the opposite, and they're very blunt about this, that they are in a race to reach something that they call AGI—artificial general intelligence—or even something beyond that, superintelligence.”
While agreeing with Sanders that AI will prove as transformative as the Industrial Revolution, Klein underscored one big difference between the two.
“Unlike the Industrial Revolution, which created huge numbers of jobs, the goal of this revolution is to eliminate jobs,” the Shock Doctrine author explained. “They've been absolutely transparent about what they want to achieve, which is a jobs apocalypse. They want to be free from their workers."
"They really don't like it when their workers organize and push back, whether in unions or outside of unions," Klein added. "And I think that's part of the appeal of AI for these guys, is the idea that they could become trillionaires with virtually no employees.”
Khanna, a potential 2028 presidential candidate who authored the book Progressive Capitalism: How to Make Tech Work for All of Us, has been a leading voice in the US House of Representatives on the issue of AI. The congressman pointed out that tech titans are “using technology to eliminate workers and maximize their profits, and if you look at the Industrial Revolution, for 60 years, worker wages fell… even as Britain became wealthy."
"And so the question, in my view, for AI is, are we going to let a few billionaires, trillionaires, call the shots, or are we going to make sure that the technology is actually used in any way to enhance workers, to enhance total productivity?” he asked.
Sanders noted that Bezos, Amazon's founder, "wants to raise $100 billion to do what? To automate factories in America and around the world."
"You know what that means? It means there will no longer be manufacturing jobs in the United States or in warehouses," the senator added. "He wants to get rid of the 600,000 Amazon workers and replace them with robots. Elon Musk is converting Tesla partially to a robotics company. He wants to produce a million robots a year… What do you think a robot is there for? It's to replace a union worker.”
Klein said that “if we lived in a world that took care of people… [where] if a job was eliminated, people had a guaranteed income, they knew that they had healthcare, they knew that they weren't going to get evicted, we'd be having a different conversation.”
It may be more than just jobs that are eliminated if humanity does not proceed with utmost caution.
Sanders cited AI pioneers like Geoffrey Hinton who have warned that superintelligent artificial intelligence could wipe out humanity. According to Hinton and others, the senator explained, “it’s not a question of if, it’s a question of when [AI] will become smarter than human beings, and the fear of these guys, which used to be science fiction, is that AI will essentially establish its independence from human control in order to protect itself... raising the possibility of horrific things happening.”
Khanna agreed that such an outcome is “a real risk" as countries remove guardrails to breakneck AI development with the excuse that if they don't do it, their rivals will—the same dangerous thinking that fueled the Cold War nuclear arms race between the US and Soviet Union.
“I don't know whether it will happen or not, but why would we not take every precaution to make sure it doesn’t?” the congressman asked. “And this is what I don't understand, when people say, ‘Well, we want to compete with other nations and have a race to the bottom."
While the specter of an AI apocalypse is growing, it remains much more a reflection of human anxieties than any sort of impending threat. The same cannot be said for lethal autonomous weapon systems—better known as killer robots—which are defined as arms that can operate without any meaningful human control.
Activists like those at the Campaign to Stop Killer Robots have long sounded the alarm on the development of weapons that can operate without human control. However, Khanna said that human decision-making alone “is not enough.”
“If AI is doing all the data analysis and saying, OK, here's the target, and you just have a human being saying, OK, I'm the one who's going to give the order [to attack]… well, there's a human last-minute judgment,” he said. "What's happened is just a dependence on these machines."
As an example, Khanna pointed to what he said was the US military's use of AI that “gave the target of the school” in southern Iran where 168 children and staff were massacred in a February 28 cruise missile strike.
Sanders raised the possibility that a future in which robots largely replace humans on the battlefield “makes it easier” for countries with such technology to wage war.
However, Khanna countered that such conflicts are “deeply asymmetrical," meaning that they're only "easier" for the more technologically advanced side.
“The United States can have drones and technology, and Israel can do that,” the congressman said. “But the people who were killed in what I call the genocide in Gaza, 70,000 people, they don't have that technology. The starving people in Cuba, because of our fuel blockade, don't have that technology. The people in Iran who were killed don't have that technology."
"So you have one side of political leadership in our country that doesn't have to worry as much about deaths for our people," he contended. "But then there’s no… moral deliberation about the dignity and worth of people who were killed.”
While such life-and-death matters are far removed from the reality of most Americans’ lives, the panelists gave examples of how AI is impacting everyday citizens and their privacy.
“We heard reports from a lot of people on the ground who were standing up to ICE,” Klein said, referring to the nationwide protests and individual acts of resistance against Immigration and Customs Enforcement and the Trump administration’s overall anti-immigrant blitz.
“They were having these very creepy experiences where ICE knew their names before they had said anything. They knew where they lived before they said anything," she added. "Scanning a face, scanning a license plate.”
Not everyone attends protests. But nearly everyone uses the internet and its accoutrements; most notably, social media. To that end, Khanna said that Big Tech isn’t just “taking our data, they’re trying to figure out what we think.”
“We've had no pushback to these companies,” he continued. “They have a profit motive to do this. They have a profit motive to get us as addicted to screen time as possible."
"They’re targeting young people… especially young girls that have had eating disorders... and suicidal thoughts because of the junk they've been fed," Khanna noted, calling the situation “a dereliction of Congress.”
“We have not passed any privacy legislation or restrictions really on social media companies as they've had total carte blanche to do what they want,” he said.
Sanders said that “to my mind, it is very clear why Congress is not dealing with this issue, and that is the power and the wealth of people who do not want us to deal with it.”
“To the best of my understanding, as of now, just for the 2026 elections, AI has already put $400 million into elections, and we've go… five to six more months to go,” he explained. “So let's assume that any candidate who gets up there and says, ‘You know, I have some real concerns about AI, let's slow it down, let's make it work for people rather than Elon Musk,’ that candidate will have billions of dollars thrown at him or her, which speaks to a corrupt campaign finance [system].”
Klein has similarly sounded the alarm about far-right tech oligarchs, including in a "must-read" essay with Astra Taylor about the fight against "end times fascism" published by The Guardian last year. The pair plans to release a related book in September.
“If we look at these Silicon Valley billionaires who lined up behind [President Donald] Trump during the election campaign… if you listen to what they have been saying about why they flipped, a lot of it was because there were some gentle regulations on crypto and AI during the Biden administration, including things like trying to figure out how to prevent AI from killing us all, and keeping it away from nuclear weapons," Klein said during Tuesday's panel. "Really sort of sensible policy… Apparently this was too much.”
While Congress fails to act, the people are stepping up.
“What we are seeing all over this country, from conservative areas, in progressive areas, [is] people saying, hey, thank you very much, we prefer not to have a data center in our community,” said Sanders—who recently introduced the Artificial Intelligence Data Center Moratorium Act with Rep. Alexandria Ocasio-Cortez (D-NY)—pointing to one example of people-powered victories.
“So this is really an unprecedented grassroots revolt, not only against the data centers, but against this whole idea... of very, very wealthy people operating in a secretive mode, pushing through what they want against the needs of ordinary people,” he added.
Klein said that “we need to have a national and international conversation, because these are global technologies, about how we can use these very powerful tools to make our lives better, to enhance life, to have a human-first AI policy.”
“And that means that we look at it holistically,” she continued. “We figure out how we do it in the least resource-intensive way to have the best results. And then it isn't about turning a bunch of guys into trillionaires.”
“It's about what kind of society we want to live in, how we want to treat each other, how we want to protect the natural world,” Klein added. “I think we should be having town hall conversations about it, and we might find out that we have more in common with our neighbors than we thought."
"Meta’s reported plans to introduce this technology into broadly available consumer products is a red line society must not cross."
The ACLU and a coalition of 75 other rights organizations on Tuesday issued a warning to tech giant Meta about its plan to install facial recognition technology onto its artificial intelligence-powered eyeglasses.
In a letter organized by the ACLU, the ACLU of Massachusetts, and the New York Civil Liberties Union (NYCLU), the groups said adding facial recognition technology to Meta's Ray-Ban and Oakley glasses would pose a grave threat to Americans' privacy.
"People should be able to move through their daily lives," the letter states, "without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors."
When it comes to specific dangers posed by embedding this technology into the company's products, the letter points to the potential for scammers to use it to "find out, quickly and in complete stealth, not just the name of the person sitting next to them on the subway—but their address, marital status, social media profiles, workplace, income, hobbies, health information, and habits."
Because of this, the letter says that "Meta’s reported plans to introduce this technology into broadly available consumer products is a red line society must not cross."
Blocking facial recognition technology from Meta glasses "is a prerequisite for a free and safe society," reads the letter.
The letter concludes with a series of demands, including that Meta stop any plans to attach facial recognition technology to its products; publicly disclose any past instances of Meta glasses being used for stalking and harassment; and reveal any "past or ongoing" discussions with law enforcement agencies such as US Immigration and Customs Enforcement about deploying the technology.
Cody Venzke, senior staff attorney working on surveillance, privacy, and technology issues for the ACLU, described facial recognition technology as "inherently invasive and unethical," and said adding it to a widely available consumer product "would vastly increase the risk of harm to individuals, families, and our democracy itself."
Kade Crockford, director of technology and justice programs at the ACLU of Massachusetts, argued that "the American people have not consented to this massive invasion of privacy," which is why Meta must abandon plans to deploy it.
"Stalkers and scammers would have a field day with this technology," Crockford said. "Federal agents could use it to harass and intimidate their critics. It’s dangerous and dystopian, and Meta must disavow it."