"AI is the most far-reaching and pivotal technological revolution in the history of humanity," notes the Sanders Institute. "The choices we make now will determine whether those changes make the world better or worse."
“You know you're in trouble when you can't describe reality without sounding crazy.”
That's how renowned author and activist Naomi Klein described society's relationship with rapidly—some say dangerously—evolving artificial intelligence technology during a Tuesday livestreamed panel discussion with Sen. Bernie Sanders (I-Vt.) and Rep. Ro Khanna (D-Calif.) hosted by the Sanders Institute.
Khanna and Klein are both fellows at the institute, which was cofounded by Sanders' wife, Jane O'Meara Sanders, and his son, David Driscoll. In recent years, the Sanders Institute has convened an array of conferences and events aimed at bringing together leading experts and policy advocates on a host of issues.
“This AI and robotics revolution is the most sweeping technological change that the world has ever seen,” said Sanders. “People talk about the changes that the Industrial Revolution brought, which were profound. This is going to move a lot faster, with a lot more impact.”
“This revolution is being pushed by the wealthiest people in the world,” Sanders continued. “We’re talking about Elon Musk, Mark Zuckerberg, Jeff Bezos, Peter Thiel, and other multi-multi-billionaires who are spending hundreds and hundreds of billions, if not trillions, of dollars combined trying to do the research and the implementation for these technologies.”
Turning to Khanna and Klein, the senator asked: “What are the motives of these guys? Do the American people think that Jeff Bezos and Elon Musk are sitting up nights saying, ‘Wow, we got this technology, we're going to improve life for working people?’”
Klein contended that “their motives are exactly the opposite, and they're very blunt about this, that they are in a race to reach something that they call AGI—artificial general intelligence—or even something beyond that, superintelligence.”
While agreeing with Sanders that AI will prove as transformative as the Industrial Revolution, Klein underscored one big difference between the two.
“Unlike the Industrial Revolution, which created huge numbers of jobs, the goal of this revolution is to eliminate jobs,” the Shock Doctrine author explained. “They've been absolutely transparent about what they want to achieve, which is a jobs apocalypse. They want to be free from their workers."
"They really don't like it when their workers organize and push back, whether in unions or outside of unions," Klein added. "And I think that's part of the appeal of AI for these guys, is the idea that they could become trillionaires with virtually no employees.”
Khanna, a potential 2028 presidential candidate who authored the book Progressive Capitalism: How to Make Tech Work for All of Us, has been a leading voice in the US House of Representatives on the issue of AI. The congressman pointed out that tech titans are “using technology to eliminate workers and maximize their profits, and if you look at the Industrial Revolution, for 60 years, worker wages fell… even as Britain became wealthy."
"And so the question, in my view, for AI is, are we going to let a few billionaires, trillionaires, call the shots, or are we going to make sure that the technology is actually used in any way to enhance workers, to enhance total productivity?” he asked.
Sanders noted that Bezos, Amazon's founder, "wants to raise $100 billion to do what? To automate factories in America and around the world."
"You know what that means? It means there will no longer be manufacturing jobs in the United States or in warehouses," the senator added. "He wants to get rid of the 600,000 Amazon workers and replace them with robots. Elon Musk is converting Tesla partially to a robotics company. He wants to produce a million robots a year… What do you think a robot is there for? It's to replace a union worker.”
Klein said that “if we lived in a world that took care of people… [where] if a job was eliminated, people had a guaranteed income, they knew that they had healthcare, they knew that they weren't going to get evicted, we'd be having a different conversation.”
It may be more than just jobs that are eliminated if humanity does not proceed with utmost caution.
Sanders cited AI pioneers like Geoffrey Hinton who have warned that superintelligent artificial intelligence could wipe out humanity. According to Hinton and others, the senator explained, “it’s not a question of if, it’s a question of when [AI] will become smarter than human beings, and the fear of these guys, which used to be science fiction, is that AI will essentially establish its independence from human control in order to protect itself... raising the possibility of horrific things happening.”
Khanna agreed that such an outcome is “a real risk" as countries remove guardrails to breakneck AI development with the excuse that if they don't do it, their rivals will—the same dangerous thinking that fueled the Cold War nuclear arms race between the US and Soviet Union.
“I don't know whether it will happen or not, but why would we not take every precaution to make sure it doesn’t?” the congressman asked. “And this is what I don't understand, when people say, ‘Well, we want to compete with other nations and have a race to the bottom.’”
While the specter of an AI apocalypse is growing, it remains much more a reflection of human anxieties than any sort of impending threat. The same cannot be said for lethal autonomous weapon systems—better known as killer robots—which are defined as arms that can operate without any meaningful human control.
Activists like those at the Campaign to Stop Killer Robots have long sounded the alarm on the development of weapons that can operate without human control. However, Khanna said that human decision-making alone “is not enough.”
“If AI is doing all the data analysis and saying, OK, here's the target, and you just have a human being saying, OK, I'm the one who's going to give the order [to attack]… well, there's a human last-minute judgment,” he said. "What's happened is just a dependence on these machines."
As an example, Khanna pointed to what he said was the US military's use of AI that “gave the target of the school” in southern Iran where 168 children and staff were massacred in a February 28 cruise missile strike.
Sanders raised the possibility that a future in which robots largely replace humans on the battlefield “makes it easier” for countries with such technology to wage war.
However, Khanna countered that such conflicts are “deeply asymmetrical," meaning that they're only "easier" for the more technologically advanced side.
“The United States can have drones and technology, and Israel can do that,” the congressman said. “But the people who were killed in what I call the genocide in Gaza, 70,000 people, they don't have that technology. The starving people in Cuba, because of our fuel blockade, don't have that technology. The people in Iran who were killed don't have that technology."
"So you have one side of political leadership in our country that doesn't have to worry as much about deaths for our people," he contended. "But then there’s no… moral deliberation about the dignity and worth of people who were killed.”
While such life-and-death matters are far removed from the reality of most Americans’ lives, the panelists gave examples of how AI is impacting everyday citizens and their privacy.
“We heard reports from a lot of people on the ground who were standing up to ICE,” Klein said, referring to the nationwide protests and individual acts of resistance against Immigration and Customs Enforcement and the Trump administration’s overall anti-immigrant blitz.
“They were having these very creepy experiences where ICE knew their names before they had said anything. They knew where they lived before they said anything," she added. "Scanning a face, scanning a license plate.”
Not everyone attends protests. But nearly everyone uses the internet and its accoutrements, most notably social media. To that end, Khanna said that Big Tech isn’t just “taking our data, they’re trying to figure out what we think.”
“We've had no pushback to these companies,” he continued. “They have a profit motive to do this. They have a profit motive to get us as addicted to screen time as possible."
"They’re targeting young people… especially young girls that have had eating disorders... and suicidal thoughts because of the junk they've been fed," Khanna noted, calling the situation “a dereliction of Congress.”
“We have not passed any privacy legislation or restrictions really on social media companies as they've had total carte blanche to do what they want,” he said.
Sanders said that “to my mind, it is very clear why Congress is not dealing with this issue, and that is the power and the wealth of people who do not want us to deal with it.”
“To the best of my understanding, as of now, just for the 2026 elections, AI has already put $400 million into elections, and we've got… five to six more months to go,” he explained. “So let's assume that any candidate who gets up there and says, ‘You know, I have some real concerns about AI, let's slow it down, let's make it work for people rather than Elon Musk,’ that candidate will have billions of dollars thrown at him or her, which speaks to a corrupt campaign finance [system].”
Klein has similarly sounded the alarm about far-right tech oligarchs, including in a "must-read" essay with Astra Taylor about the fight against "end times fascism" published by The Guardian last year. The pair plans to release a related book in September.
“If we look at these Silicon Valley billionaires who lined up behind [President Donald] Trump during the election campaign… if you listen to what they have been saying about why they flipped, a lot of it was because there were some gentle regulations on crypto and AI during the Biden administration, including things like trying to figure out how to prevent AI from killing us all, and keeping it away from nuclear weapons," Klein said during Tuesday's panel. "Really sort of sensible policy… Apparently this was too much.”
While Congress fails to act, the people are stepping up.
“What we are seeing all over this country, from conservative areas, in progressive areas, [is] people saying, hey, thank you very much, we prefer not to have a data center in our community,” said Sanders—who recently introduced the Artificial Intelligence Data Center Moratorium Act with Rep. Alexandria Ocasio-Cortez (D-N.Y.)—pointing to one example of people-powered victories.
“So this is really an unprecedented grassroots revolt, not only against the data centers, but against this whole idea... of very, very wealthy people operating in a secretive mode, pushing through what they want against the needs of ordinary people,” he added.
Klein said that “we need to have a national and international conversation, because these are global technologies, about how we can use these very powerful tools to make our lives better, to enhance life, to have a human-first AI policy.”
“And that means that we look at it holistically,” she continued. “We figure out how we do it in the least resource-intensive way to have the best results. And then it isn't about turning a bunch of guys into trillionaires.”
“It's about what kind of society we want to live in, how we want to treat each other, how we want to protect the natural world,” Klein added. “I think we should be having town hall conversations about it, and we might find out that we have more in common with our neighbors than we thought."
"Meta’s reported plans to introduce this technology into broadly available consumer products is a red line society must not cross."
The ACLU and a coalition of 75 other rights organizations on Tuesday issued a warning to tech giant Meta about its plan to install facial recognition technology onto its artificial intelligence-powered eyeglasses.
In a letter organized by the ACLU, the ACLU of Massachusetts, and the New York Civil Liberties Union (NYCLU), the groups said adding facial recognition technology to Meta's Ray-Ban and Oakley glasses would pose a grave threat to Americans' privacy.
"People should be able to move through their daily lives," the letter states, "without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors."
When it comes to specific dangers posed by embedding this technology into the company's products, the letter points to the potential for scammers to use it to "find out, quickly and in complete stealth, not just the name of the person sitting next to them on the subway—but their address, marital status, social media profiles, workplace, income, hobbies, health information, and habits."
Because of this, the letter says that "Meta’s reported plans to introduce this technology into broadly available consumer products is a red line society must not cross."
Blocking facial recognition technology from Meta glasses "is a prerequisite for a free and safe society," reads the letter.
The letter concludes with a series of demands, including that Meta stop any plans to attach facial recognition technology to its products; publicly disclose any past instances of Meta glasses being used for stalking and harassment; and reveal any "past or ongoing" discussions with law enforcement agencies such as US Immigration and Customs Enforcement about deploying the technology.
Cody Venzke, senior staff attorney working on surveillance, privacy, and technology issues for the ACLU, described facial recognition technology as "inherently invasive and unethical," and said adding it to a widely available consumer product "would vastly increase the risk of harm to individuals, families, and our democracy itself."
Kade Crockford, director of technology and justice programs at the ACLU of Massachusetts, argued that "the American people have not consented to this massive invasion of privacy," which is why Meta must abandon plans to deploy it.
"Stalkers and scammers would have a field day with this technology," Crockford said. "Federal agents could use it to harass and intimidate their critics. It’s dangerous and dystopian, and Meta must disavow it."
"Between yesterday’s historic verdict in New Mexico and today’s ruling in California, it is clear that Big Tech’s free rein to addict and harm children is over," said one campaigner.
A Los Angeles jury on Wednesday found that Meta and Google acted negligently by harming a child user with their social media platforms' addictive design features in a landmark verdict that came on the heels of Tuesday's $375 million fine imposed on Meta by New Mexico jurors.
The California jury—which deliberated for 40 hours over nine days—ordered the companies to pay $3 million in compensatory civil damages to a now-20-year-old woman, known in court as Kaley G.M., for pain and suffering and other harms.
Meta—the parent company of Facebook, Instagram, and WhatsApp—must pay 70%, while Google, the Alphabet subsidiary that owns YouTube, will pay the rest.
The jury also found the companies acted fraudulently and with malice, meaning an additional punitive damages award will be imposed.
Kaley's legal team successfully argued that the social media companies designed products that are as addictive as cigarettes or online casinos, and that site features like infinite scrolling and algorithmic recommendations caused her anxiety and depression. Attorneys said Kaley began viewing YouTube videos when she was 6 years old and started using Instagram at age 9.
Attorney Mark Lanier called YouTube Kaley's "gateway" to social media addiction. Later, features like Instagram's "beauty filters" made her feel "fat" and unattractive.
Still, Kaley was hooked, testifying in court last month: “Every single day I was on it, all day long. I just can’t be without it.”
Kaley's lawyers submitted evidence including internal communications in which officials at the two companies privately acknowledged their products' addictiveness.
"If we want to win big with teens, we must bring them in as tweens," one YouTube strategy memo states.
A communication from an Instagram employee says: “We’re basically pushers... We’re causing reward deficit disorder, because people are binging on Instagram so much they can’t feel the reward.”
Meta CEO Mark Zuckerberg says, “Kids under 13 aren’t allowed on our services.” That's a lie. 2015: Internal review found 4 million kids on Instagram. 2017: Meta employees said they were "going after <13 year olds" – Zuckerberg had been talking about this “for a while.”
— Tech Oversight Project (@techoversight.bsky.social) February 20, 2026 at 10:18 AM
Kaley's attorneys said in a statement following Wednesday's verdict: "For years, social media companies have profited from targeting children while concealing their addictive and dangerous design features. Today’s verdict is a referendum—from a jury, to an entire industry—on that accountability.”
One of those attorneys, Joseph VanZandt, told The New York Times that “this is the first time in history a jury has heard testimony by executives and seen internal documents that we believe prove these companies chose profits over children."
As Courthouse News Service reported:
Kaley is the first of nearly 2,500 plaintiffs in a consolidated case in Southern California suing four tech companies—Google, Meta, TikTok, and Snap—who say their social media and streaming platforms were designed in ways that caused or worsened depression, anxiety, and body dysmorphia in minors.
TikTok and Snap settled with Kaley in the weeks before her bellwether trial but remain defendants in the broader consolidated litigation. The trial’s outcome could help spur a global settlement, though eight more bellwether trials are being prepared, with the next one scheduled to start this summer.
A Meta spokesperson told Courthouse News Service that “we respectfully disagree with the verdict and are evaluating our legal options.”
Mark Zuckerberg, Meta's CEO and co-founder, insisted during the trial that Instagram is “a good thing that has value in people’s lives.”
Appeals by the companies could drag on for years, and, as Fox Business correspondent Susan Li noted on X, "if it’s just money that they have to pay, in the end it’s just a speeding ticket as they have deep pockets of cash."
Wednesday's verdict comes amid numerous pending lawsuits against social media companies and follows Tuesday's $375 million penalty imposed on Meta by a New Mexico jury, which found that the company violated the state's Unfair Practices Act by misleading users and exposing children to harm on its platforms.
Child welfare and digital rights advocates hailed Wednesday's verdict, which The Tech Oversight Project, an advocacy group, called "an earthquake for Big Tech."
"After years of gaslighting from companies like Google and Meta, new evidence and testimony have pulled back the curtain and validated the harms young people and parents have been telling the world about for years," the group's president, Sacha Haworth, said in a statement.
"These products were purposefully designed to harm [and] addict millions of young people, and lead to lifelong mental health consequences," Haworth added. "This trial was proof that if you put CEOs like Mark Zuckerberg on the stand before a judge and jury of their peers, the tech industry’s wanton disregard for people will be on full display."
Alix Fraser, vice president of advocacy at Issue One, said, “Today’s verdict is a victory for young people, their families, and all Americans, marking a critical turning point in the fight to hold Big Tech accountable."
"The message is clear: The industry cannot continue to treat the youngest generation as its guinea pigs without consequences," he continued. "The trial process exposed how these platforms are designed, how risks to young users are understood internally, and how those risks have too often been outweighed by the pursuit of growth and profit."
"Today’s verdict builds on that truth. It affirms that young people are not test subjects for unproven products that prioritize profit at all cost," Fraser added. “No other industry enjoys the level of legal protection tech companies have relied on. This verdict begins to crack that shield and move us closer to a system where accountability is the norm, not the exception."
Josh Golin, executive director of the children's advocacy group Fairplay, said, “We are so pleased that a jury has confirmed what Fairplay and the survivor parents we work with have been saying for years: Social media companies like Meta and YouTube deliberately design their products to addict kids."
"Between yesterday’s historic verdict in New Mexico and today’s ruling in California, it is clear that Big Tech’s free rein to addict and harm children is over," he added.
JB Branch, the artificial intelligence and technology policy counsel at the consumer advocacy group Public Citizen, said in a statement that "the parallels to Big Tobacco litigation are becoming harder to ignore."
"Like tobacco companies before them, social media firms built massive business models around dependency, denied or minimized mounting evidence of harm, and resisted meaningful safeguards while millions of young people were exposed to escalating risks," Branch explained. "Infinite scroll, push notifications, algorithmic amplification, and behavioral targeting were commercial design choices built to maximize attention, addiction, and revenue."
“Now more than ever, it’s time for Congress and federal regulators to establish enforceable safeguards for youth online while preserving the right of states to adopt stronger standards, including stronger product safety requirements, transparency obligations, limits on manipulative design practices, and accountability mechanisms for platforms whose business models depend on prolonged youth engagement," Branch added.
While many campaigners are urging congressional lawmakers to pass the Senate version of the Kids Online Safety Act, civil rights groups including the ACLU argue that KOSA is overbroad and poses serious risks of censorship of free speech.