"Between yesterday’s historic verdict in New Mexico and today’s ruling in California, it is clear that Big Tech’s free rein to addict and harm children is over," said one campaigner.
A Los Angeles jury on Wednesday found that Meta and Google acted negligently by harming a child user with their social media platforms' addictive design features in a landmark verdict that came on the heels of Tuesday's $375 million fine imposed on Meta by New Mexico jurors.
The California jury—which deliberated for 40 hours over nine days—ordered the companies to pay $3 million in compensatory damages to a now-20-year-old woman, known in court as Kaley G.M., for pain and suffering and other harms.
Meta—the parent company of Facebook, Instagram, and WhatsApp—must pay 70%, while Google, the Alphabet subsidiary that owns YouTube, will pay the rest.
The jury also found that the companies acted fraudulently and with malice, a finding that clears the way for an additional punitive damages award.
Kaley's legal team successfully argued that the social media companies designed products that are as addictive as cigarettes or online casinos, and that site features like infinite scrolling and algorithmic recommendations caused her anxiety and depression. Attorneys said Kaley began viewing YouTube videos when she was 6 years old and started using Instagram at age 9.
Attorney Mark Lanier called YouTube Kaley's "gateway" to social media addiction. Later, features like Instagram's "beauty filters" made her feel "fat" and unattractive.
Still, Kaley was hooked, testifying in court last month: “Every single day I was on it, all day long. I just can’t be without it.”
Kaley's lawyers submitted evidence including internal communications in which officials at the two companies privately acknowledged their products' addictiveness.
"If we want to win big with teens, we must bring them in as tweens," one YouTube strategy memo states.
A communication from an Instagram employee says: “We’re basically pushers... We’re causing reward deficit disorder, because people are binging on Instagram so much they can’t feel the reward.”
Meta CEO Mark Zuckerberg says, “Kids under 13 aren’t allowed on our services.” That's a lie. 2015: Internal review found 4 million kids on Instagram. 2017: Meta employees said they were "going after <13 year olds” – Zuckerberg had been talking about this “for a while.”
— Tech Oversight Project (@techoversight.bsky.social) February 20, 2026 at 10:18 AM
Kaley's attorneys said in a statement following Wednesday's verdict: "For years, social media companies have profited from targeting children while concealing their addictive and dangerous design features. Today’s verdict is a referendum—from a jury, to an entire industry—on that accountability.”
One of those attorneys, Joseph VanZandt, told The New York Times that “this is the first time in history a jury has heard testimony by executives and seen internal documents that we believe prove these companies chose profits over children."
As Courthouse News Service reported:
Kaley is the first of nearly 2,500 plaintiffs in a consolidated case in Southern California suing four tech companies—Google, Meta, TikTok, and Snap—who say their social media and streaming platforms were designed in ways that caused or worsened depression, anxiety, and body dysmorphia in minors.
TikTok and Snap settled with Kaley in the weeks before her bellwether trial but remain defendants in the broader consolidated litigation. The trial’s outcome could help spur a global settlement, though eight more bellwether trials are being prepared, with the next one scheduled to start this summer.
A Meta spokesperson told Courthouse News Service that “we respectfully disagree with the verdict and are evaluating our legal options.”
Mark Zuckerberg, Meta's CEO and co-founder, insisted during the trial that Instagram is “a good thing that has value in people’s lives.”
Appeals by the companies could drag on for years, and, as Fox Business correspondent Susan Li noted on X, "if it’s just money that they have to pay, in the end it’s just a speeding ticket as they have deep pockets of cash."
Wednesday's verdict comes amid numerous pending lawsuits against social media companies and follows Tuesday's $375 million penalty imposed on Meta by a New Mexico jury, which found that the company violated the state's Unfair Practices Act by misleading users and exposing children to harm on its platforms.
Child welfare and digital rights advocates hailed Wednesday's verdict, which The Tech Oversight Project, an advocacy group, called "an earthquake for Big Tech."
"After years of gaslighting from companies like Google and Meta, new evidence and testimony have pulled back the curtain and validated the harms young people and parents have been telling the world about for years," the group's president, Sacha Haworth, said in a statement.
"These products were purposefully designed to harm [and] addict millions of young people, and lead to lifelong mental health consequences," Haworth added. "This trial was proof that if you put CEOs like Mark Zuckerberg on the stand before a judge and jury of their peers, the tech industry’s wanton disregard for people will be on full display."
Alix Fraser, vice president of advocacy at Issue One, said, “Today’s verdict is a victory for young people, their families, and all Americans, marking a critical turning point in the fight to hold Big Tech accountable."
"The message is clear: The industry cannot continue to treat the youngest generation as its guinea pigs without consequences," he continued. "The trial process exposed how these platforms are designed, how risks to young users are understood internally, and how those risks have too often been outweighed by the pursuit of growth and profit."
"Today’s verdict builds on that truth. It affirms that young people are not test subjects for unproven products that prioritize profit at all cost," Fraser added. “No other industry enjoys the level of legal protection tech companies have relied on. This verdict begins to crack that shield and move us closer to a system where accountability is the norm, not the exception."
Josh Golin, executive director of the children's advocacy group Fairplay, said, “We are so pleased that a jury has confirmed what Fairplay and the survivor parents we work with have been saying for years: Social media companies like Meta and YouTube deliberately design their products to addict kids."
"Between yesterday’s historic verdict in New Mexico and today’s ruling in California, it is clear that Big Tech’s free rein to addict and harm children is over," he added.
JB Branch, the artificial intelligence and technology policy counsel at the consumer advocacy group Public Citizen, said in a statement that "the parallels to Big Tobacco litigation are becoming harder to ignore."
"Like tobacco companies before them, social media firms built massive business models around dependency, denied or minimized mounting evidence of harm, and resisted meaningful safeguards while millions of young people were exposed to escalating risks," Branch explained. "Infinite scroll, push notifications, algorithmic amplification, and behavioral targeting were commercial design choices built to maximize attention, addiction, and revenue."
“Now more than ever, it’s time for Congress and federal regulators to establish enforceable safeguards for youth online while preserving the right of states to adopt stronger standards, including stronger product safety requirements, transparency obligations, limits on manipulative design practices, and accountability mechanisms for platforms whose business models depend on prolonged youth engagement," Branch added.
While many campaigners are urging congressional lawmakers to pass the Senate version of the Kids Online Safety Act, civil rights groups including the ACLU argue that KOSA is overbroad and poses serious risks of censorship of free speech.
"It's time to build communities, not data centers," said one local activist.
The City Council of New Brunswick, New Jersey, voted Wednesday to cancel plans to construct an artificial intelligence data center and instead build a new public park where the 27,000-square-foot facility would have gone.
Artificial intelligence data centers—which house the servers and other infrastructure needed to train and power AI models—have major environmental and climate impacts, as they consume massive amounts of electricity and water, as well as rare earth metals and other resources.
According to New Brunswick Patch, hundreds of people packed into Wednesday evening's city hall meeting to voice concerns that the proposed data center would send their electricity and water bills skyrocketing, and that the facility would harm the environment.
"Many people did not want this in their neighborhood," New Brunswick NAACP president Bruce Morgan said during the council meeting. "We don't want these kinds of centers that's going to take resources from the community."
The site of the nixed data center, 100 Jersey Avenue, is already slated for development including 600 new apartments—10% of which will be affordable housing units—and warehouses for startups and other small businesses. Now, thanks to Wednesday's vote, a park is on the agenda too.
"This is great news, no data center," New Brunswick resident Anne Norris told Patch.
"My kids went through the public school system; we didn't pay for lunch because we have so many families under the poverty line," Norris said before taking aim at what she said was the dearth of affordable housing approved for the site.
"Given the economic status of the people who live in New Brunswick, I don't think 10% is really sufficient," she contended.
Following the council meeting, jubilant residents celebrated the data center's cancellation, chanting slogans including, "The people united will never be defeated!"
"We say a big 'fuck you' to Big Tech!" local organizer Ben Dziobek shouted to the crowd. "We say a big 'fuck you' to private equity! And it's time to build communities, not data centers."
If Democrats want to regain trust ahead of the 2026 elections, they need to show they are willing to take on Big Tech with the urgency that everyday Americans are demanding.
One year ago, Mark Zuckerberg, Elon Musk, and Jeff Bezos got front-row seats at President Donald Trump’s inauguration. The images of CEOs enjoying better seats than congressional leaders foreshadowed exactly how much access and influence Big Tech would wield in the Trump White House.
Since entering office, Trump has repeatedly signaled deference to a small group of powerful technology executives, aided by advisors like AI czar David Sacks who have spent their careers profiting from the industry. With Trump’s blessing, companies like NVIDIA are now poised to profit from sales of advanced chips to China, America’s foremost strategic competitor. That choice exposes a fundamental contradiction at the heart of the administration’s AI policy: prioritizing short-term corporate gains over long-term public interests.
In December, Trump signed an executive order threatening states for enacting AI safety laws without offering a credible federal framework to replace them. It was yet another misuse of executive power—and an industry giveaway disguised as a competitiveness strategy. By threatening states for acting while offering no federal safeguards in return, the order attempts to clear the field for companies that have spent years lobbying against meaningful accountability.
Supporters argue that preemption is necessary to help the United States compete with China. But if that’s true, why is the president offering the Chinese Communist Party access to superior American technology and a clear path to win the AI race?
That contradiction hasn’t gone unnoticed, even inside Trump’s own coalition. Indeed, most Americans continue to express deep concern about Trump’s growing alignment with Silicon Valley.
Still, Trump has only doubled down, pushing a vision of global “tech dominance” with little regard for the real-world consequences of unprecedented AI investment. Even Republicans who were once vocal critics of Big Tech are now taking money from Meta and other companies to accelerate AI on industry-friendly terms.
For Democrats, this should be a moment of clarity—and a moment to lead. While many lawmakers have raised legitimate concerns about AI’s risks, the party’s response has too often leaned on commissions, task forces, and studies when the public is asking for clear rules and accountability.
Democrats must ask themselves: if Big Tech is already working overtime to block meaningful safeguards, why not meet the moment by standing clearly on the side of consumers, parents, and workers? Voters are asking for real leadership, but all they are seeing is a familiar pattern: billion-dollar companies consolidating power, writing the rules, and dodging accountability, leaving children, workers, and democratic institutions to deal with the consequences.
The 2024 election underscored a deeper challenge for Democrats than economic uncertainty or flawed candidates. Many voters struggled to see a coherent vision for the future under Democratic leadership. That vacuum has allowed Republicans to posture as pro-consumer and pro-family while quietly shielding powerful companies from accountability.
The debate over AI offers Democrats a chance to do better. While Republicans move to shield companies from accountability and block reasonable state action without offering meaningful protections, Democrats can articulate a smarter approach: clear expectations for safety; real liability when technology causes harm; serious preparation for economic disruption; and responsible planning for AI’s massive energy demands.
AI is no longer an abstract idea; its impacts are already being felt. But without clear rules, it risks reshaping our economy, labor markets, and democratic institutions in ways that undermine security, opportunity, and trust. When elected leaders prioritize the agendas of their corporate executives over the long-term public interest, trust erodes—not just in institutions, but in innovation itself.
That erosion of trust is already visible. Workers worry about job displacement, recent graduates struggle to enter a rapidly changing workforce, and parents fear how algorithmic manipulation and AI-generated deepfakes will shape their children’s reality. These concerns aren’t partisan. This shared national anxiety goes to the heart of the American experiment.
If Democrats want to regain trust ahead of the 2026 elections, they need to show they are willing to take on Big Tech with the urgency that everyday Americans are demanding. That means recognizing that AI isn’t just another talking point, and pursuing strong, enforceable standards now—so its extraordinary potential strengthens the middle class, improves our children’s future, and reinforces democratic institutions rather than undermining them.