

While the company plans to challenge the decision, the state's attorney general said the figure "should send a clear message to Big Tech executives that no company is beyond the reach of the law."
Democratic New Mexico Attorney General Raúl Torrez and other child advocates on Tuesday celebrated a state jury's landmark verdict against Meta, despite the social media giant's plans to fight the decision requiring it to pay $375 million in civil penalties.
"The jury's verdict is a historic victory for every child and family who has paid the price for Meta's choice to put profits over kids' safety," said Torrez, who had accused the company behind Facebook, Instagram, and WhatsApp of violating the state's Unfair Practices Act. "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today, the jury joined families, educators, and child safety experts in saying enough is enough."
The Associated Press highlighted that "the landmark decision comes after a nearly seven-week trial, and as jurors in a federal court in California have been sequestered in deliberations for more than a week about whether Meta and YouTube should be liable in a similar case."
Torrez said that "New Mexico is proud to be the first state to hold Meta accountable in court for misleading parents, enabling child exploitation, and harming kids. In the next phase of this legal proceeding, we will seek additional financial penalties and court-mandated changes to Meta's platforms that offer stronger protections for children."
"The substantial damages the jury ordered Meta to pay should send a clear message to Big Tech executives that no company is beyond the reach of the law," he added. "Policymakers and law enforcement officials across the country can help make this verdict a turning point in the fight for children's safety. This is a watershed moment for every parent concerned about what could happen to their kids when they go online—and this victory belongs to them."
Josh Golin, executive director of the nonprofit Fairplay, welcomed the verdict. He said in a statement that "we've known for years that Meta enables the sexual exploitation of children. Now, that has been proven by a jury."
"As an organization that fights to protect children from Big Tech's deadly business model, Fairplay thanks Attorney General Torrez for his leadership in taking Meta to court," Golin continued. "Between this case and the ongoing trial in Los Angeles, parents, survivors, and state officials are doing their part to hold Big Tech accountable. Now, it's time for our leaders in the US Congress to get off the sidelines and pass the Senate's version of the Kids Online Safety Act to force these companies to change their addictive and dangerous product designs."
As Common Dreams has reported, while a diverse coalition supports the Kids Online Safety Act, civil rights groups have also expressed concerns about the legislation. Jenna Leventoff, senior policy counsel at the ACLU, warned last year that "the overbroad language in KOSA and similar legislation risks censoring everything from jokes and hyperbole to useful information about sex ed and suicide prevention."
Amid celebrations over the New Mexico jury's decision on Tuesday, Meta said in a statement that "we respectfully disagree with the verdict and will appeal. We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
NBC News noted that "separately, Meta is facing thousands of lawsuits accusing it and other social media companies of intentionally designing their products to be addictive to young people, leading to a nationwide mental health crisis. Some of the lawsuits, which have been filed in both state and federal courts, seek damages in the tens of billions of dollars, according to Meta’s filings with financial regulators."
"AI toys are not safe for kids," said a spokesperson for the children's advocacy group Fairplay. "They disrupt children's relationships, invade family privacy, displace key learning activities, and more."
As scrutiny of the dangers of artificial intelligence technology increases, Mattel is delaying the release of a toy collaboration it had planned with OpenAI for the holiday season, and children’s advocates hope the company will scrap the project for good.
The $6 billion company behind Barbie and Hot Wheels announced a partnership with OpenAI in June, promising, with little detail, to collaborate on "AI-powered products and experiences" to hit US shelves later in the year, an announcement that was met with fear about potential dangers to developing minds.
At the time, Robert Weissman, the president of the consumer advocacy group Public Citizen, warned: “Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children. It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."
In November, dozens of child development experts and organizations signed an advisory from the group Fairplay warning parents not to buy the plushies, dolls, action figures, and robots that were coming embedded with "the very same AI systems that have produced unsafe, confusing, or harmful experiences for older kids and teens, including urging them to self harm or take their own lives."
In addition to fears about stunted emotional development, they said the toys also posed security risks: "Using audio, video, and even facial or gesture recognition, AI toys record and analyze sensitive family information even when they appear to be off... Companies can then use or sell this data to make the toys more addictive, push paid upgrades, or fuel targeted advertising directed at children."
The warnings have proved prescient in the months after Mattel's partnership was announced. As Victor Tangermann wrote for Futurism:
Toy makers have unleashed a flood of AI toys that have already been caught telling tykes how to find knives, light fires with matches, and giving crash courses in sexual fetishes.
Most recently, tests found that an AI toy from China is regaling children with Chinese Communist Party talking points, telling them that “Taiwan is an inalienable part of China” and defending the honor of the country’s president Xi Jinping.
As these horror stories rolled in, Mattel went silent for months on the future of its collaboration with Sam Altman's AI juggernaut. That is, until Monday, when it told Axios that the still-ill-defined product's rollout had been delayed.
A spokesperson for OpenAI confirmed, "We don't have anything planned for the holiday season," and added that when a product finally comes out, it will be aimed at older teenagers rather than young children.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, praised Mattel's decision to delay the release: "Given the threat that AI poses to children’s development, not to mention their safety and privacy, such caution is more than warranted," she said.
But she added that merely putting the rollout of AI toys on pause was not enough.
"We urge Mattel to make this delay permanent. AI toys are not safe for kids. They disrupt children's relationships, invade family privacy, displace key learning activities, and more," Franz said. "Mattel has an opportunity to be a real leader here—not in the race to the bottom to hook kids on AI—but in putting children’s needs first and scrapping its plans for AI toys altogether.”
"Online platforms use sophisticated and opaque techniques of data collection that endanger young people and put their healthy development at risk," said one children's advocate.
Child welfare advocates renewed calls for U.S. lawmakers to pass a pair of controversial bills aimed at protecting youth from Big Tech's "dangerous and unacceptable business practices" after the Federal Trade Commission published a report Thursday detailing how social media and streaming companies endanger children and teens who use their platforms.
The FTC staff report—entitled A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services—"shows how the tech industry's monetization of personal data has created a market for commercial surveillance, especially via social media and video streaming services, with inadequate guardrails to protect consumers."
The agency staff examined the practices of Meta platforms, which include Facebook, Instagram, and WhatsApp; YouTube; X, formerly known as Twitter; Snapchat; Reddit; Discord; Amazon, which owns the gaming site Twitch; and ByteDance, the owner of TikTok.
"The report finds that these companies engaged in mass data collection of their users and—in some cases—nonusers," Bureau of Consumer Protection Director Samuel Levine said in the paper. "It reveals that many companies failed to implement adequate safeguards against privacy risks. It sheds light on how companies used our personal data, from serving hypergranular targeted advertisements to powering algorithms that shape the content we see, often with the goal of keeping us hooked on using the service."
The publication "also finds that these practices pose unique risks to children and teens, with the companies having done little to respond effectively to the documented concerns that policymakers, psychologists, and parents have expressed over young people's physical and mental well-being."
FTC Chair Lina Khan said in a statement that "the report lays out how social media and video streaming companies harvest an enormous amount of Americans' personal data and monetize it to the tune of billions of dollars a year."
"While lucrative for the companies, these surveillance practices can endanger people's privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking," she added.
Researchers at Boston Children's Hospital and Harvard University published an analysis last December that revealed social media companies made nearly $11 billion in 2022 advertising revenue from U.S.-based users younger than 18.
According to the FTC report:
While the use of social media and digital technology can provide many positive opportunities for self-directed learning, forming community, and reducing isolation, it also has been associated with harms to physical and mental health, including through exposure to bullying, online harassment, child sexual exploitation, and exposure to content that may exacerbate mental health issues, such as the promotion of eating disorders, among other things.
The publication also flags "algorithms that may prioritize certain forms of harmful content, such as dangerous online challenges."
The report accuses social media companies of "willful blindness around child users" by claiming that there are no children on their platforms because their sites do not allow them to create accounts. This may constitute an attempt by the companies to avoid legal liability under the Children's Online Privacy Protection Act Rule (COPPA). Last December, Khan proposed sweeping changes to COPPA to address the issue.
Josh Golin, executive director of Fairplay—a nonprofit organization "committed to helping children thrive in an increasingly commercialized, screen-obsessed culture"—said in a statement that "this report from the FTC is yet more proof that Big Tech's business model is harmful to children and teens."
"Online platforms use sophisticated and opaque techniques of data collection that endanger young people and put their healthy development at risk," Golin added. "We thank the FTC for listening to the concerns raised by Fairplay and a coalition of advocacy groups, and we call on Congress to pass COPPA 2.0, the Children and Teens' Online Privacy Protection Act, and KOSA, the Kids Online Safety Act, to better safeguard our children from these companies' dangerous and unacceptable business practices."
On Wednesday, the House Energy and Commerce Committee voted to advance COPPA 2.0 and KOSA, both of which were overwhelmingly passed by the Senate in July.
However, rights groups including the ACLU condemned KOSA, which the civil liberties organization warned "would violate the First Amendment by enabling the federal government to dictate what information people can access online and encourage social media platforms to censor protected speech."
In May 2023, U.S. Surgeon General Dr. Vivek Murthy issued an advisory on "the growing concerns about the effects of social media on youth mental health."
The White House simultaneously announced the creation of a federal task force "to advance the health, safety, and privacy of minors online with particular attention to preventing and mitigating the adverse health effects of online platforms."
Murthy has also called for tobacco-like warning labels on social media to address the platforms' possible harms to children and teens.
According to a study published in January by the corporate power watchdog Ekō, in just one week that month there were more than 33 million posts on TikTok and Meta-owned Instagram "under hashtags housing problematic content directed at young users," including suicide, eating disorders, skin-whitening, and so-called "involuntary celibacy."