



"AI toys are not safe for kids," said a spokesperson for the children's advocacy group Fairplay. "They disrupt children's relationships, invade family privacy, displace key learning activities, and more."
As scrutiny of the dangers of artificial intelligence technology increases, Mattel is delaying the release of a toy collaboration it had planned with OpenAI for the holiday season, and children’s advocates hope the company will scrap the project for good.
The $6 billion company behind Barbie and Hot Wheels announced a partnership with OpenAI in June, promising, with little detail, to collaborate on "AI-powered products and experiences" to hit US shelves later in the year, an announcement that was met with fear about potential dangers to developing minds.
At the time, Robert Weissman, the president of the consumer advocacy group Public Citizen, warned: “Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children. It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."
In November, dozens of child development experts and organizations signed an advisory from the group Fairplay warning parents not to buy the plushies, dolls, action figures, and robots that were coming embedded with "the very same AI systems that have produced unsafe, confusing, or harmful experiences for older kids and teens, including urging them to self harm or take their own lives."
In addition to fears about stunted emotional development, they said the toys also posed security risks: "Using audio, video, and even facial or gesture recognition, AI toys record and analyze sensitive family information even when they appear to be off... Companies can then use or sell this data to make the toys more addictive, push paid upgrades, or fuel targeted advertising directed at children."
The warnings have proved prescient in the months after Mattel's partnership was announced. As Victor Tangermann wrote for Futurism:
Toy makers have unleashed a flood of AI toys that have already been caught telling tykes how to find knives, light fires with matches, and giving crash courses in sexual fetishes.
Most recently, tests found that an AI toy from China is regaling children with Chinese Communist Party talking points, telling them that “Taiwan is an inalienable part of China” and defending the honor of the country’s president Xi Jinping.
As these horror stories rolled in, Mattel went silent for months on the future of its collaboration with Sam Altman's AI juggernaut. That is, until Monday, when it told Axios that the still-ill-defined product's rollout had been delayed.
A spokesperson for OpenAI confirmed, "We don't have anything planned for the holiday season," and added that when a product finally comes out, it will be aimed at older teenagers rather than young children.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, praised Mattel's decision to delay the release: "Given the threat that AI poses to children’s development, not to mention their safety and privacy, such caution is more than warranted," she said.
But she added that merely putting the rollout of AI toys on pause was not enough.
"We urge Mattel to make this delay permanent. AI toys are not safe for kids. They disrupt children's relationships, invade family privacy, displace key learning activities, and more," Franz said. "Mattel has an opportunity to be a real leader here—not in the race to the bottom to hook kids on AI—but in putting children’s needs first and scrapping its plans for AI toys altogether.”
Amnesty International says Big Tech's consolidation of power "has profound implications for human rights, particularly the rights to privacy, nondiscrimination, and access to information."
One of the world's leading human rights groups, Amnesty International, is calling on governments worldwide to "break up with Big Tech" by reining in the growing influence of tech and social media giants.
A report published Thursday by Amnesty highlights five tech companies: Alphabet (Google), Meta, Microsoft, Amazon, and Apple. Hannah Storey, an advocacy and policy adviser on technology and human rights at Amnesty, describes them as "digital landlords who determine the shape and form of our online interaction."
These five companies collectively have billions of active users, which the report says makes them akin to "utility providers."
"This concentration of power," the report says, "has profound implications for human rights, particularly the rights to privacy, nondiscrimination, and access to information."
The report emphasizes the "pervasive surveillance" by Google and Meta, which profit from "harvesting and monetizing vast quantities of our personal data."
"The more data they collect, the more dominant they become, and the harder it is for competitors to challenge their position," the report says. "The result is a digital ecosystem where users have little meaningful choice or control over how their data is used."
Meanwhile, Google's YouTube, as well as Facebook and Instagram—two Meta products—function using algorithms "optimized for engagement and profit," which emphasize content meant to provoke strong emotions and outrage from users.
"In an increasingly polarized context," the report says, "this can contribute to the rapid spread of discriminatory speech and even incitement to violence, which has had devastating consequences in several crisis and conflict-affected areas."
The report notes several areas around the globe where social media algorithms amplified ethnic hatred. It cites past research showing how Facebook's algorithm helped to "supercharge" dehumanizing rhetoric that fueled the ethnic cleansing of the Rohingya in Myanmar and the violence in Ethiopia's Tigray War.
More broadly, it says, the ubiquity of these tech companies in users' lives gives them outsized influence over access to information.
"Social media platforms shape what millions of people see online, often through opaque algorithms that prioritize engagement over accuracy or diversity," it says. "Documented cases of content removal, inconsistent moderation, and algorithmic bias highlight the dangers of allowing a handful of companies to act as gatekeepers of the digital public sphere."
Amnesty argues that international human rights law requires governments worldwide to intervene to protect their people from abuses by tech companies.
"States and competition authorities should use competition laws as part of their human rights toolbox," it says. "States should investigate and sanction anti-competitive behaviours that harm human rights, prevent regulatory capture, and prevent harmful monopolies from forming."
Amnesty also calls on these states to consider the possible human rights impacts of artificial intelligence, which it describes as the "next phase" of Big Tech's growing dominance, with Microsoft, Amazon, and Google alone controlling 60% of the global cloud computing market.
"Addressing this dominance is critical, not only as a matter of market fairness but as a pressing human rights issue," Storey said. "Breaking up these tech oligarchies will help create an online environment that is fair and just."
DOGE officials have been responsible for "serious data security lapses" that risk the safety "of over 300 million Americans' Social Security data," the whistleblower complaint said.
A new whistleblower complaint is alleging that employees of the Department of Government Efficiency put Americans' Social Security data at risk by uploading it to a cloud server that was vulnerable to hacking.
The complaint, filed by the Government Accountability Project on behalf of Social Security Administration (SSA) chief data officer Charles Borges, alleges that Department of Government Efficiency (DOGE) officials have been responsible for "serious data security lapses" that "risk the security of over 300 million Americans' Social Security data."
The report contends that Borges has evidence of a wide array of wrongdoing by DOGE employees, including "apparent systemic data security violations, uninhibited administrative access to highly sensitive production environments, and potential violations of internal SSA security protocols and federal privacy laws by DOGE personnel."
At the heart of Borges's complaint is an effort by DOGE employees to make "a live copy of the country's Social Security information in a cloud environment" that "apparently lacks any security oversight from SSA or tracking to determine who is accessing or has accessed the copy of this data."
Should hackers gain access to this copy of Social Security data, the report warns, it could result in identity theft on an unprecedented scale and lead to the loss of crucial food and healthcare benefits for millions of Americans. The report states that the government may also have to give every American a new Social Security number "at great cost."
As noted by The New York Times, Borges did not document any confirmed breaches of the cloud system set up by the DOGE employees, but he did say that there have been "no verified audit or oversight mechanisms" to monitor DOGE's use of the data.
Andrea Meza, director of campaigns for Government Accountability Project and attorney for Borges, said that her client felt he could not remain silent given the risk to Americans' personal information.
"Mr. Borges raised concerns to his supervisors about his discovery of a disturbing pattern of questionable and risky security access and administrative misconduct that impacts some of the public's most sensitive data," she said. "Out of a sense of urgency and duty to the American public, he is now raising the alarm to Congress and the Office of Special Counsel, urging them to engage in immediate oversight to address these serious concerns."
While DOGE was established with the stated goal of protecting Americans from waste and fraud in the US government—including at the SSA, which President Donald Trump has baselessly claimed wrongly sent benefits to hundreds of thousands of undocumented immigrants—former Labor Secretary Robert Reich said DOGE is "potentially exposing Americans to more" fraud.
Alex Lawson, executive director of the advocacy organization Social Security Works, blasted DOGE and its former leader, Tesla and SpaceX owner Elon Musk, for what he described as blatant theft.
"Elon Musk and his DOGE minions stole the American people's private Social Security data," said Lawson. "This was no accident. They come from Silicon Valley, where tech bros are furiously competing to see whose AI can gobble up the most data. Musk's nearly $300 million in contributions to Trump's campaign, along with buying Twitter and making it a de facto Trump campaign apparatus, were an investment—and now all of us are paying the price."
The official Social Security Works account on X delivered a terse three-word response to the whistleblower report: "This is criminal."