

“Sharing this private taxpayer data creates chaos, and as we’ve seen this past year, if federal agents use this private information to track down individuals, it can endanger lives.”
Privacy officials at the Internal Revenue Service were sidelined in discussions last year about the Department of Homeland Security's demand for taxpayer data about people the Trump administration believed were not authorized to be in the US, and a court filing by the IRS Wednesday may have illustrated some of the officials' worst fears about the plan.
According to a sworn declaration by Dottie Romo, the chief risk and control officer at the IRS, the agency improperly shared private taxpayer data on thousands of people with immigration enforcement officers.
The data was shared, the Washington Post reported, even in cases in which DHS officials could not provide data needed to positively identify a specific individual.
Two federal courts have preliminarily found that the IRS and DHS acted unlawfully when they moved forward with the plan to share taxpayer addresses and have blocked the agencies from continuing the arrangement. A third case filed by Public Citizen Litigation Group, Alan Morrison, and Raise the Floor Alliance is on appeal in the DC Circuit.
But before the agreement was enjoined by the courts, DHS requested the addresses of 1.2 million people from the IRS, and the tax agency sent data on 47,000 people in response.
Thousands of people's confidential data was erroneously included in the release, sources who were familiar with the matter told the Post.
Despite Romo's sworn statement that the agencies had made an error, a DHS spokesperson continued to defend the data sharing agreement, telling the Post that “the government is finally doing what it should have all along.”
“Information sharing across agencies is essential to identify who is in our country, including violent criminals, determine what public safety and terror threats may exist so we can neutralize them, scrub these individuals from voter rolls, and identify what public benefits these aliens are using at taxpayer expense,” the spokesperson told the newspaper. “With the IRS information specifically, DHS plans to focus on enforcing long-neglected criminal laws that apply to illegal aliens."
Records have shown that a large majority of people who have been arrested by US Immigration and Customs Enforcement and other federal agents since President Donald Trump began his mass deportation and detention campaign have not had criminal records, despite the administration's persistent claims that officers are arresting "the worst of the worst" violent criminals.
Undocumented immigrants are also statistically less likely than citizens to commit crimes, and have not been found to attempt to participate in US elections illegally.
When DHS initially asked for taxpayer data last year, IRS employees denounced the request as "Nixonian" and warned that a data sharing arrangement would be illegal. Providing taxpayer information to third parties is punishable by civil and criminal penalties, and an IRS contractor, Charles Littlejohn, was sentenced to five years in prison after pleading guilty in 2023 to leaking the tax returns of Trump and other wealthy people.
Trump has sued the IRS for $10 billion in damages over the leak.
Romo on Wednesday did not state whether the IRS would inform individuals whose confidential data was sent to immigration officials; they could be entitled to financial compensation.
Dean Baker, senior economist at the Center for Economic and Policy Research, noted that judging from Trump's lawsuit against the IRS, "thousands of trillions of dollars" should be paid to those affected by the data breach.
Lisa Gilbert, co-president of Public Citizen, said the "breach of confidential information was part of the reason we filed our lawsuit in the first place."
“Sharing this private taxpayer data creates chaos,” she said, “and as we’ve seen this past year, if federal agents use this private information to track down individuals, it can endanger lives.”
"AI toys are not safe for kids," said a spokesperson for the children's advocacy group Fairplay. "They disrupt children's relationships, invade family privacy, displace key learning activities, and more."
As scrutiny of the dangers of artificial intelligence technology increases, Mattel is delaying the release of a toy collaboration it had planned with OpenAI for the holiday season, and children’s advocates hope the company will scrap the project for good.
The $6 billion company behind Barbie and Hot Wheels announced a partnership with OpenAI in June, promising, with little detail, to collaborate on "AI-powered products and experiences" to hit US shelves later in the year, an announcement that was met with fear about potential dangers to developing minds.
At the time, Robert Weissman, the president of the consumer advocacy group Public Citizen, warned: “Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children. It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."
In November, dozens of child development experts and organizations signed an advisory from the group Fairplay warning parents not to buy the plushies, dolls, action figures, and robots that were coming embedded with "the very same AI systems that have produced unsafe, confusing, or harmful experiences for older kids and teens, including urging them to self harm or take their own lives."
In addition to fears about stunted emotional development, they said the toys also posed security risks: "Using audio, video, and even facial or gesture recognition, AI toys record and analyze sensitive family information even when they appear to be off... Companies can then use or sell this data to make the toys more addictive, push paid upgrades, or fuel targeted advertising directed at children."
The warnings have proved prescient in the months after Mattel's partnership was announced. As Victor Tangermann wrote for Futurism:
Toy makers have unleashed a flood of AI toys that have already been caught telling tykes how to find knives, light fires with matches, and giving crash courses in sexual fetishes.
Most recently, tests found that an AI toy from China is regaling children with Chinese Communist Party talking points, telling them that “Taiwan is an inalienable part of China” and defending the honor of the country’s president Xi Jinping.
As these horror stories rolled in, Mattel went silent for months on the future of its collaboration with Sam Altman's AI juggernaut. That is, until Monday, when it told Axios that the still-ill-defined product's rollout had been delayed.
A spokesperson for OpenAI confirmed, "We don't have anything planned for the holiday season," and added that when a product finally comes out, it will be aimed at older teenagers rather than young children.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, praised Mattel's decision to delay the release: "Given the threat that AI poses to children’s development, not to mention their safety and privacy, such caution is more than warranted," she said.
But she added that merely putting the rollout of AI toys on pause was not enough.
"We urge Mattel to make this delay permanent. AI toys are not safe for kids. They disrupt children's relationships, invade family privacy, displace key learning activities, and more," Franz said. "Mattel has an opportunity to be a real leader here—not in the race to the bottom to hook kids on AI—but in putting children’s needs first and scrapping its plans for AI toys altogether.”
Amnesty International says Big Tech's consolidation of power "has profound implications for human rights, particularly the rights to privacy, nondiscrimination, and access to information."
One of the world's leading human rights groups, Amnesty International, is calling on governments worldwide to "break up with Big Tech" by reining in the growing influence of tech and social media giants.
A report published Thursday by Amnesty highlights five tech companies: Alphabet (Google), Meta, Microsoft, Amazon, and Apple. Hannah Storey, an advocacy and policy adviser on technology and human rights at Amnesty, describes them as "digital landlords who determine the shape and form of our online interaction."
These five companies collectively have billions of active users, which the report says makes them akin to "utility providers."
"This concentration of power," the report says, "has profound implications for human rights, particularly the rights to privacy, nondiscrimination, and access to information."
The report emphasizes the "pervasive surveillance" by Google and Meta, which profit from "harvesting and monetizing vast quantities of our personal data."
"The more data they collect, the more dominant they become, and the harder it is for competitors to challenge their position," the report says. "The result is a digital ecosystem where users have little meaningful choice or control over how their data is used."
Meanwhile, Google's YouTube, as well as Facebook and Instagram—two Meta products—function using algorithms "optimized for engagement and profit," which emphasize content meant to provoke strong emotions and outrage from users.
"In an increasingly polarized context," the report says, "this can contribute to the rapid spread of discriminatory speech and even incitement to violence, which has had devastating consequences in several crisis and conflict-affected areas."
The report notes several areas around the globe where social media algorithms amplified ethnic hatred. It cites past research showing how Facebook's algorithm helped to "supercharge" dehumanizing rhetoric that fueled the ethnic cleansing of the Rohingya in Myanmar and the violence in Ethiopia's Tigray War.
More broadly, it says, the ubiquity of these tech companies in users' lives gives them outsized influence over access to information.
"Social media platforms shape what millions of people see online, often through opaque algorithms that prioritize engagement over accuracy or diversity," it says. "Documented cases of content removal, inconsistent moderation, and algorithmic bias highlight the dangers of allowing a handful of companies to act as gatekeepers of the digital public sphere."
Amnesty argues that international human rights law requires governments worldwide to intervene to protect their people from abuses by tech companies.
"States and competition authorities should use competition laws as part of their human rights toolbox," it says. "States should investigate and sanction anti-competitive behaviours that harm human rights, prevent regulatory capture, and prevent harmful monopolies from forming."
Amnesty also calls on these states to consider the possible human rights impacts of artificial intelligence, which it describes as the "next phase" of Big Tech's growing dominance, with Microsoft, Amazon, and Google alone controlling 60% of the global cloud computing market.
"Addressing this dominance is critical, not only as a matter of market fairness but as a pressing human rights issue," Storey said. "Breaking up these tech oligarchies will help create an online environment that is fair and just."