

Just four major data broker breaches in recent years have cost US consumers over $20 billion, according to a Thursday report from a key leader in Congress that argues "additional action is needed to protect Americans from scams."
Sen. Maggie Hassan (D-N.H.), ranking member of the congressional Joint Economic Committee (JEC), launched a sweeping investigation into financial scams last July. As part of it, she's examined data brokers, which collect and sell individuals' personal information. These companies often operate with limited transparency, her report explains, making it "more difficult for individuals to secure their information online and, ultimately, protect themselves from the growing threat of scams."
"Data brokers, for example, can enable scams by making consumers' personal information available to bad actors, who can then use details like Social Security numbers, home addresses, or banking information to develop customized and convincing scams," the report notes. "In some cases, data brokers have allegedly sold this information directly to scammers; in others, cyber hacks of data brokers have exposed individuals' data to uncontrolled circulation online."
Last August, after Wired reported that some data brokers took steps to hide their opt-out pages, Hassan issued investigative requests to Comscore, Findem, IQVIA Digital, Telesign, and 6Sense Insights. The report states that all of the companies but Findem responded with "actions to make their opt-out options more accessible to consumers and other parties," which "included removing 'no index' code that had blocked opt-out pages from search engine results, adding opt-out links in more prominent locations, and publishing blog content explaining how people can exercise their privacy rights."
"Notably," the report continues, "Findem did not respond to the ranking member's requests or written outreach from committee staff and has not removed the 'no index' code from its opt-out page—raising serious concerns about its responsiveness to opt-out requests and commitment to data privacy."
While recognizing the other companies for their positive responses, Hassan's report also stresses that more must be done. For instance, she requested information about efforts "to audit or assess the visibility of opt-out options or the success rates of opt-out requests," and "only 6sense stated that it contracts with third-party auditors to conduct both of these assessments."
Highlighting the need for further action, Hassan's staff estimated that identity theft stemming from four large data broker breaches—Equifax in 2017, impacting 147 million US residents; Exactis in 2018, impacting 230 million; National Public Data in 2023, impacting 270 million; and TransUnion in 2025, impacting 4.4 million—cost American consumers $20.9 billion.
"As international criminal syndicates increasingly use scams to target Americans, data brokers shouldn't make it harder for people to protect themselves," Hassan said in a statement. "This report shows the scope of the threat that people face from data broker breaches and underscores the importance of protecting Americans' private data."
She added that "it is encouraging that after we launched our investigation, many companies took steps to improve opt-out options for Americans, which in turn can help more consumers keep their information out of the wrong hands."
As a related webpage from the Electronic Privacy Information Center details: "There is no federal law in the United States that adequately regulates the data broker industry. As a result, private companies invade our private lives, spy on our families, and gather our most intimate facts, on a mass scale, for profit. EPIC supports state and federal legislative efforts that set limits on data brokers’ collection, use, retention, and disclosure of personal data."
In recent years, members of Congress have introduced various legislative proposals aimed at reining in data brokers—including the Security and Freedom Enhancement (SAFE) Act, introduced on Monday. The bipartisan bill would, among other things, close the so-called "data broker loophole" that, as Sens. Dick Durbin (D-Ill.) and Mike Lee (R-Utah) put it, "intelligence and law enforcement agencies use to buy their way around the Fourth Amendment" to the US Constitution.
Some limits have been enacted, including the Protecting Americans' Data from Foreign Adversaries Act of 2024 (PADFAA). Earlier this month, the Federal Trade Commission sent letters reminding 13 companies of their obligations to comply with the law, which "prohibits data brokers from selling, licensing, renting, trading, transferring, releasing, disclosing, providing access to, or otherwise making available personally identifiable sensitive data of a United States individual to any foreign adversary country or any entity that is controlled by a foreign adversary."
However, as Lartease Tiffith, an expert at American and George Mason universities, laid out in an article for Just Security last November, while Congress enacted the PADFAA "with the right goal," the law, as written, "could penalize legitimate US companies for routine global operations while failing to deliver the targeted national security tool Congress intended."
“Sharing this private taxpayer data creates chaos, and as we’ve seen this past year, if federal agents use this private information to track down individuals, it can endanger lives.”
Privacy officials at the Internal Revenue Service were sidelined in discussions last year about the Department of Homeland Security's demand for taxpayer data about people the Trump administration believed were not authorized to be in the US, and a court filing by the IRS Wednesday may have illustrated some of the officials' worst fears about the plan.
According to a sworn declaration by Dottie Romo, the chief risk and control officer at the IRS, the agency improperly shared private taxpayer data on thousands of people with immigration enforcement officers.
The data was shared, the Washington Post reported, even in cases in which DHS officials could not provide data needed to positively identify a specific individual.
Two federal courts have preliminarily found that the IRS and DHS acted unlawfully when they moved forward with the plan to share taxpayer addresses and have blocked the agencies from continuing the arrangement. A third case filed by Public Citizen Litigation Group, Alan Morrison, and Raise the Floor Alliance is on appeal in the DC Circuit.
But before the agreement was enjoined by the courts, DHS requested the addresses of 1.2 million people from the IRS, and the tax agency sent data on 47,000 people in response.
Thousands of people's confidential data was erroneously included in the release, sources who were familiar with the matter told the Post.
Despite Romo's sworn statement acknowledging the agencies' error, a DHS spokesperson continued to defend the data sharing agreement, telling the Post that “the government is finally doing what it should have all along.”
“Information sharing across agencies is essential to identify who is in our country, including violent criminals, determine what public safety and terror threats may exist so we can neutralize them, scrub these individuals from voter rolls, and identify what public benefits these aliens are using at taxpayer expense,” the spokesperson told the newspaper. “With the IRS information specifically, DHS plans to focus on enforcing long-neglected criminal laws that apply to illegal aliens."
Records have shown that a large majority of people who have been arrested by US Immigration and Customs Enforcement and other federal agents since President Donald Trump began his mass deportation and detention campaign have not had criminal records, despite the administration's persistent claims that officers are arresting "the worst of the worst" violent criminals.
Undocumented immigrants are also statistically less likely than citizens to commit crimes, and have not been found to illegally participate in US elections.
When DHS initially asked for taxpayer data last year, IRS employees denounced the request as "Nixonian" and warned that a data sharing arrangement would be illegal. Providing taxpayer information to third parties is punishable by civil and criminal penalties, and an IRS contractor, Charles Littlejohn, was sentenced to five years in prison after pleading guilty in 2023 to leaking the tax returns of Trump and other wealthy people.
Trump has sued the IRS for $10 billion in damages due to the leak.
Romo's Wednesday declaration did not state whether the IRS would inform individuals whose confidential data was sent to immigration officials; those affected could be entitled to financial compensation.
Dean Baker, senior economist at the Center for Economic and Policy Research, noted that judging from Trump's lawsuit against the IRS, "thousands of trillions of dollars" should be paid to those affected by the data breach.
Lisa Gilbert, co-president of Public Citizen, said the "breach of confidential information was part of the reason we filed our lawsuit in the first place."
"Sharing this private taxpayer data creates chaos," she said, "and as we’ve seen this past year, if federal agents use this private information to track down individuals, it can endanger lives.”
"AI toys are not safe for kids," said a spokesperson for the children's advocacy group Fairplay. "They disrupt children's relationships, invade family privacy, displace key learning activities, and more."
As scrutiny of the dangers of artificial intelligence technology increases, Mattel is delaying the release of a toy collaboration it had planned with OpenAI for the holiday season, and children’s advocates hope the company will scrap the project for good.
The $6 billion company behind Barbie and Hot Wheels announced a partnership with OpenAI in June, promising, with little detail, to collaborate on "AI-powered products and experiences" to hit US shelves later in the year. The announcement was met with fear about potential dangers to developing minds.
At the time, Robert Weissman, the president of the consumer advocacy group Public Citizen, warned: “Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children. It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."
In November, dozens of child development experts and organizations signed an advisory from the group Fairplay warning parents not to buy plushies, dolls, action figures, and robots that come embedded with "the very same AI systems that have produced unsafe, confusing, or harmful experiences for older kids and teens, including urging them to self-harm or take their own lives."
In addition to fears about stunted emotional development, they said the toys also posed security risks: "Using audio, video, and even facial or gesture recognition, AI toys record and analyze sensitive family information even when they appear to be off... Companies can then use or sell this data to make the toys more addictive, push paid upgrades, or fuel targeted advertising directed at children."
The warnings have proved prescient in the months after Mattel's partnership was announced. As Victor Tangermann wrote for Futurism:
Toy makers have unleashed a flood of AI toys that have already been caught telling tykes how to find knives, light fires with matches, and giving crash courses in sexual fetishes.
Most recently, tests found that an AI toy from China is regaling children with Chinese Communist Party talking points, telling them that “Taiwan is an inalienable part of China” and defending the honor of the country’s president Xi Jinping.
As these horror stories rolled in, Mattel went silent for months on the future of its collaboration with Sam Altman's AI juggernaut. That is, until Monday, when it told Axios that the still-ill-defined product's rollout had been delayed.
A spokesperson for OpenAI confirmed, "We don't have anything planned for the holiday season," and added that when a product finally comes out, it will be aimed at older teenagers rather than young children.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, praised Mattel's decision to delay the release: "Given the threat that AI poses to children’s development, not to mention their safety and privacy, such caution is more than warranted," she said.
But she added that merely putting the rollout of AI toys on pause was not enough.
"We urge Mattel to make this delay permanent. AI toys are not safe for kids. They disrupt children's relationships, invade family privacy, displace key learning activities, and more," Franz said. "Mattel has an opportunity to be a real leader here—not in the race to the bottom to hook kids on AI—but in putting children’s needs first and scrapping its plans for AI toys altogether.”