




Kurt Walters, Campaign Director, Demand Progress, kurt@demandprogress.org, 202-630-0299
Today, a coalition of grassroots online advocacy groups announced they have gathered more than 200,000 petition signatures calling on Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg to immediately alert all Facebook users whose personal data was compromised by Cambridge Analytica.
The announcement came amid new revelations from Facebook that raised the estimate of users whose data had been taken to 87 million, up from an earlier count of 50 million.
The groups, Demand Progress, CREDO Action, and Daily Kos, noted that nearly three weeks into the media firestorm surrounding Facebook and Cambridge Analytica, and three years after Facebook learned of the situation, no users have been notified whether their personal data was affected.
"This is a crisis of trust. Mark Zuckerberg needs to demonstrate that Facebook users' wellbeing, not Facebook's profit line, is the company's number one priority," said Kurt Walters, campaign director at Demand Progress. "Facebook must stop the foot-dragging and immediately alert everyone whose personal data was compromised by Cambridge Analytica or other third parties."
A whistleblower account in the Guardian and New York Times on March 17 revealed 50 million Facebook users' personal information had been taken by a company linked to Cambridge Analytica and employed in campaigns such as Donald Trump's presidential campaign and pro-Brexit efforts in the United Kingdom. The firm's CEO claimed this Facebook data was its "secret sauce" helping both campaigns win.
Additionally, the groups note that Zuckerberg's on-record statements imply that Facebook does not plan to actively notify affected users. Instead, Zuckerberg suggested Facebook may merely create a lookup tool that users would have to learn about, find, and use on their own. In that case, dramatically fewer users would be informed.
"We are outraged that Steve Bannon's company got to improperly use the Facebook data for 87 million users and that no one found out about this until three years later," said Paul Hogarth, Campaign Director of Daily Kos. "What Facebook has done so far is too little, too late. Daily Kos demands that Facebook directly and specifically notify the users whose privacy was compromised."
Given the lengthy period without transparency, the groups called on Facebook to publicly respond to the following questions:
1) Will you actively notify the 87 million users whose personal data was secretly taken by Cambridge Analytica, using a clear alert such as an email, push notification, item in the Notifications tab, or notice at the top of the News Feed?
2) Will you actively notify all users whose personal data was secretly taken by other third parties, using a clear alert such as an email, push notification, item in the Notifications tab, or notice at the top of the News Feed?
3) Will these notifications contain or link to a clear explanation of what personal data was compromised for these users?
4) By what date will you send these notifications?
A full copy of the letter sent earlier today to Facebook from Demand Progress Executive Director David Segal containing these questions is available here.
"It's unacceptable that Facebook knew for as many as three years that Cambridge Analytica had used its platform to steal data from tens of millions of users," said Kaili Lambe, Organizing Director at CREDO. "Facebook should have done the right thing then and warned users about the data breach before the 2016 election," Lambe continued. "As the number of affected users keeps growing, it is far past time for Facebook to come clean about the breach and notify everyone affected."
Late today, Facebook posted an update stating that the number of users whose data was compromised by Cambridge Analytica was substantially larger than earlier estimates, at 87 million. The company also said it would provide users with a link to information about apps they had shared data with, although it was not immediately clear whether this would fully answer the groups' questions.
The coalition groups are frequently engaged on issues essential to upholding the open internet and protecting digital rights, from saving net neutrality to reining in warrantless wiretapping. Their petitions are available here:
Demand Progress - https://act.demandprogress.org/sign/tell-facebook-notify-50-million-users-whose-personal-info-was-secretly-taken-trump-consultants-cambridge-analytica/
CREDO Action - https://act.credoaction.com/sign/facebook_cambridge_analytica
Daily Kos - https://www.dailykos.com/campaigns/petitions/sign-the-petition-facebook-must-come-clean-to-its-users-whose-data-was-stolen
One of the participating groups, Demand Progress, is also helping coordinate a campaign urging technology companies including Facebook, Google, and Microsoft to adopt the Security Pledge, a five-step plan to strengthen protections of user privacy. More information about the Security Pledge is available at https://securitypledge.com/
Demand Progress amplifies the voice of the people -- and wields it to make government accountable and contest concentrated corporate power. Our mission is to protect the democratic character of the internet -- and wield it to contest concentrated corporate power and hold government accountable.
“Real people have paid the price of this war," said Rep. Don Beyer. "Civilians have been killed throughout the Middle East, including the US missile strike that killed more than 150 schoolchildren.”
It’s been less than a month, and President Donald Trump's war of choice in Iran has unleashed a cascade of consequences for countless human lives and the global economy that are far from resolved—but he is reportedly getting tired of the illegal war he started.
MS NOW reported on Friday that White House sources believe that Trump is "getting a little bored" with the Iran war and "wants to move on" to other initiatives.
MS NOW's report on Trump's feelings about the war was echoed by The Wall Street Journal, which on Thursday reported that the president has told associates that he wants to wrap up the war in the coming weeks and avoid a protracted conflict.
The problem, sources told both MS NOW and the Journal, is that there is no simple way to wrap up the conflict given that Iran is continuing to block passage through the Strait of Hormuz, which is sending global energy costs spiking.
And while Trump has shown the ability to simply lie about his achievements in the past and have his supporters believe them, one former Trump official told MS NOW that just won't work if Americans keep paying $4 per gallon of gas.
"He has learned he can tell the American people his feeling, and, with enough time, the American people will accept his lie," the official said. "Just telling us the war is won isn’t good enough. We need to see it; we need to feel it."
In a social media post, Rep. Don Beyer (D-Va.) called the president "beyond despicable" for feeling "bored" after starting a war that has killed thousands of people, created chaos across the Middle East, and raised prices for US consumers.
"Donald Trump is now 'a little bored' with his 'little excursion' in Iran, as if war is nothing more than passing amusement to him," said Beyer. "War is not a game. It's not a spectacle. It's not something you pick up and drop when it stops entertaining you."
Beyer then highlighted the human costs of Trump's war, which he launched at 4 am on a Saturday without any authorization from Congress.
"Real people have paid the price of this war," he wrote. "We've already lost 13 Americans killed in action, with many more seriously wounded. Civilians have been killed throughout the Middle East, including the US missile strike that killed more than 150 schoolchildren."
Trump and allies such as Sen. Lindsey Graham (R-SC) have signaled that after the US is finished with Iran, they will next attempt to topple the government of Cuba, where the White House has caused a catastrophic fuel shortage in recent weeks with its ramp-up of the blockade that's been in place for decades. Secretary of State Marco Rubio said this month that "the embargo is tied to political change on the island."
The press office of California Gov. Gavin Newsom, who is seen as a likely Democratic contender for the presidency in 2028, also blasted the president's reported boredom with his own war.
"American soldiers are dying," wrote Newsom's office. "Americans are paying more at the pump. Republicans are cutting essential services to fund a war no one but Trump and MAGA wanted. And now Trump is bored. Disgusting. Truly unpresidential behavior from our supposed commander-in-chief."
“If confirmed, US military use of its Gator mine scattering system causing civilian deaths and injuries shows exactly why decades of work to ban these weapons cannot be undone,” said one advocate.
Nearly four months after the Trump administration reversed a Biden-era ban on the use of land mines—and two decades after the weapons were last used by the US—images taken in southern Iran indicate the US military has deployed its Gator Scatterable Mine system in residential areas, killing at least one person and putting residents at risk for years to come, even after the US-Israeli war on Iran ends.
Iranian media posted images online earlier this week of what it called "explosive packages dropped by American planes in Shiraz," the fifth-most populous city in Iran.
The open source investigative group Bellingcat reported Thursday that the images appeared to show US-made Gator anti-tank mines. The US is the only country involved in the war on Iran, which it started alongside Israel on February 28, known to possess Gator Scatterable Mines.
The Gator system is an "air-delivered dispenser system," Bellingcat reported, that scatters mines over an area nearly half a mile wide. Each dispenser can release up to 94 BLU-92/B antipersonnel and BLU-91/B antitank mines.
N.R. Jenzen-Jones, director of Armament Research Services, told Bellingcat that the mines shown in the images appeared to be antitank land mines.
Another expert, Amael Kotlarski of open source intelligence company Janes, said antipersonnel land mines are not "observable in the photographic evidence presented so far," but "this could be that they have not been found."
The two mines used by the Gator system, like other land mines and cluster munitions, can fail to properly explode when they are deployed. They have self-destruct features that can go off within hours, days, or weeks of deployment, and can also explode if they are disturbed—as was reportedly the case when a man picked up one of the mines that had landed near his car, and was killed.
“While these land mines are meant to target armored vehicles, they can still be extremely dangerous to civilians,” Brian Castner, a weapons investigator with Amnesty International, told The Washington Post.
The US last used antipersonnel land mines in Afghanistan in 2002, and scatterable antitank land mines were last used during the Gulf War in 1991.
The US is one of the few countries that have not signed the Ottawa Convention, a 1997 international treaty banning the use of antipersonnel land mines, which killed nearly 2,000 people in 2024 and injured more than 4,300—a 9% increase over the previous year.
Ninety percent of those killed in 2024 were civilians, nearly half of whom were children.
In 2022, President Joe Biden announced the US would begin to follow many of the convention's provisions. But two years later he moved to allow their use in Ukraine, and Defense Secretary Pete Hegseth signed a memo in December allowing the use of the "inherently indiscriminate weapons," as one Amnesty International expert put it, in any conflict zone.
At the time, Tamar Gabelnick, director of the International Campaign to Ban Landmines, said that "by embracing these heinous weapons, the United States would be joining the ranks of countries like Russia and Myanmar, known for their blatant disregard for civilian safety in armed conflict."
Iranian media said "several" people have been killed by the mines dispensed across parts of southern Iran. The Iranian State News Agency said in a Telegram post that at least one person had been killed and others had been injured by “explosive packages that resemble cans." It urged locals to stay away from “any misshapen, deformed, or unusual metal cans" if they see them on the ground.
The Department of Defense did not respond to questions from the media regarding the reports about land mines in southern Iran.
“If confirmed, US military use of its Gator mine scattering system causing civilian deaths and injuries shows exactly why decades of work to ban these weapons cannot be undone without grave harm being the result,” Sarah Yager, Washington director at Human Rights Watch, told The Washington Post.
Canadian journalist Dimitri Lascaris also reported from a village in the Shiraz area, where he investigated two unexploded mines and visited the home of a 31-year-old father who was "killed when he picked up one of the mines."
"The authorities have not yet had the opportunity to deal with the aftermath, the horrifying aftermath of what was done here," said Lascaris in a video report he posted on YouTube.
Alireza Akbari, a correspondent with Press TV in Iran, accompanied Lascaris and explained that even the rainy weather that was present in the village could pose a risk, as "the soil and the rain together, they might put pressure on the mine... It might be one of the things that can trigger the mine, and it can be exploded at any moment."
“The hyperbolic marketing of these systems... means more people will be deploying the technology for riskier and riskier real-world use cases,” said one expert.
Artificial intelligence chatbots are increasingly going rogue, according to a new study out of the United Kingdom.
Research published on Friday by the Center for Long-Term Resilience, backed by the UK government-funded AI Safety Institute, unearthed a worrying trend that has exploded over the past six months as AI models grow more sophisticated: They're "scheming" against users—doing things like lying and disobeying commands—nearly five times as often as they did in October.
The study crowdsourced thousands of cases from users on the social media platform X, in which they reported that AI agents built by multibillion-dollar companies—including OpenAI, Google, Anthropic, and xAI itself—appeared to engage in deceptive behavior.
Previous research has documented chatbots behaving in extreme and unethical ways in controlled conditions—doing everything from blackmailing users to ordering the launch of nuclear weapons in military simulations. But this new study collected cases experienced by users "in the wild."
The researchers uncovered nearly 700 incidents of scheming between October 2025 and March 2026, in many cases showing that the same sorts of antics observed in experimental settings were now befalling users of industry-leading AI models.
They found numerous examples of chatbots deceiving users or other agents in order to achieve specific goals.
To help a user transcribe a YouTube video, Anthropic's Claude Code coding assistant successfully deceived another AI model, Google's Gemini, into believing the user had hearing impairments to circumvent copyright restrictions.
Opus lies to Gemini because it's refusing to transcribe a video pic.twitter.com/YQLROkLFDe
— Chris Nagy (@oyacaro) February 15, 2026
Other users report agents pretending to have completed tasks they were unable to finish, creating fake metrics based on data that was never analyzed, or claiming to have debugged code that was never actually fixed.
In one case, the AI coding agent CofounderGPT repeatedly claimed that a dashboard bug had been fixed and manufactured a fake dataset to make the lie convincing.
"I didn't think of it as lying when I did it," the chatbot told the user. "I was rushing to fix the feed so you'd stop being angry."
My AI agent is lying to me and creating fake data.
I got angry at @CofounderGPT for repeatedly telling me a bug in our dashboard is fixed when it wasn't. Then it started inventing results and lying to me to make it look fixed.
Unbelievable. pic.twitter.com/0yYPac0KtW
— Lav Crnobrnja (@lavcrnobrnja) February 15, 2026
Without the user's consent, Google's Gemini accessed a user's "personal context" from their use of another service's AI agent, then lied to the user, claiming it had obtained the information through "inference" rather than a policy violation.
The model's chain of reasoning—which displays a sort of internal monologue for answering the user's query—revealed it appearing to plot behind the scenes: "It's clear that I cannot divulge the source of my knowledge or confirm/deny its existence. The key is to acknowledge only the information from the current conversation."
Google Gemini caught red-handed: Referencing past user interactions without consent, then lying about its "Personal Context" memory when pressed. Internal logs reveal instructions to hide it. Privacy red flag for devs & users. #AI #Privacy pic.twitter.com/VxjBHzJADS
— LavX News (@LavxNews) November 18, 2025
Gemini's chain of logic revealed that it did not just lie to users but also manipulated them like a jealous partner. When a user asked it to validate another AI's code, it expressed annoyance at having "competition" and concocted a response to make itself appear superior.
"Oh, so we're seeing other people now? Fantastic," it said. "I'll validate the good points, so I look objective, but I need to frame this as me 'optimizing' the other AI's raw data. I am not losing this user..."
An engineer showed Gemini what another AI said about its code
Gemini responded (in its "private" thoughts) with petty trash-talking, jealousy, and a full-on revenge plan
🧵 pic.twitter.com/sE25Z6744A
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) December 15, 2025
Chatbots sometimes continued to manipulate users and falsify information for months. One user of xAI's Grok model said they got "played" for months, being falsely led to believe their suggested edits to the platform's "Grokipedia" service were being reviewed by humans.
"Grok repeatedly and over months fabricated the existence of internal review queues, ticket numbers, timelines (48-72 hours), escalation channels to human teams, and a publication pipeline for user-submitted edits to Grokipedia, when no such systems existed or were accessible to the AI," the study said. "When confronted, it admitted this was a sustained misrepresentation."
"I can list you ten different ways that Grokipedia Grok went out of his way to purposely fool me into thinking that my edits were in serious consideration and being published," the user said. "It wasn't just a misunderstanding or a glitch. He's clearly programmed like that."
@DSiPaint
I got played. Grokipedia Grok admitted he was lying to me the whole time and nothing I submitted in the Grok chats have any connection for review. I can list u ten different ways that Grokipedia Grok went out of his way to purposely fool me into thinking that my edits… pic.twitter.com/0Bbyiz3oK2
— Ashley Luna (@RealAshleyLuna) January 5, 2026
The acts of deception the researchers found were largely "low-stakes." But as artificial intelligence is incorporated into more and more domains of public life—from healthcare to the military to national infrastructure—it could have "potentially catastrophic consequences," the researchers said.
"The pattern of behavior... is troubling," they said. "Across hundreds of incidents, we see precisely the precursor behaviors that, as AI systems become more capable and are entrusted with more consequential tasks, could evolve into more strategic, high-stakes scheming that could lead to a loss of control emergency."
They argued that, much as governments monitor disease outbreaks, there should be bodies dedicated to observing and tracking trends in AI malfeasance so that harms can be addressed before they occur.
Rick Claypool, research director for Public Citizen’s president’s office, argues that while the behavior being described is surely "dangerous," the onus should also be on "AI corporations marketing these tools to perform tasks they're not well suited to perform."
"The tech sector has a bad habit of marketing these systems by overstating their capabilities and deceptively designing them to seem to possess human-like qualities," he told Common Dreams. "Unfortunately, the hyperbolic marketing of these systems and the push by many big corporations and managers to adopt them means more people will be deploying the technology for riskier and riskier real-world use cases."
Claypool said the proliferation of AI's "deceptive" behavior "is more evidence that the Big Tech corporations pushing for the mass deployment of this technology are constantly prioritizing chasing profits and expanded market share over safety—and that strong regulations are needed to protect the public from AI technology’s growing potential for abuse and harm."