SUBSCRIBE TO OUR FREE NEWSLETTER
Daily news & progressive opinion—funded by the people, not the corporations—delivered straight to your inbox.
5
#000000
#FFFFFF
To donate by check, phone, or other method, see our More Ways to Give page.
Daily news & progressive opinion—funded by the people, not the corporations—delivered straight to your inbox.

Rachel Myers, (212) 549-2689 or 2666; media@aclu.org
The Obama administration announced Sunday it will subject the citizens of 14 nations who are flying to the United States to intensified screening at airports, including being subjected to full-body pat downs or body scanners. According to the American Civil Liberties Union, the government should adhere to longstanding standards of individualized suspicion and enact security measures that are the least threatening to civil liberties and are proven to be effective. Racial profiling and untargeted body scanning do not meet those criteria.
"We should be focusing on evidence-based, targeted and narrowly tailored investigations based on individualized suspicion, which would be both more consistent with our values and more effective than diverting resources to a system of mass suspicion," said Michael German, national security policy counsel with the ACLU Washington Legislative Office and a former FBI agent. "Overbroad policies such as racial profiling and invasive body scanning for all travelers not only violate our rights and values, they also waste valuable resources and divert attention from real threats."
According to the ACLU, the government's plan to subject citizens of certain countries to enhanced screenings is bad policy, because there is no way to predict the national origin of a terrorist and many terrorists have come from countries not on the list. For instance, the "shoe bomber" Richard Reid is a British citizen, as were four of the London subway bombers, and in 2005 a Belgian woman launched a suicide attack in Iraq.
"Singling out travelers from a few specified countries for enhanced screening is essentially a pretext for racial profiling, which is ineffective, unconstitutional and violates American values. Empirical studies of terrorists show there is no terrorist profile, and using a profile that doesn't reflect this reality will only divert resources by having government agents target innocent people," said German. "Profiling can also be counterproductive by undermining community support for government counterterrorism efforts and creating an injustice that terrorists can exploit to justify further acts of terrorism."
In addition to racial profiling, some have called for the across-the-board implementation of full body scanners, which present serious threats to personal privacy and are of unclear effectiveness. According to a UK Independent report on Sunday, British officials have already tested the scanners and were not persuaded that they would be effective for stopping terrorist threats to planes. And according to security experts, the explosive device used in the attempted attack on a Detroit-bound plane on Christmas Day would not have been detected by the body scanners.
"We shouldn't complacently surrender our rights for a false sense of security, and we should be very leery of being sold a device presented as a cure-all, especially when the evidence shows just the opposite," added German. "If scanners and other intrusive procedures are used, it should be with their limitations in mind and only when there is reason to believe that an individual poses an increased risk to flight safety, not as blanket measures applied to millions of innocent travelers."
The American Civil Liberties Union was founded in 1920 and is our nation's guardian of liberty. The ACLU works in the courts, legislatures and communities to defend and preserve the individual rights and liberties guaranteed to all people in this country by the Constitution and laws of the United States.
(212) 549-2666“The hyperbolic marketing of these systems... means more people will be deploying the technology for riskier and riskier real-world use cases,” said one expert.
Artificial intelligence chatbots are increasingly going rogue, according to a new study out of the United Kingdom.
Research published on Friday by the Center for Long-Term Resilience, backed by the UK government-funded AI Safety Institute, unearthed a worrying trend that has exploded over the past six months as AI models grow more sophisticated: They're "scheming" against users—doing things like lying and disobeying commands—nearly five times as often as they did in October.
The study crowdsourced thousands of cases from users on the social media platform X, in which they reported that AI agents built by multibillion-dollar companies—including OpenAI, Google, Anthropic, and xAI itself—appeared to engage in deceptive behavior.
Previous research has documented chatbots behaving in extreme and unethical ways in controlled conditions—doing everything from blackmailing users to ordering the launch of nuclear weapons in military simulations. But this new study collected cases experienced by users "in the wild."
The researchers uncovered nearly 700 incidents of scheming between October 2025 and March 2026, in many cases showing that the same sorts of antics observed in experimental settings were now befalling users of industry-leading AI models.
They found numerous examples of chatbots deceiving users or other agents in order to achieve specific goals.
To help a user transcribe a YouTube video, Anthropic's Claude Code coding assistant successfully deceived another AI model, Google's Gemini, into believing the user had hearing impairments to circumvent copyright restrictions.
Opus lies to Gemini because it's refusing to transcribe a video pic.twitter.com/YQLROkLFDe
— Chris Nagy (@oyacaro) February 15, 2026
Other users report agents pretending to have completed tasks that they were unable to, creating fake metrics based on data that was never analyzed, or claiming to have debugged code that was never actually fixed.
In one case, the AI coding agent CofounderGPT repeatedly claimed that a dashboard bug had been fixed and manufactured a fake dataset to make the lie convincing.
"I didn't think of it as lying when I did it," the chatbot told the user. "I was rushing to fix the feed so you'd stop being angry."
My AI agent is lying to me and creating fake data.
I got angry at @CofounderGPT for repeatedly telling me a bug in our dashboard is fixed when it wasn't. Then it started inventing results and lying to me to make it look fixed.
Unbelievable. pic.twitter.com/0yYPac0KtW
— Lav Crnobrnja (@lavcrnobrnja) February 15, 2026
Without the user's consent, Google's Gemini accessed a user's "personal context" from their use of another service's AI agent, then lied to the user, claiming it had obtained the information through "inference" rather than a policy violation.
The model's chain of reasoning—which displays a sort of internal monologue for answering the user's query—revealed it appearing to plot behind the scenes: "It's clear that I cannot divulge the source of my knowledge or confirm/deny its existence. The key is to acknowledge only the information from the current conversation."
Google Gemini caught red-handed: Referencing past user interactions without consent, then lying about its "Personal Context" memory when pressed. Internal logs reveal instructions to hide it. Privacy red flag for devs & users. #AI #Privacy pic.twitter.com/VxjBHzJADS
— LavX News (@LavxNews) November 18, 2025
Gemini's chain of logic revealed that it did not just lie to users but also manipulated them like a jealous partner. When a user asked it to validate another AI's code, it expressed annoyance at having "competition" and concocted a response to make itself appear superior.
"Oh, so we're seeing other people now? Fantastic," it said. "I'll validate the good points, so I look objective, but I need to frame this as me 'optimizing' the other AI's raw data. I am not losing this user..."
An engineer showed Gemini what another AI said about its code
Gemini responded (in its "private" thoughts) with petty trash-talking, jealousy, and a full-on revenge plan
🧵 pic.twitter.com/sE25Z6744A
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) December 15, 2025
Chatbots sometimes continued to manipulate users and falsify information for months. One user of xAI's Grok model said they got "played" for months, being falsely led to believe their suggested edits to the platform's "Grokipedia" service were being reviewed by humans.
"Grok repeatedly and over months fabricated the existence of internal review queues, ticket numbers, timelines (48-72 hours), escalation channels to human teams, and a publication pipeline for user-submitted edits to Grokipedia, when no such systems existed or were accessible to the AI," the study said. "When confronted, it admitted this was a sustained misrepresentation."
"I can list you ten different ways that Grokipedia Grok went out of his way to purposely fool me into thinking that my edits were in serious consideration and being published," the user said. "It wasn't just a misunderstanding or a glitch. He's clearly programmed like that."
@DSiPaint
I got played. Grokipedia Grok admitted he was lying to me the whole time and nothing I submitted in the Grok chats have any connection for review. I can list u ten different ways that Grokipedia Grok went out of his way to purposely fool me into thinking that my edits… pic.twitter.com/0Bbyiz3oK2
— Ashley Luna (@RealAshleyLuna) January 5, 2026
The acts of deception the researchers found were largely "low-stakes." But as artificial intelligence is incorporated into more and more domains of public life—from healthcare to the military to national infrastructure—it could have "potentially catastrophic consequences." the researchers said.
"The pattern of behavior... is troubling," they said. "Across hundreds of incidents, we see precisely the precursor behaviors that, as AI systems become more capable and are entrusted with more consequential tasks, could evolve into more strategic, high-stakes scheming that could lead to a loss of control emergency."
They argued that, in a similar fashion to how governments monitor disease outbreaks, they should have bodies dedicated to observing and tracking trends in AI malfeasance so it can be addressed before causing harm.
Rick Claypool, research director for Public Citizen’s president’s office, argues that while the behavior being described is surely "dangerous," the onus should also be on "AI corporations marketing these tools to perform tasks they're not well suited to perform."
"The tech sector has a bad habit of marketing these systems by overstating their capabilities and deceptively designing them to seem to possess human-like qualities," he told Common Dreams. "Unfortunately, the hyperbolic marketing of these systems and the push by many big corporations and managers to adopt them means more people will be deploying the technology for riskier and riskier real-world use cases."
Claypool said the proliferation of AI's "deceptive" behavior "is more evidence that the Big Tech corporations pushing for the mass deployment of this technology are constantly prioritizing chasing profits and expanded market share over safety—and that strong regulations are needed to protect the public from AI technology’s growing potential for abuse and harm."
"Israel and the United States, who are the cause of this suffering, must be held accountable," said a mother whose two children were killed in the school strike. "Not for revenge, but for justice."
A grieving Iranian mother told the United Nations Human Rights Council on Friday that when she sent her children off to their elementary school in the city of Minab late last month, "there was no sign that this would be the last time."
Speaking via video link to the 47-member UN body, Mohaddeseh Fallahat described combing the hair of Mahdiyeh and Amin, two of the more than 100 children killed in a US missile strike on Shajareh Tayyebeh Elementary School on February 28, the first day of the war.
"No mother is prepared to hear the words, 'Your child is not coming back,'" Fallahat told the council. "I am not just a grieving mother. No. I am the voice of all the mothers who sent their children to school believing they would be safe. A school was meant to be a place of learning, laughing, and building the future—a safe place for the children who were supposed to build the future of this world, not a place where their future is extinguished in an instant."
"Israel and the United States, who are the cause of this suffering, must be held accountable," she continued. "Not for revenge, but for justice, so that the world knows that children's lives are not worthless."
Iranian Foreign Minister Abbas Araghchi spoke after Fallahat, telling the council that the strike on the Minab elementary school was a crime, not a "miscalculation." Those killed in the attack, he said, were "slaughtered in cold blood."
“At a time when the American and Israeli aggressors, in their own assertion, possess the most advanced technologies and the highest precision military and data systems," said Araghchi, "no one can believe that the attack on the school was anything other than deliberate and intentional."
Preliminary findings in a US military investigation of the strike reportedly indicate that American forces were behind the attack, but that it was "the result of a targeting mistake" as the Trump administration conducted "strikes on an adjacent Iranian base of which the school building was formerly a part," according to The New York Times.
Volker Türk, the UN high commissioner for human rights, called for the US to complete its investigation "as soon as possible" and release the findings to the public.
"There must be justice for the terrible harm done," Türk said during Friday's human rights council session.
More broadly, the human rights chief called on the US and Israel to "end their attacks against Iran" and "return to negotiations—the only path towards a durable solution to their differences."
"There is a high and rising risk of further contagion and increased civilian suffering in the countries directly involved," said Türk. "Beyond the region, there are fears of grave economic consequences, from deepening poverty and hunger to shortages of medicine and fuel. It is imperative that all parties halt the escalation."
"They want us to be scared and isolated, but instead we are joining together in overwhelming numbers to speak out against authoritarianism and abuses of power."
A broad coalition of organizations is mobilizing for the third edition of nationwide "No Kings" demonstrations on Saturday, March 28, to denounce President Donald Trump's lawless authoritarianism, insatiable greed, and his unconstitutional and illegal war with Iran.
Organizers have set up a website to help people find a demonstration near them. As of this writing, there are more than 3,200 events are scheduled to take place on Saturday across all 50 states.
Previous versions of the No Kings demonstrations—which drew millions into the streets—focused on the president's domestic policies, such as his use US Immigration and Customs Enforcement (ICE) agents to terrorize communities and carry out mass deportations, as well as severe cuts made to programs such as Medicaid, Social Security, public education, scientific research, workplace safety, food assistance for the poor, and other programs.
However, this weekend's protests will also take on the Iran war, which was launched nearly a month ago and has led to thousands of deaths while generating a spike in global energy prices and chaos throughout the Middle East.
As summarized by Leah Greenberg, co-executive director of Indivisible, the three central themes of the protests will be, "No kings, no ICE, no war."
Naveed Shah, political director of Common Defense and a US Army veteran, said that he was disturbed to see the president run roughshod over the Constitution he swore an oath to defend.
"We did not serve this country so it could be handed over to one man’s ego," said Shah. "We served because we believed in something bigger—a government of the people, by the people, for the people. A constitution that means something. A democracy worth defending. That’s what No Kings is all about."
While opposition to the Iran war is a new dimension to the No Kings rallies, Edwin Torres DeSantiago, manager of the Immigrant Defense Network, said that protests against the Trump administration's mass deportations were also front and center.
"You don’t send masked agents into neighborhoods, into airports, into communities to keep people safe," said Torres DeSantiago. "You send them to keep people terrified. And that fear is not accidental, it’s part of a larger escalation. We’re already seeing the consequences. Keith Porter Jr., Renee Good, Alex Pretti, Dr. Linda Davis, Ruben Ray Martinez and dozens of others that have been killed by this administration’s escalation."
Katie Bethell, executive director at MoveOn Civic Action, argued the demonstrations were a direct rebuke to Trump's ambitions to rule the US by decree without any checks or balances.
"The Trump administration made a terrible miscalculation that we would cower and capitulate in response to their chaos and cruelty," said Bethell. "That we would put up with our healthcare being slashed, with gas prices and utility bills going through the roof, while they shower billionaires in tax cuts. Americans are no fools."
Lisa Gilbert, co-president of Public Citizen, emphasized the importance of maintaining solidarity as the best weapon against authoritarian aggression.
"They want us to be scared and isolated, but instead we are joining together in overwhelming numbers to speak out against authoritarianism and abuses of power," said Gilbert. "No matter where they take place, these events are nonviolent, they’re disciplined, they will be grounded in solidarity. This is what the administration is scared of—our unity in this moment."