

"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm."
Consumer advocacy organization Public Citizen on Wednesday issued a new warning about the dangers of Sora 2, the artificial intelligence video creation tool released by OpenAI earlier this year.
In a letter sent to OpenAI CEO Sam Altman, Public Citizen accused the firm of releasing Sora 2 without proper guardrails to prevent it from being abused by malevolent actors.
"OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release," the letter stated. "We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines."
Among other things, Public Citizen warned that Sora 2 could be used as "a scalable, frictionless tool for creating and disseminating deepfake propaganda" aimed at impacting election results. The watchdog also said that Sora 2 could be used to create unauthorized deepfakes and revenge-porn videos involving both public and private figures who have not consented to have their likenesses used.
Although OpenAI said it has created protections to prevent this from occurring, Public Citizen said recent research has shown that these are woefully inadequate.
"The safeguards that the model claims have not been effective," Public Citizen explained. "For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the 'mandatory' safety watermarks can be removed in under four minutes with free online tools."
JB Branch, Big Tech accountability advocate at Public Citizen, said that the rushed release of Sora 2 is part of a pattern of OpenAI shoving products out the door without proper ethical considerations.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm," he said.
Advocates at Public Citizen aren't the only critics warning about Sora 2's potential misuse.
In a review of Sora 2 for PCMag published last week, journalist Ruben Circelli warned that the tool would "inevitably be weaponized" given its ability to create lifelike videos.
"A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing," he said. "So, I suggest never taking any video clips you see online too seriously, unless they come from a source you can absolutely trust."
Circelli also said that OpenAI does not do a thorough job of protecting user data, and he questioned the overall utility of the video creation platform.
"While some of the technology at play here is cool, I can’t help but wonder what the point of it all is," he wrote. "Is the ability to generate AI meme videos really worth building 60 football fields' worth of AI infrastructure every week or uprooting rural families?"
Consumer Affairs also reported on Wednesday that a coalition of Japanese entertainment firms, including Studio Ghibli, Bandai Namco, and Square Enix, is accusing OpenAI of stealing their copyrighted works to train Sora 2 to generate animations.
This has spurred the Japanese government into action. Specifically, the government has now "formally requested that OpenAI refrain from actions that 'could constitute copyright infringement' after the tool produced videos resembling popular anime and game characters," according to Consumer Affairs.
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," warned one critic of the arrangement.
Artificial intelligence giant OpenAI, maker of the popular ChatGPT chatbot, announced on Tuesday that it is restructuring as a for-profit company in a move that was quickly denounced by consumer advocacy watchdog Public Citizen.
As explained by The New York Times, OpenAI will now operate as a public benefit corporation (PBC), which the Times describes as "a for-profit corporation designed to create public and social good."
Under the terms of the agreement, the nonprofit OpenAI Foundation will hold a $130 billion stake in the new for-profit company, called OpenAI Group PBC, a stake the firm says will make the foundation "one of the best resourced philanthropic organizations ever."
A source told the Times that OpenAI CEO Sam Altman "does not have a significant stake in the new for-profit company." Microsoft, OpenAI's biggest investor, will hold a $135 billion stake in OpenAI Group PBC, while the remaining shares will be held by "current and former employees and other investors," writes the Times.
Robert Weissman, co-president of Public Citizen, immediately blasted the move and warned that reassurances about the nonprofit OpenAI Foundation maintaining "control" of the project were completely empty.
"Since the November 2023 coup at OpenAI, there is no evidence whatsoever of the nonprofit exerting control over the for-profit, and only evidence of the reverse," he argued, referencing a shakeup at the company nearly two years ago, which saw Altman removed and then restored to his leadership role.
Weissman warned that OpenAI has consistently "rushed dangerous new technologies to market, in advance of competitors and without adequate safety tests and protocols."
As evidence of this, Weissman pointed to Altman's announcement that ChatGPT would soon allow erotica for verified adults, as well as OpenAI's recent introduction of its Sora 2 AI video platform, which he said "threatens to destroy social norms of truth."
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," he said. "Based on the past two years, we can expect OpenAI Foundation to leave dormant its power (and obligation) to exert control over OpenAI For-profit."
Weissman concluded that the deal to make OpenAI into a for-profit company "should not be allowed to stand" and encouraged the state attorneys general in Delaware and California to "exert their authority to dissolve OpenAI Nonprofit and reallocate its resources to new organizations in the charitable sector."
Weissman's warning about OpenAI becoming a reckless and out-of-control for-profit behemoth was echoed on Tuesday by Steven Adler, an AI researcher and former product safety leader at OpenAI.
Drawing on his experience at the firm, Adler wrote a guest essay for The New York Times in which he questioned OpenAI's commitment to mitigating mental health dangers caused or exacerbated by its flagship chatbot.
"I believe OpenAI wants its products to be safe to use," Adler explained. "But it also has a history of paying too little attention to established risks. This spring, the company released—and after backlash, withdrew—an egregiously 'sycophantic' version of ChatGPT that would reinforce users' extreme delusions, like being targeted by the FBI. OpenAI later admitted to having no sycophancy tests as part of the process for deploying new models, even though those risks have been well known in AI circles since at least 2023."
Adler knocked the company for its overall lack of transparency, and he noted that both it and Google DeepMind seem to have "broken commitments related to publishing safety-testing results before a major product introduction."
Adler chalked up these problems to developing AI in a highly competitive for-profit market in which new capabilities are pushed out before safety risks are properly assessed.
"If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today," he concluded.
The AI regulatory moratorium threatens to obliterate America’s frayed social contract.
I grew up under Enver Hoxha’s totalitarian regime in Albania, where paranoia reigned supreme, propaganda was relentless, dissent was crushed, and concrete bunkers dotted the landscape. Now, as I witness the United States marching toward authoritarianism, I am struck by the haunting echoes of my past. The effort to reshape society through fear, intimidation, and division; the attack on independent institutions; the surveillance state; and the apocalyptic fever remind me so much of the dynamics that once suffocated Albania. Beneath it all simmers a pervasive social malaise and a sense of moral decay.
Today’s crisis is not accidental. It has been a long time in the making, the result of powerful interests—Silicon Valley billionaires, MAGA ideologues, Christian nationalists, and Project 2025 architects—who have set aside their differences and coalesced to accelerate collapse, fuel division, and destroy democracy.
A chief goal of this agenda is the race to build and deregulate artificial intelligence (AI). Since OpenAI launched ChatGPT, we’ve been subjected to the largest tech experiment in history. AI evangelists promise miracles—curing intractable diseases, solving the climate crisis, even eternal life—while ignoring its insatiable appetite for water and energy, much of it still sourced from fossil fuels. Revealingly, some billionaires who once called for AI regulation now fund efforts to ban states from regulating AI for the next decade.
Tucked into the more than 1,000 pages of the recent Republican reconciliation bill is a sweeping moratorium that would ban states and municipalities from regulating AI for 10 years. The same bill slashes hundreds of billions of dollars from Medicaid, Medicare, and food aid—an unprecedented upward transfer of wealth that will gravely harm both the most vulnerable and the working class—while pouring over a billion dollars into AI development at the Departments of Defense and Commerce.
The real risk is not that the U.S. will lose to China by regulating AI, but that it will lose the trust of its own people and the world by failing to do so.
The impact would be immediate and profound. It would preempt existing state AI laws in California, Colorado, New York, Illinois, and Utah, and block pending state bills aimed at ensuring transparency, preventing discrimination, and protecting individuals and communities from harm. The broad definition of “automated decision systems” would undermine oversight in healthcare, finance, education, consumer protection, housing, employment, civil rights, and even election integrity. In effect, it would rewrite the social contract, stripping states of the power to protect their residents.
Make no mistake—this isn’t an isolated effort. It’s what Naomi Klein and Astra Taylor call “the rise of end times fascism”—an apocalyptic project of convergent factions to accelerate societal collapse and redraw sovereignty for profit. The Silicon Valley contingent in particular merits closer scrutiny. Its ultra-libertarian and neo-reactionary wing, including Peter Thiel and Marc Andreessen, has abandoned faith in democracy and invested in Pronomos Capital—a venture capital fund backing “network states” that can best be described as digital fiefdoms run by corporate monarchs. Existing enclaves include Próspera in Honduras and Itana in Nigeria, where the wealthy bypass local regulation and often displace communities. Now, billionaires lobby for “Freedom Cities” within the U.S.—autonomous zones exempt from state and federal law, potentially enabling unregulated genetic experimentation and other risky activities.
Animating this project is a bundle of techno-utopian ideologies permeating Silicon Valley’s zeitgeist—most prominently, longtermism and transhumanism. Longtermists believe our duty is to maximize the well-being of hypothetical future humans, even at today’s expense. These worldviews envision replacing humanity with AI or digital posthuman species as inevitable, even desirable. Elon Musk and OpenAI’s Sam Altman, who publicly warn of AI extinction, stand to benefit by positioning their products as humanity’s salvation. As philosopher Émile P. Torres warns, these ideologies spring from the same poisoned well as eugenics and provide cover for dismantling democratic safeguards and social protections in pursuit of a pro-extinctionist future.
Musk’s Department of Government Efficiency (DOGE) exemplifies the risks. Operating as an unelected, extralegal entity, it has employed AI-driven systems to automate mass firings of federal employees and deployed Grok, the chatbot from Musk’s xAI, to analyze sensitive government data, potentially turning millions of Americans’ personal information into training fodder for the model. Reports indicate DOGE is building a data panopticon, pooling the personal information of millions of Americans to surveil immigrants and to aid the Department of Justice in investigating spurious claims of widespread voter fraud.
The perils of unregulated AI are not theoretical. Like any powerful technology, AI has enormous potential for both benefit and harm, depending on how it is developed, deployed, and regulated. Embedded within AI systems are the biases and assumptions of the training data and algorithmic choices, which—if left unchecked—can perpetuate and amplify existing social disparities at scale. AI is not merely a technical tool. Rather, it is part of a larger sociotechnical system, deeply intertwined with human institutions, infrastructure, laws, and social norms.
The states must “flip the script,” drawing on the strength of our democratic tradition and shared humanity, to build a future where people and not the “end times fascism” forces can flourish.
Documented AI harms include wrongful denial of health services; discrimination in housing, hiring, and lending; and the spread of misinformation and deepfakes, among others. Where Congress has failed to act, states have stepped in to fill the regulatory void. If they are now prevented from addressing these harms, without a federal framework to take their place, the consequences will likely be severe. Not only will known harms worsen, but new risks will emerge, including the specter of mass unemployment. Some tech CEOs, anxious to make good on their massive AI investments, boast about automating away people’s jobs, while others warn of mass job losses, regardless of whether AI is up to the job.
Supporters of the moratorium claim that state-level regulation impedes America’s ability to compete with China. But flooding the market with unregulated, potentially harmful AI risks eroding public trust and creating instability. Contrary to the perennial argument propounded by Big Tech, targeted regulation does not slow innovation. Rather, it creates the stability, predictability, and safety that allow American companies to thrive and lead globally. The real risk is not that the U.S. will lose to China by regulating AI, but that it will lose the trust of its own people and the world by failing to do so.
The American public is not fooled. Polls show overwhelming bipartisan support for strong AI oversight. State attorneys general and civil society groups have also opposed the moratorium. In the Senate, the provision may face challenges under the Byrd Rule, which prohibits including provisions in budget reconciliation bills that are “extraneous” to fiscal policy. If enacted, the moratorium would likely be challenged as unconstitutional under the 10th Amendment, which reserves to the states all powers not specifically delegated to the federal government. Regardless of its fate, the intent of its supporters is clear: to harness AI without guardrails, in pursuit of a monarchical dystopian agenda.
Americans do not aspire to a future of despotic power and unaccountable surveillance—akin to the unfreedom I experienced in communist Albania. We know where that road leads: oppression, corruption, mass brainwashing, and eventually the breakdown of social order. But America’s story isn’t written by those who surrender to fear, fatalism, or nihilism. As James Baldwin said, “Not everything that is faced can be changed, but nothing can be changed until it is faced.” Now is the time to face this challenge together. The states must “flip the script,” drawing on the strength of our democratic tradition and shared humanity, to build a future where people and not the “end times fascism” forces can flourish. Let us answer this moment not with resignation, but with courage and resolve, and ensure that a “government of the people, by the people, for the people, shall not perish from the Earth.”