

In a functioning democracy, we would have at least one political party that would fly the banner of the 53% of us who are wary of unchecked artificial intelligence.
“This is the West, sir. When the legend becomes fact, print the legend.” —journalist in the 1962 film, The Man Who Shot Liberty Valance
The top editors at Time (yes, it still exists) looked west to Silicon Valley and decided to print the legend last week when picking their Person of the Year for the tumultuous 12 months of 2025. It seemed all too fitting that its cover hailing “The Architects of AI” was the kind of artistic rip-off that’s a hallmark of artificial intelligence: 1932’s iconic newspaper shot, “Lunch Atop a Skyscraper,” “reimagined” with the billionaires—including Elon Musk and OpenAI’s Sam Altman—and lesser-known engineers behind the rapid growth of their technology in everyday life.
Time’s writers strove to outdo the hype of AI itself, writing that these architects of artificial intelligence “reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons.”
OK, but it’s a tool that’s clearly going to need a lot more work, or architecting, or whatever it is those folks out on the beam do. That was apparent on the same day as Time’s celebration, when it was reported that Washington Post editors had gotten a little too close to the edge in rolling out an ambitious scheme for personalized, AI-driven podcasts tailored to factors like your personal interests or your schedule.
The news site Semafor reported that the many gaffes ranged from minor mistakes in pronunciation to major goofs like inventing quotes—the kind of thing that would get a human journalist fired on the spot. “Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale,” said one dismayed, unnamed editor.
The same-day contrast between the Tomorrowland swooning over the promise of AI and its glitchy, real-world reality felt like a metaphor for an invention that, as Time wasn’t wrong in reporting, is so rapidly reshaping our world. Warts and all.
Like it or not.
And for most people (myself included), it’s mostly “or not.” The vast majority understands that it’s too late to put this 21st-century genie back in the bottle, and like any new technology there are going to be positives from AI, from performing mundane organizing tasks that free up time for actual work, to researching cures for diseases.
But each new wave of technology—atomic power, the internet, and definitely AI—increasingly brings more risk than reward. And it’s not just the sci-fi notion of sentient robots taking over the planet, although that is a concern. It’s everyday stuff. Schoolkids not learning to think for themselves. Corporations replacing salaried humans with machines. Sky-high electric bills and a worsening climate crisis, because AI runs on data centers with an insatiable need for energy and water.
The most recent major Pew Research Center survey of Americans found that 50% of us are more concerned than excited about the growing presence of AI, while only 10% are more excited than concerned. Drill down and you’ll see that a majority believes AI will worsen humans’ ability to think creatively, and, by a whopping 50-to-5% margin, also believes it will worsen our ability to form relationships rather than improve it. These, by the way, are two things that weren’t going well before AI.
So naturally our political leaders are racing to see who can place the tightest curbs on artificial intelligence and thus carry out the will of the peop... ha, you did know this time that I was kidding, didn’t you?
It’s no secret that Donald Trump and his regime were in the tank from Day One for those folks out on Time’s steel beam, and not just Musk, who—and this feels like it was seven years ago—donated a whopping $144 million to the Republican’s 2024 campaign. Just last week, the president signed an executive order aiming to press the full weight of the federal government, including Justice Department lawsuits and regulatory actions, against any state that dares to regulate AI. He said that’s necessary to ensure US “global AI dominance.”
This is a problem when his constituents clearly want AI to be regulated. But it’s just as big a problem—perhaps bigger—that the opposition party isn’t offering much opposition. Democrats seem just as awed by the billionaire grand poobahs of AI as Trump. Or the editors of Time.
Also last week, New York Democratic Gov. Kathy Hochul—leader of the second-largest blue state, and seeking reelection in 2026—used her gubernatorial pen to gut the more-stringent AI regulations that were sent to her desk by state lawmakers. Watchdogs said Hochul replaced the hardest-hitting rules with language drafted by lobbyists for Big Tech.
As the American Prospect noted, Hochul’s pro-Silicon Valley maneuvers came after her campaign coffers were boosted by fundraisers held by venture capitalist Ron Conway, who has been seeking a veto, and the industry group Tech:NYC, which wants the bill watered down.
It was a similar story in the biggest blue state, California, where Gov. Gavin Newsom in 2024 vetoed the first effort by state lawmakers to impose tough regulations on AI, and where a second measure did pass but only after substantial input from lobbyists for OpenAI and other tech firms. Silicon Valley billionaires raised $5 million to help Newsom—a 2028 White House front-runner—beat back a 2021 recall.
Like other top Democrats, Pennsylvania Gov. Josh Shapiro favors some light regulation for AI but is generally a booster, insisting the new technology is a “job enhancer, not a job replacer.” He’s all in on the Keystone State building massive data centers, despite their tendency to drive up electric bills and their unpopularity in the communities where they are proposed.
Money talks, democracy walks—an appalling fact of life in 2025 America. In a functioning democracy, we would have at least one political party that would fly the banner of the 53% of us who are wary of unchecked AI, and even take that idea to the next level.
A Harris Poll found that, for the first time, a majority of Americans also see billionaires—many of them fueled by the AI bubble—as a threat to democracy, with 71% supporting a wealth tax. Yet few of the Democrats hoping to retake Congress in 2027 are advocating such a levy. This is a dangerous disconnect.
Time magazine got one thing right. Just as its editors understood in 1938 that Adolf Hitler was its Man of the Year because he’d influenced the world more than anyone else, albeit for evil, history will likely look back at 2025 and agree that AI posed an even bigger threat to humanity than Trump’s brand of fascism. The fight to save the American Experiment must be fought on both fronts.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm."
Consumer advocacy organization Public Citizen on Wednesday issued a new warning about the dangers of Sora 2, the artificial intelligence video creation tool released by OpenAI earlier this year.
In a letter sent to OpenAI CEO Sam Altman, Public Citizen accused the firm of releasing Sora 2 without putting in place proper guardrails to prevent it from being abused by malevolent actors.
"OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release," the letter stated. "We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines."
Among other things, Public Citizen warned that Sora 2 could be used as "a scalable, frictionless tool for creating and disseminating deepfake propaganda" aimed at impacting election results. The watchdog also said that Sora 2 could be used to create unauthorized deepfakes and revenge-porn videos involving both public and private figures who have not consented to have their likenesses used.
Although OpenAI said it has created protections to prevent this from occurring, Public Citizen said recent research has shown that these are woefully inadequate.
"The safeguards that the model claims have not been effective," Public Citizen explained. "For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the 'mandatory' safety watermarks can be removed in under four minutes with free online tools."
JB Branch, Big Tech accountability advocate at Public Citizen, said that the rushed release of Sora 2 is part of a pattern of OpenAI shoving products out the door without proper ethical considerations.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm," he said.
Advocates at Public Citizen aren't the only critics warning about Sora 2's potential misuse.
In a review of Sora 2 for PCMag published last week, journalist Ruben Circelli warned that the tool would "inevitably be weaponized" given its ability to create lifelike videos.
"A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing," he said. "So, I suggest never taking any video clips you see online too seriously, unless they come from a source you can absolutely trust."
Circelli also said that OpenAI as a whole does not do a thorough job of protecting user data, and also questioned the overall utility of the video creation platform.
"While some of the technology at play here is cool, I can’t help but wonder what the point of it all is," he wrote. "Is the ability to generate AI meme videos really worth building 60 football fields' worth of AI infrastructure every week or uprooting rural families?"
Consumer Affairs also reported on Wednesday that a coalition of Japanese entertainment firms, including Studio Ghibli, Bandai Namco, and Square Enix, is accusing OpenAI of stealing their copyrighted works in order to train Sora 2 to generate animations.
This has spurred the Japanese government into action. Specifically, the government has now "formally requested that OpenAI refrain from actions that 'could constitute copyright infringement' after the tool produced videos resembling popular anime and game characters," according to Consumer Affairs.
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," warned one critic of the arrangement.
Artificial intelligence giant OpenAI, maker of the popular ChatGPT chatbot, announced on Tuesday that it is restructuring as a for-profit company in a move that was quickly denounced by consumer advocacy watchdog Public Citizen.
As explained by The New York Times, OpenAI will now operate as a public benefit corporation (PBC), which the Times describes as "a for-profit corporation designed to create public and social good."
Under the terms of the agreement, the nonprofit OpenAI Foundation will hold a $130 billion stake in the new for-profit company, called OpenAI Group PBC, which the firm says will make the foundation "one of the best resourced philanthropic organizations ever."
A source told the Times that OpenAI CEO Sam Altman "does not have a significant stake in the new for-profit company." Microsoft, OpenAI's biggest investor, will hold a $135 billion stake in OpenAI Group PBC, while the remaining shares will be held by "current and former employees and other investors," writes the Times.
Robert Weissman, co-president of Public Citizen, immediately blasted the move and warned that reassurances about the nonprofit OpenAI Foundation maintaining "control" of the project were completely empty.
"Since the November 2023 coup at OpenAI, there is no evidence whatsoever of the nonprofit exerting control over the for-profit, and only evidence of the reverse," he argued, referencing a shakeup at the company nearly two years ago, which saw Altman removed and then restored to his leadership role.
Weissman warned that OpenAI has consistently "rushed dangerous new technologies to market, in advance of competitors and without adequate safety tests and protocols."
As evidence of this, Weissman pointed to Altman's announcement that ChatGPT would soon allow for erotica for verified adults, as well as OpenAI's recent introduction of its Sora 2 AI video platform that he said "threatens to destroy social norms of truth."
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," he said. "Based on the past two years, we can expect OpenAI Foundation to leave dormant its power (and obligation) to exert control over OpenAI For-profit."
Weissman concluded that the deal to make OpenAI into a for-profit company "should not be allowed to stand" and encouraged the state attorneys general in Delaware and California to "exert their authority to dissolve OpenAI Nonprofit and reallocate its resources to new organizations in the charitable sector."
Weissman's warning about OpenAI becoming a reckless and out-of-control for-profit behemoth was echoed on Tuesday by Steven Adler, an AI researcher and former product safety leader at OpenAI.
Drawing on his experience at the firm, Adler wrote an editorial for The New York Times in which he questioned OpenAI's commitment to mitigating mental health dangers caused or exacerbated by its flagship chatbot.
"I believe OpenAI wants its products to be safe to use," Adler explained. "But it also has a history of paying too little attention to established risks. This spring, the company released—and after backlash, withdrew—an egregiously 'sycophantic' version of ChatGPT that would reinforce users' extreme delusions, like being targeted by the FBI. OpenAI later admitted to having no sycophancy tests as part of the process for deploying new models, even though those risks have been well known in AI circles since at least 2023."
Adler knocked the company for its overall lack of transparency, and he noted that both it and Google DeepMind seem to have "broken commitments related to publishing safety-testing results before a major product introduction."
Adler chalked up these problems to developing AI in a highly competitive for-profit market in which new capabilities are pushed out before safety risks are properly assessed.
"If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today," he concluded.