

A 20-year-old suspect was found at the company's headquarters, where he was threatening to burn down the building.
A suspect was arrested in San Francisco Friday after being accused of throwing a Molotov cocktail at the home of Sam Altman, the CEO of the artificial intelligence firm OpenAI.
The 20-year-old man was found at the OpenAI headquarters about three miles away from Altman's home, where he was threatening to burn down the building, San Francisco police said.
The device the suspect threw onto Altman's property in the Russian Hill neighborhood caused a fire on the exterior gate. It was unclear whether Altman and his family were at home.
The suspect was in custody Friday, with charges pending.
Altman's company and other AI firms have faced mounting criticism as the technology has expanded rapidly at President Donald Trump's urging, with the president issuing an executive order attacking states' ability to regulate the industry.
Experts have warned the expansion of generative AI threatens jobs and democracy, with political campaigns already using the technology to create fraudulent media in advertisements.
Massive, energy-sucking AI data centers have also been blamed for higher household electricity bills and water consumption.
Protesters have rallied against Altman's company for agreeing to provide its technology to the Department of Defense.
In November, The New York Times reported, a person who had once been associated with the anti-AI group Stop AI "expressed interest in causing physical harm to OpenAI employees," causing the company to lock down its headquarters.
On Friday, Stop AI condemned the attack on Altman's house and emphasized that the group "seeks to protect human life."
"We do not condone any violence whatsoever," said the group. "We pray everyone involved in this situation puts aside violence and finds peace, and we continue to hope the AI industry stops the development of frontier AI systems in the interest of public safety and the preservation of humanity. To the best of our knowledge, this incident did not involve anyone who has ever been associated with our group. And this action is wholly inconsistent with our values."
"We can push OpenAI over the edge," said one group encouraging a boycott of the AI giant.
Calls to boycott OpenAI have been growing over the last several days after the artificial intelligence giant reached a deal allowing the US Department of Defense to use its ChatGPT chatbot across the Pentagon's classified network.
OpenAI CEO Sam Altman on Friday said that the deal reached with the Pentagon would have "prohibitions on domestic mass surveillance" and maintain "human responsibility for the use of force, including for autonomous weapon systems."
The DOD had previously gotten into a dispute with AI firm Anthropic, which refused to modify its Claude chatbot to allow for its use for domestic spying or to make final decisions on whether to take a human life.
President Donald Trump announced on Friday that he'd ordered the US military to stop using Anthropic's technology, describing the firm as "A RADICAL LEFT, WOKE COMPANY" in a Truth Social post.
Shortly after Trump's post, Altman announced that OpenAI had reached a deal with the Pentagon. This led many critics to suspect that, whatever Altman's denials, the DOD would be allowed to use ChatGPT in ways that it had been forbidden to utilize Claude.
Adam Cochrane, who runs activist venture capital firm Cinneamhain Ventures, said immediately after Altman's announcement that he was canceling his ChatGPT subscription on the grounds that "I don’t support bootlickers."
Dr. Simon Goddek, a biotech scientist, also revealed that he was canceling his ChatGPT subscription and encouraged others to do the same.
"Companies understand one language: MONEY," he wrote. "If they support wars of aggression, they can’t profit from me. I’m out. What’s stopping you?"
AI consultant Mark Gadala-Maria accused Altman of being two-faced with his DOD deal, noting that the OpenAI CEO had expressed solidarity with Anthropic in the face of the Trump administration's attacks.
"Just a few hours ago he was on TV saying he stood by Anthropic," wrote Gadala-Maria. "Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?"
An OpenAI boycott group called "QuitGPT" went online shortly after Altman's announcement, and the organization claims that it has gotten more than 1.5 million people to cancel their ChatGPT subscriptions or stop using the chatbot altogether.
QuitGPT also outlined why its boycott of OpenAI could prove effective given the highly competitive nature of the current consumer AI market.
"ChatGPT is the biggest chatbot in the world, but that advantage is fragile," QuitGPT explained. "ChatGPT has been losing market share. Their creator, OpenAI, is losing three times more than they earn. ChatGPT users skew young and progressive, and many don't know about alternatives. We can push OpenAI over the edge."
In a functioning democracy, we would have at least one political party that would fly the banner of the 53% of us who are wary of unchecked artificial intelligence.
“This is the West, sir. When the legend becomes fact, print the legend.” —journalist in the 1962 film, The Man Who Shot Liberty Valance
The top editors at Time (yes, it still exists) looked west to Silicon Valley and decided to print the legend last week when picking their Person of the Year for the tumultuous 12 months of 2025. It seemed all too fitting that its cover hailing “The Architects of AI” was the kind of artistic rip-off that’s a hallmark of artificial intelligence: 1932’s iconic newspaper shot, “Lunch Atop a Skyscraper,” “reimagined” with the billionaires—including Elon Musk and OpenAI’s Sam Altman—and lesser-known engineers behind the rapid growth of their technology in everyday life.
Time’s writers strove to outdo the hype of AI itself, writing that these architects of artificial intelligence “reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons.”
OK, but it’s a tool that’s clearly going to need a lot more work, or architecting, or whatever it is those folks out on the beam do. That was apparent on the same day as Time’s celebration, when it was reported that Washington Post editors had gotten a little too close to the edge by rolling out an ambitious scheme for personalized, AI-driven podcasts based on factors like your personal interests or your schedule.
The news site Semafor reported that the podcasts' many gaffes ranged from minor mistakes in pronunciation to major goofs like inventing quotes—the kind of thing that would get a human journalist fired on the spot. “Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale,” said a dismayed, unnamed editor.
The same-day contrast between the Tomorrowland swooning over the promise of AI and its glitchy, real-world reality felt like a metaphor for an invention that, as Time wasn’t wrong in reporting, is so rapidly reshaping our world. Warts and all.
Like it or not.
And for most people (myself included), it’s mostly “or not.” The vast majority understands that it’s too late to put this 21st-century genie back in the bottle, and, like any new technology, AI will bring some positives, from performing mundane organizing tasks that free up time for actual work to researching cures for diseases.
But each new wave of technology—atomic power, the internet, and definitely AI—increasingly carries more risk than reward. And it’s not just the sci-fi notion of sentient robots taking over the planet, although that is a concern. It’s everyday stuff. Schoolkids not learning to think for themselves. Corporations replacing salaried humans with machines. Sky-high electric bills and a worsening climate crisis because AI runs on data centers with an insatiable need for energy and water.
The most recent major Pew Research Center survey of Americans found that 50% of us are more concerned than excited about the growing presence of AI, while only 10% are more excited than concerned. Drill down and you’ll see that a majority believes AI will worsen humans’ ability to think creatively, and, by a whopping 50-to-5% margin, also believes it will worsen our ability to form relationships rather than improve it. These, by the way, are two things that weren’t going well before AI.
So naturally our political leaders are racing to see who can place the tightest curbs on artificial intelligence and thus carry out the will of the peop... ha, you did know this time that I was kidding, didn’t you?
It’s no secret that Donald Trump and his regime were in the tank from Day One for those folks out on Time’s steel beam, and not just Musk, who—and this feels like it was seven years ago—donated a whopping $144 million to the Republican’s 2024 campaign. Just last week, the president signed an executive order aiming to press the full weight of the federal government, including Justice Department lawsuits and regulatory actions, against any state that dares to regulate AI. He said that’s necessary to ensure US “global AI dominance.”
This is a problem when his constituents clearly want AI to be regulated. But it’s just as big a problem—perhaps bigger—that the opposition party isn’t offering much opposition. Democrats seem just as awed by the billionaire grand poobahs of AI as Trump. Or the editors of Time.
Also last week, New York Democratic Gov. Kathy Hochul—leader of the second-largest blue state, and seeking reelection in 2026—used her gubernatorial pen to gut the more-stringent AI regulations that were sent to her desk by state lawmakers. Watchdogs said Hochul replaced the hardest-hitting rules with language drafted by lobbyists for Big Tech.
As the American Prospect noted, Hochul’s pro-Silicon Valley maneuvers came after her campaign coffers were boosted by fundraisers held by venture capitalist Ron Conway, who has been seeking a veto, and the industry group Tech:NYC, which wants the bill watered down.
It was a similar story in the biggest blue state, California, where Gov. Gavin Newsom in 2024 vetoed the first effort by state lawmakers to impose tough regulations on AI, and where a second measure did pass but only after substantial input from lobbyists for OpenAI and other tech firms. Silicon Valley billionaires raised $5 million to help Newsom—a 2028 White House front-runner—beat back a 2021 recall.
Like other top Democrats, Pennsylvania Gov. Josh Shapiro favors some light regulation for AI but is generally a booster, insisting the new technology is a “job enhancer, not a job replacer.” He’s all in on the Keystone State building massive data centers, despite their tendency to drive up electric bills and their unpopularity in the communities where they are proposed.
Money talks, democracy walks—an appalling fact of life in 2025 America. In a functioning democracy, we would have at least one political party that would fly the banner of the 53% of us who are wary of unchecked AI, and even take that idea to the next level.
A Harris Poll found that, for the first time, a majority of Americans also see billionaires—many of them fueled by the AI bubble—as a threat to democracy, with 71% supporting a wealth tax. Yet few of the Democrats hoping to retake Congress in 2027 are advocating such a levy. This is a dangerous disconnect.
Time magazine got one thing right. Just as its editors understood in 1938 that Adolf Hitler was its Man of the Year because he’d influenced the world more than anyone else, albeit for evil, history will likely look back at 2025 and agree that AI posed an even bigger threat to humanity than Trump’s brand of fascism. The fight to save the American Experiment must be fought on both fronts.