"We can push OpenAI over the edge," said one group encouraging a boycott of the AI giant.
Calls to boycott OpenAI have been growing over the last several days after the artificial intelligence giant reached a deal with the US Department of Defense to use its ChatGPT chatbot across its classified network.
OpenAI CEO Sam Altman on Friday said that the deal reached with the Pentagon would have "prohibitions on domestic mass surveillance" and maintain "human responsibility for the use of force, including for autonomous weapon systems."
The DOD had previously gotten into a dispute with AI firm Anthropic, which refused to modify its Claude chatbot to allow for its use for domestic spying or to make final decisions on whether to take a human life.
President Donald Trump announced on Friday that he'd ordered the US military to stop using Anthropic's technology, describing the firm as "A RADICAL LEFT, WOKE COMPANY" in a Truth Social post.
Shortly after Trump's post, Altman announced that OpenAI had reached a deal with the Pentagon. This led many critics to suspect that, whatever Altman's denials, the DOD would be allowed to use ChatGPT in ways that it had been forbidden to utilize Claude.
Adam Cochrane, who runs activist venture capital firm Cinneamhain Ventures, said immediately after Altman's announcement that he was canceling his ChatGPT subscription on the grounds that "I don’t support bootlickers."
Dr. Simon Goddek, a biotech scientist, also revealed that he was canceling his ChatGPT subscription and encouraged others to do the same.
"Companies understand one language: MONEY," he wrote. "If they support wars of aggression, they can’t profit from me. I’m out. What’s stopping you?"
AI consultant Mark Gadala-Maria accused Altman of being two-faced with his DOD deal, noting that the OpenAI CEO had expressed solidarity with Anthropic in the face of the Trump administration's attacks.
"Just a few hours ago he was on TV saying he stood by Anthropic," wrote Gadala-Maria. "Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?"
An OpenAI boycott group called "QuitGPT" went online shortly after Altman's announcement, and the organization claims that it has gotten more than 1.5 million people to cancel their ChatGPT subscriptions or stop using the chatbot altogether.
QuitGPT also outlined why its boycott of OpenAI could prove effective given the highly competitive nature of the current consumer AI market.
"ChatGPT is the biggest chatbot in the world, but that advantage is fragile," QuitGPT explained. "ChatGPT has been losing market share. Their creator, OpenAI, is losing three times more than they earn. ChatGPT users skew young and progressive, and many don't know about alternatives. We can push OpenAI over the edge."
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm."
Consumer advocacy organization Public Citizen on Wednesday issued a new warning about the dangers of Sora 2, the artificial intelligence video creation tool released by OpenAI earlier this year.
In a letter sent to OpenAI CEO Sam Altman, Public Citizen accused the firm of releasing Sora 2 without putting in proper guardrails to prevent it from being abused by malevolent actors.
"OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release," the letter stated. "We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines."
Among other things, Public Citizen warned that Sora 2 could be used as "a scalable, frictionless tool for creating and disseminating deepfake propaganda" aimed at impacting election results. The watchdog also said that Sora 2 could be used to create unauthorized deepfakes and revenge-porn videos involving both public and private figures who have not consented to have their likenesses used.
Although OpenAI said it has created protections to prevent this from occurring, Public Citizen said recent research has shown that these are woefully inadequate.
"The safeguards that the model claims have not been effective," Public Citizen explained. "For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the 'mandatory' safety watermarks can be removed in under four minutes with free online tools."
JB Branch, Big Tech accountability advocate at Public Citizen, said that the rushed release of Sora 2 is part of a pattern of OpenAI shoving products out the door without proper ethical considerations.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm," he said.
Advocates at Public Citizen aren't the only critics warning about Sora 2's potential misuse.
In a review of Sora 2 for PCMag published last week, journalist Ruben Circelli warned that the tool would "inevitably be weaponized" given its ability to create lifelike videos.
"A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing," he said. "So, I suggest never taking any video clips you see online too seriously, unless they come from a source you can absolutely trust."
Circelli also said that OpenAI as a whole does not do a thorough job of protecting user data, and he questioned the overall utility of the video creation platform.
"While some of the technology at play here is cool, I can’t help but wonder what the point of it all is," he wrote. "Is the ability to generate AI meme videos really worth building 60 football fields' worth of AI infrastructure every week or uprooting rural families?"
Consumer Affairs also reported on Wednesday that a coalition of Japanese entertainment firms, including Studio Ghibli, Bandai Namco, and Square Enix, is accusing OpenAI of stealing their copyrighted works in order to train Sora 2 to generate animations.
The accusations have spurred the Japanese government into action: it has "formally requested that OpenAI refrain from actions that 'could constitute copyright infringement' after the tool produced videos resembling popular anime and game characters," according to Consumer Affairs.
"I wouldn't touch this stuff now," warned one financial analyst about the AI industry.
Several analysts are sounding alarms about the artificial intelligence industry being a major financial bubble that could potentially tip the global economy into a severe recession.
MarketWatch reported on Friday that the MacroStrategy Partnership, an independent research firm, has published a new note claiming that the bubble generated by AI is now 17 times larger than the dot-com bubble in the late 1990s, and four times bigger than the global real-estate bubble that crashed the economy in 2008.
The note was written by a team of analysts, including Julien Garran, who previously led the commodities strategy team at multinational investment bank UBS.
Garran contends that companies have vastly overhyped the capabilities of AI large language models (LLMs), and he pointed to data showing that the adoption rate of LLMs among large businesses has already started to decline. He also thinks that flagship LLM ChatGPT may have "hit a wall" with its latest release, which he said hasn't delivered noticeably better performance than previous releases, despite costing 10 times as much.
The consequences for the economy, he warns, could be dire.
"The danger is not only that this pushes us into a zone 4 deflationary bust on our investment clock, but that it also makes it hard for the Fed and the Trump administration to stimulate the economy out of it," he writes in the investment note.
Garran isn't the only analyst expressing extreme anxiety about the potential for an AI bubble to bring down the economy.
In a Friday interview with Axios, Dario Perkins, managing director of global macro at TS Lombard, said that tech companies are increasingly taking on massive debts in their race to build out AI data centers in a way that is reminiscent of the debts held by companies during the dot-com and subprime mortgage bubbles.
Perkins told Axios that he's particularly wary because the big tech companies are claiming "they don't care whether the investment has any return, because they're in a race."
"Surely that in itself is a red flag," he added.
CNBC reported on Friday that Goldman Sachs CEO David Solomon told an audience at the Italian Tech Week conference that he expected a "drawdown" in the stock market over the next year or two given that so much money has been pumped into AI ventures in such a short time.
"I think that there will be a lot of capital that’s deployed that will turn out to not deliver returns, and when that happens, people won’t feel good," he said.
Solomon wouldn't go so far as to definitively declare AI to be a bubble, but he did say some investors are "out on the risk curve because they’re excited," which is a telltale sign of a financial bubble.
According to CNBC, Amazon founder Jeff Bezos, who was also attending Italian Tech Week, said on Friday that there was a bubble in the AI industry, although he insisted that the technology would be a major benefit for humanity.
"Investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas," Bezos said of the AI industry. "And that’s also probably happening today."
Perkins made no predictions about when the AI bubble will pop, but he argued that it's definitely much closer to the end of the cycle than the beginning.
"I wouldn't touch this stuff now," he told Axios. "We're much closer to 2000 than 1995."