

"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm."
Consumer advocacy organization Public Citizen on Wednesday issued a new warning about the dangers of Sora 2, the artificial intelligence video creation tool released by OpenAI earlier this year.
In a letter sent to OpenAI CEO Sam Altman, Public Citizen accused the firm of releasing Sora 2 without putting proper guardrails in place to prevent it from being abused by malevolent actors.
"OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release," the letter stated. "We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines."
Among other things, Public Citizen warned that Sora 2 could be used as "a scalable, frictionless tool for creating and disseminating deepfake propaganda" aimed at impacting election results. The watchdog also said that Sora 2 could be used to create unauthorized deepfakes and revenge-porn videos involving both public and private figures who have not consented to have their likenesses used.
Although OpenAI said it has created protections to prevent this from occurring, Public Citizen said recent research has shown that these are woefully inadequate.
"The safeguards that the model claims have not been effective," Public Citizen explained. "For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the 'mandatory' safety watermarks can be removed in under four minutes with free online tools."
J.B. Branch, Big Tech accountability advocate at Public Citizen, said that the rushed release of Sora 2 is part of a pattern of OpenAI shoving products out the door without proper ethical considerations.
"The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm," he said.
Advocates at Public Citizen aren't the only critics warning about Sora 2's potential misuse.
In a review of Sora 2 for PCMag published last week, journalist Ruben Circelli warned that the tool would "inevitably be weaponized" given its ability to create lifelike videos.
"A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing," he said. "So, I suggest never taking any video clips you see online too seriously, unless they come from a source you can absolutely trust."
Circelli also said that OpenAI as a whole does not do a thorough job of protecting user data, and he questioned the overall utility of the video creation platform.
"While some of the technology at play here is cool, I can’t help but wonder what the point of it all is," he wrote. "Is the ability to generate AI meme videos really worth building 60 football fields' worth of AI infrastructure every week or uprooting rural families?"
Consumer Affairs also reported on Wednesday that a coalition of Japanese entertainment firms, including Studio Ghibli, Bandai Namco, and Square Enix, is accusing OpenAI of stealing its copyrighted works in order to train Sora 2 to generate animations.
This has spurred the Japanese government into action. Specifically, the government has now "formally requested that OpenAI refrain from actions that 'could constitute copyright infringement' after the tool produced videos resembling popular anime and game characters," according to Consumer Affairs.
"I wouldn't touch this stuff now," warned one financial analyst about the AI industry.
Several analysts are sounding alarms about the artificial intelligence industry being a major financial bubble that could potentially tip the global economy into a severe recession.
MarketWatch reported on Friday that the MacroStrategy Partnership, an independent research firm, has published a new note claiming that the bubble generated by AI is now 17 times larger than the dot-com bubble in the late 1990s, and four times bigger than the global real-estate bubble that crashed the economy in 2008.
The note was written by a team of analysts, including Julien Garran, who previously led the commodities strategy team at multinational investment bank UBS.
Garran contends that companies have vastly overhyped the capabilities of AI large language models (LLMs), and he pointed to data showing that the adoption rate of LLMs among large businesses has already started to decline. He also thinks that flagship LLM ChatGPT may have "hit a wall" with its latest release, which he said hasn't delivered noticeably better performance than previous releases, despite costing 10 times as much.
The consequences for the economy, he warns, could be dire.
"The danger is not only that this pushes us into a zone 4 deflationary bust on our investment clock, but that it also makes it hard for the Fed and the Trump administration to stimulate the economy out of it," he writes in the investment note.
Garran isn't the only analyst expressing extreme anxiety about the potential for an AI bubble to bring down the economy.
In a Friday interview with Axios, Dario Perkins, managing director of global macro at TS Lombard, said that tech companies are increasingly taking on massive debts in their race to build out AI data centers in a way that is reminiscent of the debts held by companies during the dot-com and subprime mortgage bubbles.
Perkins told Axios that he's particularly wary because the big tech companies are claiming "they don't care whether the investment has any return, because they're in a race."
"Surely that in itself is a red flag," he added.
CNBC reported on Friday that Goldman Sachs CEO David Solomon told an audience at the Italian Tech Week conference that he expected a "drawdown" in the stock market over the next year or two, given that so much money has been pumped into AI ventures in such a short time.
"I think that there will be a lot of capital that’s deployed that will turn out to not deliver returns, and when that happens, people won’t feel good," he said.
Solomon wouldn't go so far as to definitively declare AI to be a bubble, but he did say some investors are "out on the risk curve because they’re excited," which is a telltale sign of a financial bubble.
According to CNBC, Amazon founder Jeff Bezos, who was also attending Italian Tech Week, said on Friday that there was a bubble in the AI industry, although he insisted that the technology would be a major benefit for humanity.
"Investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas," Bezos said of the AI industry. "And that’s also probably happening today."
Perkins made no predictions about when the AI bubble will pop, but he argued that it's definitely much closer to the end of the cycle than the beginning.
"I wouldn't touch this stuff now," he told Axios. "We're much closer to 2000 than 1995."
"Countries with lax regulations, like the US, are prime targets for these crimes," said Public Citizen's J.B. Branch.
The San Francisco-based artificial intelligence startup Anthropic revealed Wednesday that its technology has been "weaponized" by hackers to commit ransomware crimes, prompting a call by a leading consumer advocacy group for Congress to pass "enforceable safeguards" to protect the public.
Anthropic's latest Threat Intelligence Report details "several recent examples" of its artificial intelligence-powered chatbot Claude "being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills."
"The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions," the company said. "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000."
Anthropic said the perpetrator "used AI to what we believe is an unprecedented degree" for their extortion scheme, which is being described as "vibe hacking"—the malicious use of artificial intelligence to manipulate human emotions and trust in order to carry out sophisticated cyberattacks.
"Claude Code was used to automate reconnaissance, harvesting victims' credentials and penetrating networks," the report notes. "Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands."
"Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines," the company added.
Anthropic continued:
This represents an evolution in AI-assisted cybercrime. Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time. We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.
Anthropic said it "banned the accounts in question as soon as we discovered this operation" and "also developed a tailored classifier (an automated screening tool), and introduced a new detection method to help us discover activity like this as quickly as possible in the future."
"To help prevent similar abuse elsewhere, we have also shared technical indicators about the attack with relevant authorities," the company added.
Anthropic's revelation followed last year's announcement by OpenAI that it had terminated ChatGPT accounts allegedly used by cybercriminals linked to China, Iran, North Korea, and Russia.
J.B. Branch, Big Tech accountability advocate at the consumer watchdog Public Citizen, said Wednesday in response to Anthropic's announcement: "Every day we face a new nightmare scenario that tech lobbyists told Congress would never happen. One hacker has proven that agentic AI is a viable path to defrauding people of sensitive data worth millions."
"Criminals worldwide now have a playbook to follow—and countries with lax regulations, like the US, are prime targets for these crimes since AI companies are not subject to binding federal standards and rules," Branch added. "With no public protections in place, the next wave of AI-enabled cybercrime is coming, but Congress continues to sit on its hands. Congress must move immediately to put enforceable safeguards in place to protect the American public."
More than 120 congressional bills have been proposed to regulate artificial intelligence. However, not only has the current GOP-controlled Congress been loath to act, but House Republicans recently attempted to sneak a 10-year moratorium on state-level AI regulation into the so-called One Big Beautiful Bill Act.
The Senate subsequently voted 99-1 to remove the measure from the legislation. However, the "AI Action Plan" announced last month by President Donald Trump revived much of the proposal, prompting critics to describe it as a "zombie moratorium."
Meanwhile, tech billionaires including the Winklevoss twins, who founded the Gemini cryptocurrency exchange, are pouring tens of millions of dollars into the Digital Freedom Fund super political action committee, which aims to support right-wing political candidates with pro-crypto and pro-AI platforms.
"Big Tech learned that throwing money in politics pays off in lax regulations and less oversight," Public Citizen said Thursday. "Money in politics reforms have never been more necessary."