

"I wouldn't touch this stuff now," warned one financial analyst about the AI industry.
Several analysts are sounding alarms that the artificial intelligence industry is a major financial bubble that could tip the global economy into a severe recession.
MarketWatch reported on Friday that the MacroStrategy Partnership, an independent research firm, has published a new note claiming that the bubble generated by AI is now 17 times larger than the dot-com bubble in the late 1990s, and four times bigger than the global real-estate bubble that crashed the economy in 2008.
The note was written by a team of analysts, including Julien Garran, who previously led the commodities strategy team at multinational investment bank UBS.
Garran contends that companies have vastly overhyped the capabilities of AI large language models (LLMs), and he pointed to data showing that the adoption rate of LLMs among large businesses has already started to decline. He also thinks that flagship LLM ChatGPT may have "hit a wall" with its latest release, which he said hasn't delivered noticeably better performance than previous releases, despite costing 10 times as much.
The consequences for the economy, he warns, could be dire.
"The danger is not only that this pushes us into a zone 4 deflationary bust on our investment clock, but that it also makes it hard for the Fed and the Trump administration to stimulate the economy out of it," he writes in the investment note.
Garran isn't the only analyst expressing extreme anxiety about the potential for an AI bubble to bring down the economy.
In a Friday interview with Axios, Dario Perkins, managing director of global macro at TS Lombard, said that tech companies are increasingly taking on massive debts in their race to build out AI data centers in a way that is reminiscent of the debts held by companies during the dot-com and subprime mortgage bubbles.
Perkins told Axios that he's particularly wary because the big tech companies are claiming "they don't care whether the investment has any return, because they're in a race."
"Surely that in itself is a red flag," he added.
CNBC reported on Friday that Goldman Sachs CEO David Solomon told an audience at the Italian Tech Week conference that he expected a "drawdown" in the stock market over the next year or two, given how much money has been pumped into AI ventures in such a short time.
"I think that there will be a lot of capital that’s deployed that will turn out to not deliver returns, and when that happens, people won’t feel good," he said.
Solomon wouldn't go so far as to definitively declare AI to be a bubble, but he did say some investors are "out on the risk curve because they’re excited," which is a telltale sign of a financial bubble.
According to CNBC, Amazon founder Jeff Bezos, who was also attending Italian Tech Week, said on Friday that there was a bubble in the AI industry, although he insisted that the technology would be a major benefit for humanity.
"Investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas," Bezos said of the AI industry. "And that’s also probably happening today."
Perkins made no predictions about when the AI bubble will pop, but he argued that it's definitely much closer to the end of the cycle than the beginning.
"I wouldn't touch this stuff now," he told Axios. "We're much closer to 2000 than 1995."
"Countries with lax regulations, like the US, are prime targets for these crimes," said Public Citizen's J.B. Branch.
The San Francisco-based artificial intelligence startup Anthropic revealed Wednesday that its technology has been "weaponized" by hackers to commit ransomware crimes, prompting a call by a leading consumer advocacy group for Congress to pass "enforceable safeguards" to protect the public.
Anthropic's latest Threat Intelligence Report details "several recent examples" of its artificial intelligence-powered chatbot Claude "being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills."
"The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions," the company said. "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000."
Anthropic said the perpetrator "used AI to what we believe is an unprecedented degree" for their extortion scheme, which is being described as "vibe hacking"—the malicious use of artificial intelligence to manipulate human emotions and trust in order to carry out sophisticated cyberattacks.
"Claude Code was used to automate reconnaissance, harvesting victims' credentials and penetrating networks," the report notes. "Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands."
"Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines," the company added.
Anthropic continued:
This represents an evolution in AI-assisted cybercrime. Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time. We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.
Anthropic said it "banned the accounts in question as soon as we discovered this operation" and "also developed a tailored classifier (an automated screening tool), and introduced a new detection method to help us discover activity like this as quickly as possible in the future."
"To help prevent similar abuse elsewhere, we have also shared technical indicators about the attack with relevant authorities," the company added.
Anthropic's revelation followed last year's announcement by OpenAI that it had terminated ChatGPT accounts allegedly used by cybercriminals linked to China, Iran, North Korea, and Russia.
J.B. Branch, Big Tech accountability advocate at the consumer watchdog Public Citizen, said Wednesday in response to Anthropic's announcement: "Every day we face a new nightmare scenario that tech lobbyists told Congress would never happen. One hacker has proven that agentic AI is a viable path to defrauding people of sensitive data worth millions."
"Criminals worldwide now have a playbook to follow—and countries with lax regulations, like the US, are prime targets for these crimes since AI companies are not subject to binding federal standards and rules," Branch added. "With no public protections in place, the next wave of AI-enabled cybercrime is coming, but Congress continues to sit on its hands. Congress must move immediately to put enforceable safeguards in place to protect the American public."
More than 120 congressional bills have been proposed to regulate artificial intelligence. However, not only has the current GOP-controlled Congress been loath to act, but House Republicans recently attempted to sneak a 10-year moratorium on state-level AI regulation into the so-called One Big Beautiful Bill Act.
The Senate subsequently voted 99-1 to remove the measure from the legislation. However, the "AI Action Plan" announced last month by President Donald Trump revived much of the proposal, prompting critics to describe it as a "zombie moratorium."
Meanwhile, tech billionaires including the Winklevoss twins, who founded the Gemini cryptocurrency exchange, are pouring tens of millions of dollars into the Digital Freedom Fund super political action committee, which aims to support right-wing political candidates with pro-crypto and pro-AI platforms.
"Big Tech learned that throwing money in politics pays off in lax regulations and less oversight," Public Citizen said Thursday. "Money in politics reforms have never been more necessary."
The media outlets claim the company violated copyright laws.
The Intercept, Raw Story, and Alternet joined forces on Wednesday to sue OpenAI for using copyrighted content to train its generative artificial intelligence tool ChatGPT.
The law firm Loevy + Loevy, which is representing the publications, filed the lawsuit in the U.S. District Court for the Southern District of New York. The firm claims OpenAI violated the Digital Millennium Copyright Act (DMCA) by using copyrighted content from news organizations to train ChatGPT.
"Had OpenAI trained ChatGPT using these works as they were published, including author, title, and copyright information, ChatGPT may have learned to respect third-party copyrights, or at least inform ChatGPT users that it was providing responses that were based on the copyrighted works of others. Instead, OpenAI removed that information from its ChatGPT training sets, in violation of the DMCA," the firm said in a statement.
NEWS: @RawStory is suing @OpenAI, creator of #ChatGPT.
"I think it's time for tech companies to be proactive in compensating publishers for their work," Raw Story CEO @JohnByrnester told @corbinbolies of @TheDailyBeast: https://t.co/dVX1q1qsvA
— Raw Story (@RawStory) February 28, 2024
OpenAI is facing multiple lawsuits over its use of copyrighted material, including from comedian Sarah Silverman and The New York Times. The Times lawsuit also references violations of the DMCA. OpenAI recently claimed the Times "hacked" ChatGPT to get it to reproduce its copyrighted content.
Publications like The Associated Press have formed partnerships with OpenAI, licensing their work to the company rather than suing it over the use of copyrighted content. According to the AI-based text analysis company Copyleaks, approximately 60% of the content generated by ChatGPT-3.5 is plagiarized.
OpenAI argues its actions fall under "fair use." In 2016, the U.S. Supreme Court let a lower court ruling stand that said Google had not violated copyright laws by digitizing millions of books, so OpenAI may have a shot at winning with that kind of argument. It remains to be seen if any of the lawsuits against the company will make their way to the Supreme Court.
"Developers like OpenAI have garnered billions in investment and revenue because of AI products fundamentally created with and trained on copyright-protected material," said Loevy + Loevy partner Matt Topic, who represents the news organizations in the suits."The Digital Millennium Copyright Act prohibits the removal of author, title, and copyright notice when there is reason to know it would conceal or facilitate copyright infringement, and unlike traditional copyright infringement claims, it does not require creators to incur the copyright registration fees that often make traditional copyright infringement suits cost prohibitive given the massive scale of OpenAI's infringement."