A hooded, shadowy figure is set against a backdrop of computer code in this animated image.

(Photo: Comparitech/flickr/cc)

'Nightmare Scenario': Watchdog Says AI Cybercrime Shows Vital Need for Regulation

"Countries with lax regulations, like the US, are prime targets for these crimes," said Public Citizen's J.B. Branch.

The San Francisco-based artificial intelligence startup Anthropic revealed Wednesday that its technology has been "weaponized" by hackers to commit ransomware crimes, prompting a call by a leading consumer advocacy group for Congress to pass "enforceable safeguards" to protect the public.

Anthropic's latest Threat Intelligence Report details "several recent examples" of its artificial intelligence-powered chatbot Claude "being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills."

"The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions," the company said. "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000."

Anthropic said the perpetrator "used AI to what we believe is an unprecedented degree" for their extortion scheme, which is being described as "vibe hacking"—the malicious use of artificial intelligence to manipulate human emotions and trust in order to carry out sophisticated cyberattacks.

"Claude Code was used to automate reconnaissance, harvesting victims' credentials and penetrating networks," the report notes. "Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands."

"Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines," the company added.

Anthropic continued:

This represents an evolution in AI-assisted cybercrime. Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time. We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.

Anthropic said it "banned the accounts in question as soon as we discovered this operation" and "also developed a tailored classifier (an automated screening tool), and introduced a new detection method to help us discover activity like this as quickly as possible in the future."

"To help prevent similar abuse elsewhere, we have also shared technical indicators about the attack with relevant authorities," the company added.

Anthropic's revelation followed last year's announcement by OpenAI that it had terminated ChatGPT accounts allegedly used by cybercriminals linked to China, Iran, North Korea, and Russia.

J.B. Branch, Big Tech accountability advocate at the consumer watchdog Public Citizen, said Wednesday in response to Anthropic's announcement: "Every day we face a new nightmare scenario that tech lobbyists told Congress would never happen. One hacker has proven that agentic AI is a viable path to defrauding people of sensitive data worth millions."

"Criminals worldwide now have a playbook to follow—and countries with lax regulations, like the US, are prime targets for these crimes since AI companies are not subject to binding federal standards and rules," Branch added. "With no public protections in place, the next wave of AI-enabled cybercrime is coming, but Congress continues to sit on its hands. Congress must move immediately to put enforceable safeguards in place to protect the American public."

More than 120 congressional bills have been proposed to regulate artificial intelligence. However, not only has the current GOP-controlled Congress been loath to act, but House Republicans recently attempted to sneak a 10-year moratorium on state-level AI regulation into the so-called One Big Beautiful Bill Act.

The Senate subsequently voted 99-1 to remove the measure from the legislation. However, the "AI Action Plan" announced last month by President Donald Trump revived much of the proposal, prompting critics to describe it as a "zombie moratorium."

Meanwhile, tech billionaires including the Winklevoss twins, who founded the Gemini cryptocurrency exchange, are pouring tens of millions of dollars into the Digital Freedom Fund super political action committee, which aims to support right-wing political candidates with pro-crypto and pro-AI platforms.

"Big Tech learned that throwing money in politics pays off in lax regulations and less oversight," Public Citizen said Thursday. "Money in politics reforms have never been more necessary."

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.