"Countries with lax regulations, like the US, are prime targets for these crimes," said Public Citizen's J.B. Branch.
The San Francisco-based artificial intelligence startup Anthropic revealed Wednesday that its technology has been "weaponized" by hackers to commit ransomware crimes, prompting a call by a leading consumer advocacy group for Congress to pass "enforceable safeguards" to protect the public.
Anthropic's latest Threat Intelligence Report details "several recent examples" of its artificial intelligence-powered chatbot Claude "being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills."
"The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions," the company said. "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000."
Anthropic said the perpetrator "used AI to what we believe is an unprecedented degree" for their extortion scheme, which the company describes as "vibe hacking"—the malicious use of artificial intelligence to manipulate human emotions and trust in order to carry out sophisticated cyberattacks.
"Claude Code was used to automate reconnaissance, harvesting victims' credentials and penetrating networks," the report notes. "Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands."
"Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines," the company added.
Anthropic continued:
This represents an evolution in AI-assisted cybercrime. Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time. We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.
Anthropic said it "banned the accounts in question as soon as we discovered this operation" and "also developed a tailored classifier (an automated screening tool), and introduced a new detection method to help us discover activity like this as quickly as possible in the future."
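Anthropic has not published how its screening classifier works. In general terms, though, such tools score new account activity against patterns learned from previously confirmed abuse and route high-scoring cases to human review. The sketch below illustrates only that general idea; it is a minimal, hypothetical example using scikit-learn, and none of the model choices or training strings come from Anthropic.

```python
# Minimal sketch of an automated screening classifier, in the general
# spirit of the "tailored classifier" Anthropic describes. All training
# examples here are invented placeholders, not real abuse data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled history: 1 = previously confirmed abusive
# activity, 0 = ordinary usage.
texts = [
    "automate credential harvesting across the target network",  # abusive
    "draft a ransom demand tailored to the victim's finances",   # abusive
    "summarize this quarterly report for my team",               # benign
    "help me debug a failing unit test in my web app",           # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a deliberately simple
# stand-in for whatever model a provider actually deploys.
screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
screener.fit(texts, labels)

# New activity is scored; high-probability hits are routed to
# human review rather than being banned automatically.
score = screener.predict_proba(
    ["enumerate exposed hosts and pull their credentials"])[0][1]
if score > 0.5:
    print(f"flag for review (abuse score {score:.2f})")
```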
"To help prevent similar abuse elsewhere, we have also shared technical indicators about the attack with relevant authorities," the company added.
Anthropic's revelation followed last year's announcement by OpenAI that it had terminated ChatGPT accounts allegedly used by cybercriminals linked to China, Iran, North Korea, and Russia.
J.B. Branch, Big Tech accountability advocate at the consumer watchdog Public Citizen, said Wednesday in response to Anthropic's announcement: "Every day we face a new nightmare scenario that tech lobbyists told Congress would never happen. One hacker has proven that agentic AI is a viable path to defrauding people of sensitive data worth millions."
"Criminals worldwide now have a playbook to follow—and countries with lax regulations, like the US, are prime targets for these crimes since AI companies are not subject to binding federal standards and rules," Branch added. "With no public protections in place, the next wave of AI-enabled cybercrime is coming, but Congress continues to sit on its hands. Congress must move immediately to put enforceable safeguards in place to protect the American public."
More than 120 congressional bills have been proposed to regulate artificial intelligence. However, not only has the current GOP-controlled Congress been loath to act; House Republicans also recently attempted to sneak a 10-year moratorium on state-level AI regulation into the so-called One Big Beautiful Bill Act.
The Senate subsequently voted 99-1 to remove the measure from the legislation. However, the "AI Action Plan" announced last month by President Donald Trump revived much of the proposal, prompting critics to describe it as a "zombie moratorium."
Meanwhile, tech billionaires including the Winklevoss twins, who founded the Gemini cryptocurrency exchange, are pouring tens of millions of dollars into the Digital Freedom Fund super political action committee, which aims to support right-wing political candidates with pro-crypto and pro-AI platforms.
"Big Tech learned that throwing money in politics pays off in lax regulations and less oversight," Public Citizen said Thursday. "Money in politics reforms have never been more necessary."
"This should be obvious but apparently we have to say it: Keep AI out of children's toys," said one advocacy group.
The watchdog group Public Citizen on Tuesday denounced a recently unveiled "strategic collaboration" between the toy company Mattel and the artificial intelligence firm OpenAI, maker of ChatGPT, alleging that the partnership is "reckless and dangerous."
Last week, the two companies said they had entered into an agreement to "support AI-powered products and experiences based on Mattel's brands."
"By using OpenAI's technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety," according to the statement. They expect to announce their first shared product later this year.
Also, "Mattel will incorporate OpenAI's advanced AI tools like ChatGPT Enterprise into its business operations to enhance product development and creative ideation, drive innovation, and deepen engagement with its audience," according to the statement.
Mattel's brands include several household names, such as Barbie, Hot Wheels, and Polly Pocket.
"This should be obvious but apparently we have to say it: Keep AI out of children's toys. Our kids should not be used as a social experiment. This partnership is reckless and dangerous. Mattel should announce immediately that it will NOT sell toys that use AI," wrote Public Citizen on X on Tuesday.
In a related but separate statement, Robert Weissman, co-president of Public Citizen, wrote on Tuesday that "endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children."
"It may undermine social development, interfere with children's ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm," he added.
The statement from Public Citizen is not the only instance where AI products for children have received pushback recently.
Last month, The New York Times reported that Google is rolling out its Gemini artificial intelligence chatbot for kids who have parent-managed Google accounts and are under 13. In response, a coalition led by Fairplay, a children's media and marketing industry watchdog, and the Electronic Privacy Information Center (EPIC) launched a campaign to stop the rollout.
"This decision poses serious privacy and online safety risks to young children and likely violates the Children's Online Privacy Protection Act (COPPA)," according to a statement from Fairplay and EPIC.
Citing the "substantial harm that AI chatbots like Gemini pose to children, and the absence of evidence that these products are safe for kids," the coalition sent a letter to Google CEO Sundar Pichai requesting the company suspend the rollout, and a second letter to the Federal Trade Commission requesting the FTC investigate whether Google has violated COPPA in rolling out Gemini to children under the age of 13.
"Many nations are looking to Israel and its use of AI in Gaza with admiration and jealousy," said one expert. "Expect to see a form of Google, Microsoft, and Amazon-backed AI in other war zones soon."
Several recent journalistic investigations—including one published Tuesday by The Associated Press—have deepened the understanding of how Israeli forces are using artificial intelligence and cloud computing systems sold by U.S. tech titans for the mass surveillance and killing of Palestinians in Gaza.
The AP's Michael Biesecker, Sam Mednick, and Garance Burke found that Israel's use of Microsoft and OpenAI technology "skyrocketed" following Hamas' October 7, 2023 attack on Israel.
"This is the first confirmation we have gotten that commercial AI models are directly being used in warfare," Heidy Khlaaf, chief artificial intelligence scientist at the AI Now Institute and a former senior safety engineer at OpenAI, which makes ChatGPT, told the AP. "The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward."
As Biesecker, Mednick, and Burke noted:
Israel's goal after the attack that killed about 1,200 people and took over 250 hostages was to eradicate Hamas, and its military has called AI a "game changer" in yielding targets more swiftly. Since the war started, more than 50,000 people have died in Gaza and Lebanon and nearly 70% of the buildings in Gaza have been devastated, according to health ministries in Gaza and Lebanon.
According to the AP report, Israel buys advanced AI models from OpenAI through Microsoft's Azure cloud platform. While OpenAI said it has no partnership with the Israel Defense Forces (IDF), in early 2024 the company quietly removed language from its usage policy that prohibited military use of its technology.
The AP reporters also found that Google and Amazon provide cloud computing and AI services to the IDF via Project Nimbus, a $1.2 billion contract signed in 2021. The IDF also relies on server farms, or data centers, supplied by Cisco and Dell. Red Hat, an independent IBM subsidiary, sells cloud computing services to the IDF, and Microsoft partner Palantir Technologies has a "strategic partnership" with Israel's military.
Google told the AP that the company is committed to creating AI "that protects people, promotes global growth, and supports national security."
However, Google recently removed from its Responsible AI principles a commitment to not use AI for the development of technology that could cause "overall harm," including weapons and surveillance.
The AP investigation follows a Washington Post probe published last month detailing how Google has been "directly assisting" the IDF and Israel's Ministry of Defense "despite the company's efforts to publicly distance itself from the country's national security apparatus after employee protests against a cloud computing contract with Israel's government."
Google fired dozens of workers following their participation in "No Tech for Apartheid" protests against the use of the company's products and services by forces accused of genocide in Gaza.
"A Google employee warned in one document that if the company didn't quickly provide more access, the military would turn instead to Google's cloud rival Amazon, which also works with Israel's government under the Nimbus contract," wrote Gerrit De Vynck, author of the Post report.
"As recently as November 2024, by which time a year of Israeli airstrikes had turned much of Gaza to rubble, documents show Israel's military was still tapping Google for its latest AI technology," De Vynck added. "Late that month, an employee requested access to the company's Gemini AI technology for the IDF, which wanted to develop its own AI assistant to process documents and audio, according to the documents."
Previous investigations have detailed how the IDF also uses Habsora, an Israeli AI system that can automatically generate airstrike targets far faster than human analysts ever could.
"In the past, there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day," former IDF Chief of Staff Aviv Kochavi told Yuval Abraham of +972 Magazine, a joint Israeli-Palestinian publication, in 2023. Another intelligence source said that Habsora has transformed the IDF into a "mass assassination factory" in which the "emphasis is on quantity and not quality" of kills.
Compounding the crisis, in the heated hours following the October 7 attack, mid-ranking IDF officers were empowered to order attacks not only on senior Hamas commanders but on any fighter in the resistance group, no matter how junior. What's more, the officers were allowed to risk up to 20 civilian lives in each strike, and up to 500 noncombatant lives per day. Days later, that cap was lifted: officers could order as many strikes as they judged lawful, with no limit on civilian harm.
Senior IDF commanders sometimes approved strikes they knew could kill more than 100 civilians if the target was deemed important enough. In one AI-aided airstrike targeting one senior Hamas commander, the IDF dropped multiple U.S.-supplied 2,000-pound bombs, which can level an entire city block, on the Jabalia refugee camp in October 2023. According to the U.K.-based airstrike monitor Airwars, the bombing killed at least 126 people, 68 of them children, and wounded 280 others. Hamas' Qassam Brigades said four Israeli and three international hostages were also killed in the attack.
Then there's the mass surveillance element. Independent journalist Antony Loewenstein recently wrote for Middle East Eye that "corporate behemoths are storing massive amounts of information about every aspect of Palestinian life in Gaza, the occupied West Bank, and elsewhere."
"How this data will be used, in a time of war and mass surveillance, is obvious," Loewenstein continued. "Israel is building a huge database, Chinese-state style, on every Palestinian under occupation: what they do, where they go, who they see, what they like, what they want, what they fear, and what they post online."
"Palestinians are guinea pigs—but this ideology and work doesn't stay in Palestine," he said. "Silicon Valley has taken note, and the new Trump era is heralding an ever-tighter alliance among Big Tech, Israel, and the defense sector. There's money to be made, as AI currently operates in a regulation-free zone globally."
"Think about how many other states, both democratic and dictatorial, would love to have such extensive information about every citizen, making it far easier to target critics, dissidents, and opponents," Loewenstein added. "With the
far right on the march globally—from Austria to Sweden, France to Germany, and the U.S. to Britain—Israel's ethno-nationalist model is seen as attractive and worth mimicking.