

"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," warned one critic of the arrangement.
Artificial intelligence giant OpenAI, maker of the popular ChatGPT chatbot, announced on Tuesday that it is restructuring as a for-profit company in a move that was quickly denounced by consumer advocacy watchdog Public Citizen.
As explained by The New York Times, OpenAI will now operate as a public benefit corporation (PBC), which the Times describes as "a for-profit corporation designed to create public and social good."
Under the terms of the agreement, the nonprofit OpenAI Foundation will hold a $130 billion stake in the new for-profit company, called OpenAI Group PBC, which the firm says will make it "one of the best resourced philanthropic organizations ever."
A source told the Times that OpenAI CEO Sam Altman "does not have a significant stake in the new for-profit company." Microsoft, OpenAI's biggest investor, will hold a $135 billion stake in OpenAI Group PBC, while the remaining shares will be held by "current and former employees and other investors," writes the Times.
Robert Weissman, co-president of Public Citizen, immediately blasted the move and warned that reassurances about the nonprofit OpenAI Foundation maintaining "control" of the project were completely empty.
"Since the November 2023 coup at OpenAI, there is no evidence whatsoever of the nonprofit exerting control over the for-profit, and only evidence of the reverse," he argued, referencing a shakeup at the company nearly two years ago, which saw Altman removed and then restored to his leadership role.
Weissman warned that OpenAI has consistently "rushed dangerous new technologies to market, in advance of competitors and without adequate safety tests and protocols."
As evidence of this, Weissman pointed to Altman's announcement that ChatGPT would soon allow for erotica for verified adults, as well as OpenAI's recent introduction of its Sora 2 AI video platform that he said "threatens to destroy social norms of truth."
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," he said. "Based on the past two years, we can expect OpenAI Foundation to leave dormant its power (and obligation) to exert control over OpenAI For-profit."
Weissman concluded that the deal to make OpenAI into a for-profit company "should not be allowed to stand" and encouraged the state attorneys general in Delaware and California to "exert their authority to dissolve OpenAI Nonprofit and reallocate its resources to new organizations in the charitable sector."
Weissman's warning about OpenAI becoming a reckless and out-of-control for-profit behemoth was echoed on Tuesday by Steven Adler, an AI researcher and former product safety leader at OpenAI.
Drawing on his experience at the firm, Adler wrote an opinion piece for The New York Times in which he questioned OpenAI's commitment to mitigating mental health dangers caused or exacerbated by its flagship chatbot.
"I believe OpenAI wants its products to be safe to use," Adler explained. "But it also has a history of paying too little attention to established risks. This spring, the company released—and after backlash, withdrew—an egregiously 'sycophantic' version of ChatGPT that would reinforce users' extreme delusions, like being targeted by the FBI. OpenAI later admitted to having no sycophancy tests as part of the process for deploying new models, even though those risks have been well known in AI circles since at least 2023."
Adler knocked the company for its overall lack of transparency, and he noted that both it and Google DeepMind seem to have "broken commitments related to publishing safety-testing results before a major product introduction."
Adler chalked up these problems to developing AI in a highly competitive for-profit market in which new capabilities are pushed out before safety risks are properly assessed.
"If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today," he concluded.
"Countries with lax regulations, like the US, are prime targets for these crimes," said Public Citizen's J.B. Branch.
The San Francisco-based artificial intelligence startup Anthropic revealed Wednesday that its technology has been "weaponized" by hackers to commit ransomware crimes, prompting a call by a leading consumer advocacy group for Congress to pass "enforceable safeguards" to protect the public.
Anthropic's latest Threat Intelligence Report details "several recent examples" of its artificial intelligence-powered chatbot Claude "being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills."
"The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions," the company said. "Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000."
Anthropic said the perpetrator "used AI to what we believe is an unprecedented degree" for their extortion scheme, which is being described as "vibe hacking"—the malicious use of artificial intelligence to manipulate human emotions and trust in order to carry out sophisticated cyberattacks.
"Claude Code was used to automate reconnaissance, harvesting victims' credentials and penetrating networks," the report notes. "Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands."
"Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines," the company added.
Anthropic continued:
This represents an evolution in AI-assisted cybercrime. Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time. We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.
Anthropic said it "banned the accounts in question as soon as we discovered this operation" and "also developed a tailored classifier (an automated screening tool), and introduced a new detection method to help us discover activity like this as quickly as possible in the future."
"To help prevent similar abuse elsewhere, we have also shared technical indicators about the attack with relevant authorities," the company added.
Anthropic's revelation followed last year's announcement by OpenAI that it had terminated ChatGPT accounts allegedly used by cybercriminals linked to China, Iran, North Korea, and Russia.
J.B. Branch, Big Tech accountability advocate at the consumer watchdog Public Citizen, said Wednesday in response to Anthropic's announcement: "Every day we face a new nightmare scenario that tech lobbyists told Congress would never happen. One hacker has proven that agentic AI is a viable path to defrauding people of sensitive data worth millions."
"Criminals worldwide now have a playbook to follow—and countries with lax regulations, like the US, are prime targets for these crimes since AI companies are not subject to binding federal standards and rules," Branch added. "With no public protections in place, the next wave of AI-enabled cybercrime is coming, but Congress continues to sit on its hands. Congress must move immediately to put enforceable safeguards in place to protect the American public."
More than 120 congressional bills have been proposed to regulate artificial intelligence. However, not only has the current GOP-controlled Congress been loath to act, but House Republicans recently attempted to sneak a 10-year moratorium on state-level AI regulation into the so-called One Big Beautiful Bill Act.
The Senate subsequently voted 99-1 to remove the measure from the legislation. However, the "AI Action Plan" announced last month by President Donald Trump revived much of the proposal, prompting critics to describe it as a "zombie moratorium."
Meanwhile, tech billionaires including the Winklevoss twins, who founded the Gemini cryptocurrency exchange, are pouring tens of millions of dollars into the Digital Freedom Fund super political action committee, which aims to support right-wing political candidates with pro-crypto and pro-AI platforms.
"Big Tech learned that throwing money in politics pays off in lax regulations and less oversight," Public Citizen said Thursday. "Money in politics reforms have never been more necessary."
"This should be obvious but apparently we have to say it: Keep AI out of children's toys," said one advocacy group.
The watchdog group Public Citizen on Tuesday denounced a recently unveiled "strategic collaboration" between the toy company Mattel and the artificial intelligence firm OpenAI, maker of ChatGPT, alleging that the partnership is "reckless and dangerous."
Last week, the two companies said that they have entered into an agreement to "support AI-powered products and experiences based on Mattel's brands."
"By using OpenAI's technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety," according to the statement. They expect to announce their first shared product later this year.
Also, "Mattel will incorporate OpenAI's advanced AI tools like ChatGPT Enterprise into its business operations to enhance product development and creative ideation, drive innovation, and deepen engagement with its audience," according to the statement.
Mattel's brands include several household names, such as Barbie, Hot Wheels, and Polly Pocket.
"This should be obvious but apparently we have to say it: Keep AI out of children's toys. Our kids should not be used as a social experiment. This partnership is reckless and dangerous. Mattel should announce immediately that it will NOT sell toys that use AI," wrote Public Citizen on X on Tuesday.
In a related but separate statement, Robert Weissman, co-president of Public Citizen, wrote on Tuesday that "endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children."
"It may undermine social development, interfere with children's ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm," he added.
Public Citizen's statement is not the only recent instance of pushback against AI products aimed at children.
Last month, The New York Times reported that Google is rolling out its Gemini artificial intelligence chatbot for kids who have parent-managed Google accounts and are under 13. In response, a coalition led by Fairplay, a children's media and marketing industry watchdog, and the Electronic Privacy Information Center (EPIC) launched a campaign to stop the rollout.
"This decision poses serious privacy and online safety risks to young children and likely violates the Children's Online Privacy Protection Act (COPPA)," according to a statement from Fairplay and EPIC.
Citing the "substantial harm that AI chatbots like Gemini pose to children, and the absence of evidence that these products are safe for kids," the coalition sent a letter to Google CEO Sundar Pichai requesting the company suspend the rollout, and a second letter to the Federal Trade Commission requesting the FTC investigate whether Google has violated COPPA in rolling out Gemini to children under the age of 13.