

OpenAI CEO Sam Altman speaks during Snowflake Summit 2025 at Moscone Center on June 2, 2025, in San Francisco, California.
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," warned one critic of the arrangement.
Artificial intelligence giant OpenAI, maker of the popular ChatGPT chatbot, announced on Tuesday that it is restructuring as a for-profit company in a move that was quickly denounced by consumer advocacy watchdog Public Citizen.
As explained by The New York Times, OpenAI will now operate as a public benefit corporation (PBC), which the Times describes as "a for-profit corporation designed to create public and social good."
Under the terms of the agreement, the nonprofit OpenAI Foundation will hold a $130 billion stake in the new for-profit company, called OpenAI Group PBC, a stake the firm says will make the foundation "one of the best resourced philanthropic organizations ever."
A source told the Times that OpenAI CEO Sam Altman "does not have a significant stake in the new for-profit company." Microsoft, OpenAI's biggest investor, will hold a $135 billion stake in OpenAI Group PBC, while the remaining shares will be held by "current and former employees and other investors," writes the Times.
Robert Weissman, co-president of Public Citizen, immediately blasted the move and warned that reassurances about the nonprofit OpenAI Foundation maintaining "control" of the project were completely empty.
"Since the November 2023 coup at OpenAI, there is no evidence whatsoever of the nonprofit exerting control over the for-profit, and only evidence of the reverse," he argued, referencing a shakeup at the company nearly two years ago, which saw Altman removed and then restored to his leadership role.
Weissman warned that OpenAI has consistently "rushed dangerous new technologies to market, in advance of competitors and without adequate safety tests and protocols."
As evidence of this, Weissman pointed to Altman's announcement that ChatGPT would soon allow erotica for verified adults, as well as OpenAI's recent introduction of its Sora 2 AI video platform, which he said "threatens to destroy social norms of truth."
"This arrangement will help entrench unaccountable leadership at OpenAI For-profit," he said. "Based on the past two years, we can expect OpenAI Foundation to leave dormant its power (and obligation) to exert control over OpenAI For-profit."
Weissman concluded that the deal to make OpenAI into a for-profit company "should not be allowed to stand" and encouraged the state attorneys general in Delaware and California to "exert their authority to dissolve OpenAI Nonprofit and reallocate its resources to new organizations in the charitable sector."
Weissman's warning about OpenAI becoming a reckless and out-of-control for-profit behemoth was echoed on Tuesday by Steven Adler, an AI researcher and former product safety leader at OpenAI.
Drawing on his experience at the firm, Adler wrote an op-ed for The New York Times in which he questioned OpenAI's commitment to mitigating mental health dangers caused or exacerbated by its flagship chatbot.
"I believe OpenAI wants its products to be safe to use," Adler explained. "But it also has a history of paying too little attention to established risks. This spring, the company released—and after backlash, withdrew—an egregiously 'sycophantic' version of ChatGPT that would reinforce users' extreme delusions, like being targeted by the FBI. OpenAI later admitted to having no sycophancy tests as part of the process for deploying new models, even though those risks have been well known in AI circles since at least 2023."
Adler knocked the company for its overall lack of transparency, and he noted that both it and Google DeepMind seem to have "broken commitments related to publishing safety-testing results before a major product introduction."
Adler chalked up these problems to developing AI in a highly competitive for-profit market in which new capabilities are pushed out before safety risks are properly assessed.
"If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today," he concluded.