
As White House Unveils Plan, A Fresh Call to Halt 'Runaway Corporate AI'
A growing number of experts are calling for a pause on advanced AI development and deployment.
"President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies," Public Citizen's Robert Weissman argued.
As the White House on Thursday unveiled a plan meant to promote "responsible American innovation in artificial intelligence," a leading U.S. consumer advocate added his voice to the growing number of experts calling for a moratorium on the development and deployment of advanced AI technology.
"Today's announcement from the White House is a useful step forward, but much more is needed to address the threats of runaway corporate AI," Robert Weissman, president of the consumer advocacy group Public Citizen, said in a statement.
"But we also need more aggressive measures," Weissman asserted. "President Biden should call for, and Congress should legislate, a moratorium on the deployment of new generative AI technologies, to remain in effect until there is a robust regulatory framework in place to address generative AI's enormous risks."
The White House says its AI plan builds on steps the Biden administration has taken "to promote responsible innovation."
"These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year," the administration said.
The White House plan includes $140 million in National Science Foundation funding for seven new national AI research institutes—there are already 25 such facilities—that "catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good."
The new plan also includes "an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems."
Representatives of some of those companies, including Google, Microsoft, Anthropic, and OpenAI—creator of the popular ChatGPT chatbot—met with Vice President Kamala Harris and other administration officials at the White House on Thursday. According to The New York Times, President Joe Biden "briefly" dropped in on the meeting.
"AI is one of today's most powerful technologies, with the potential to improve people's lives and tackle some of society's biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy," Harris said in a statement.
"The private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products," she added.
Thursday's White House meeting and plan come amid mounting concerns over the potential dangers posed by artificial intelligence on a range of issues, including military applications, life-and-death healthcare decisions, and impacts on the labor force.
In late March, tech leaders and researchers published an open letter, signed by more than 27,000 experts, scholars, and others, urging "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Noting that AI developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control," the letter asks:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?
"Such decisions must not be delegated to unelected tech leaders," the signers asserted. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Last month, Public Citizen argued that "until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause."
"These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," the group said in a report. "However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment."
According to the annual AI Index Report published last month by the Stanford Institute for Human-Centered Artificial Intelligence, nearly three-quarters of researchers believe artificial intelligence "could soon lead to revolutionary social change," while 36% worry that AI decisions "could cause nuclear-level catastrophe."

