OpenAI makes ChatGPT, the world's most popular chatbot.
"Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words," warned one policy analyst.
ChatGPT maker OpenAI this week quietly removed language from its usage policy that prohibited military use of its technology, a move with serious implications given the increased use of artificial intelligence on battlefields, including in Gaza.
ChatGPT is a free tool that lets users enter prompts to receive text or images generated by AI. The Intercept's Sam Biddle reported Friday that prior to Wednesday, OpenAI's permissible uses page banned "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare."
Although the company's new policy stipulates that users should not harm human beings or "develop or use weapons," experts said the removal of the "military and warfare" language leaves open the door for lucrative contracts with U.S. and other militaries.
"Given the use of AI systems in the
targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy," Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.
"The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement," she added.
An OpenAI spokesperson told Common Dreams in an email:
Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with [the Defense Advanced Research Projects Agency] to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under "military" in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.
As AI advances, so does its weaponization. Experts warn that AI applications, including lethal autonomous weapons systems, commonly called "killer robots," could pose a potentially existential threat to humanity, underscoring the imperative of arms control measures to slow the pace of weaponization.
That's the goal of nuclear weapons legislation introduced last year in the U.S. Congress. The bipartisan Block Nuclear Launch by Autonomous Artificial Intelligence Act—introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.)—asserts that "any decision to launch a nuclear weapon should not be made" by AI.