What we need is not a renewed arms race fueled by fear, competition, and secrecy, but its opposite: a global initiative to democratize and demilitarize technological development.
“History repeats itself, first as tragedy, then as farce.” Marx’s aphorism feels newly prescient. Last week, the U.S. Department of Energy issued a jingoistic call on social media for a “new Manhattan Project,” this time to win the so-called race for artificial intelligence supremacy.
But the Manhattan Project is no blueprint. It is a warning—a cautionary tale of what happens when science is conscripted into the service of state power, when open inquiry gives way to nationalist rivalry, and when the cult of progress is severed from ethical responsibility. It shows how secrecy breeds fear, corrodes public trust, and undermines democratic institutions.
The Manhattan Project may have been, as President Harry Truman claimed, “the greatest scientific gamble in history.” But it also represented a gamble with the continuity of life on Earth. It brought the world to the brink of annihilation—an abyss into which we still peer. A second such project may well push us over the edge.
The parallels between the origins of the atomic age and the rise of artificial intelligence are striking. In both, the very individuals at the forefront of technological innovation were also among the first to sound the alarm.
During World War II, atomic scientists raised concerns about the militarization of nuclear energy. Yet their dissent was suppressed under the strictures of wartime secrecy, and their continued participation was justified by the perceived imperative to build the bomb before Nazi Germany. In reality, that threat had largely subsided by the time the Manhattan Project gathered momentum, as Germany had already abandoned its efforts to develop a nuclear weapon.
The first technical study assessing the feasibility of the bomb concluded that it could indeed be built but warned that “owing to the spreading of radioactive substances with the wind, the bomb could probably not be used without killing large numbers of civilians, and this may make it unsuitable as a weapon…”
When in 1942 scientists theorized that the first atomic chain reaction might ignite the atmosphere, Arthur Holly Compton recalled thinking that if such a risk proved real, then “these bombs must never be made… better to accept the slavery of the Nazis than to run a chance of drawing the final curtain on mankind.”
Leo Szilard drafted a petition urging President Truman to refrain from using the atomic bomb against Japan. He warned that such bombings would be both morally indefensible and strategically shortsighted: “A nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction,” he wrote, “may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.”
Today, we cannot hide behind the pretext of world war. We cannot claim ignorance. Nor can we invoke the specter of an existential adversary. The warnings surrounding artificial intelligence are clear, public, and unequivocal.
In 2014, Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” More recently, Geoffrey Hinton, widely called the “godfather of AI,” resigned from Google, citing mounting concerns about the “existential risk” posed by unchecked AI development. Soon after, a coalition of researchers and industry leaders issued a joint statement asserting that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Around the same time, an open letter, signed by over a thousand experts and tens of thousands of others, called for a temporary pause on AI development to reflect on its trajectory and long-term consequences.
Yet the race to develop ever more powerful artificial intelligence continues unabated, propelled less by foresight than by fear that halting progress would mean falling behind rivals, particularly China. But in the face of such profound risks, one must ask: win what, exactly?
Reflecting on the similar failure to confront the perils of technological advancement in his own time, Albert Einstein warned, “The unleashed power of the atom has changed everything except our mode of thinking, and thus we drift toward unparalleled catastrophe.” His words remain no less urgent today.
The lesson should be obvious: We cannot afford to repeat the mistakes of the atomic age. To invoke the Manhattan Project as a model for AI development is not only historically ignorant but also politically reckless.
What we need is not a renewed arms race fueled by fear, competition, and secrecy, but its opposite: a global initiative to democratize and demilitarize technological development, one that prioritizes human needs, centers dignity and justice, and advances the collective well-being of all.
More than 30 years ago, Daniel Ellsberg, the former nuclear war planner turned whistleblower, called for a different kind of Manhattan Project: one not to build new weapons, but to undo the harm of the first and to dismantle the doomsday machines we already have. That vision remains the only rational and morally defensible Manhattan Project worth pursuing.
We cannot afford to recognize and act upon this only in hindsight, as was the case with the atomic bomb. As Joseph Rotblat, the sole scientist to resign from the Project on ethical grounds, reflected on their collective failure:
The nuclear age is the creation of scientists… in total disregard for the basic tenets of science… openness and universality. It was conceived in secrecy, and usurped—even before birth—by one state to give it political dominance. With such congenital defects, and being nurtured by an army of Dr. Strangeloves, it is no wonder that the creation grew into a monster… We, scientists, have a great deal to answer for.
If the path we are on leads to disaster, the answer is not to accelerate. As physicians Bernard Lown and Evgeni Chazov warned during the height of the Cold War arms race: “When racing toward a precipice, it is progress to stop.”
We must stop not out of opposition to progress, but to pursue a different kind of progress: one rooted in scientific ethics, a respect for humanity, and a commitment to our collective survival.
If we are serious about the threats posed by artificial intelligence, we must abandon the illusion that safety lies in outpacing our rivals. As those most intimately familiar with this technology have warned, there can be no victory in this race, only an acceleration of a shared catastrophe.
We have thus far narrowly survived the nuclear age. But if we fail to heed its lessons and forsake our own human intelligence, we may not survive the age of artificial intelligence.