In this photo illustration a Google logo is displayed on a smartphone with an artificial intelligence symbol on the background.
"Google will probably now work on deploying technology directly that can kill people," said one former ethical AI staffer at the tech giant.
Weeks into U.S. President Donald Trump's second term, Google on Tuesday removed from its Responsible AI principles a commitment to not use artificial intelligence to develop technologies that could cause "overall harm," including weapons and surveillance—walking back a pledge that employees pushed for seven years ago as they reminded the company of its motto at the time: "Don't be evil."
That maxim was deleted from the company's code of conduct shortly after thousands of employees demanded Google end its collaboration with the Pentagon on potential drone technology in 2018, and this week officials at the Silicon Valley giant announced they can no longer promise they'll refrain from AI weapons development.
James Manyika, senior vice president for research, technology, and society, and Demis Hassabis, CEO of the company's AI research lab DeepMind, wrote in a blog post on progress in "Responsible AI" that in "an increasingly complex geopolitical landscape... democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
"And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," they said.
Until Tuesday, Google pledged that "applications we will not pursue" with AI included weapons, surveillance, technologies that "cause or are likely to cause overall harm," and uses that violate international law and human rights.
"Is this as terrifying as it sounds?" asked one journalist and author as the mention of those applications disappeared from the campany's AI Principles page, where it had been included as recently as last week.
Margaret Mitchell, who previously co-led Google's ethical AI team, told Bloomberg that the removal of the principles "is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."
The company's updated AI Principles page says it will implement "appropriate human oversight" to align its work with "widely accepted principles of international law and human rights" and that it will use testing and monitoring "to mitigate unintended or harmful outcomes and avoid unfair bias."
But with Google aligning itself with the Trump administration, human rights advocate Sarah Leah Whitson of Democracy for the Arab World Now called the company a "corporate war machine" following Tuesday's announcement.
Google, along with other tech giants, donated $1 million to Trump's inaugural committee and sent CEO Sundar Pichai to the inauguration, where he sat next to the president's top ally in the industry, Elon Musk.
Since Trump won the election in November, tech companies have also distanced themselves from previous pledges to strive for diversity, equity, and inclusion in their hiring and workplace practices, as Trump has directly targeted DEI programs in the federal government.
"It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public," Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA, told Wired on Tuesday.
At Google, said Koul, there is still "long-standing employee sentiment that the company should not be in the business of war."