



The president is promoting a future in which accountability for the impact of artificial intelligence systems is further out of reach.
Last week, President Trump signed an executive order that proposes to challenge and dismantle a range of “cumbersome” artificial intelligence (AI) laws at the state and city levels in the US and replace them with a not-yet-defined national AI regulatory framework.
The move is supposedly an effort to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” But at what cost?
By attempting to void existing regulatory frameworks, the Trump administration is promoting a future in which many components of society are reliant on AI, but where accountability for the impact of these systems is further out of reach.
In an ideal world, a national regulatory framework governing AI would expand safeguards, transparency, and access to justice for people across the US who are harmed by algorithmic and automated systems, in a consistent and equitable way. But in the context of this administration, the reality is likely to be a regulatory vacuum.
Since taking office in January, the Trump administration has ripped up previous federal policies governing discriminatory AI, removed safeguards across government-held data, and given tech companies expansive access to sensitive personal federal data. It has also increased its financial and political stakes in the tech industry and appointed an industry leader with a personal interest in deregulation to oversee the government’s approach to AI.
We know that integrating algorithms and automation into any process creates particular risks to human rights. AI systems have led to the wrong person being imprisoned, workers mistakenly fired, and lives ended too soon, without adequate accountability for the companies behind the tech. It is imperative that this technology is created and deployed with the utmost care.
Trump’s executive order won’t just impact AI either. It threatens to rescind federal support for internet connectivity infrastructure via the BEAD (Broadband Equity Access and Deployment) Program, and would withhold funding to states that do not revoke AI accountability laws. This could significantly restrict equitable access to affordable, reliable broadband internet services for many.
There are worthwhile, evidence-driven efforts to secure accountability for AI injustices. The administration could, for example, adopt the AI Bill of Rights. But rather than increase steps toward accountability, this executive order marks yet another step in the Trump administration’s erosion of rights.
"The only thing that can force those big companies to do more research on safety is government regulation."
Warning that the pace of development of artificial intelligence is "much faster" than he anticipated and is taking place in the absence of far-reaching regulations, the computer scientist often called the "Godfather of AI" said Friday that he believes the chances are growing that AI could wipe out humanity.
Speaking to BBC Radio 4's "Today" program, Geoffrey Hinton said there is a "10% to 20%" chance AI could lead to human extinction in the next three decades.
Previously, Hinton had said he saw a 10% chance of that happening.
"We've never had to deal with things more intelligent than ourselves before," Hinton explained. "And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There's a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that's about the only example I know of."
Hinton, who was awarded the Nobel Prize in physics this year for his research into machine learning and AI, left his job at Google last year, saying he wanted to be able to speak out more about the dangers of unregulated AI.
"Just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely."
He has warned that AI chatbots could be used by authoritarian leaders to manipulate the public, and said last year that "the kind of intelligence we're developing is very different from the intelligence we have."
On Friday, Hinton said he is particularly worried that "the invisible hand" of the market will not keep humans safe from a technology that surpasses their intelligence, and called for strict regulations of AI.
"Just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely," said Hinton.
More than 120 bills have been proposed in the U.S. Congress to regulate AI robocalls, the technology's role in national security, and other issues, while the Biden administration has taken some action to rein in AI development.
An executive order calling for "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" said that "harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks." President-elect Donald Trump is expected to rescind the order.
The White House Blueprint for an AI Bill of Rights calls for safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation when AI is used, and the ability to opt out of automated systems.
But the European Union's Artificial Intelligence Act was deemed a "failure" by rights advocates this year, after industry lobbying helped ensure the law included numerous loopholes and exemptions for law enforcement and migration authorities.
"The only thing that can force those big companies to do more research on safety," said Hinton on Friday, "is government regulation."
"Today, the OMB's guidance takes us one step further down the path of facing a technology-rich future that begins to address its harms," said Maya Wiley.
U.S. Vice President Kamala Harris announced on Thursday Office of Management and Budget (OMB) guidance regarding how the federal government will use new artificial intelligence tools going forward, and it received praise from some progressives.
The guidance focuses on how federal agencies can benefit from AI tools, as well as the risks involved in putting them to use.
"The order directed sweeping action to strengthen AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more," says a White House fact sheet.
At the first-ever Global AI Summit last year, I laid out our vision for a future where AI advances the public interest.
To help build that future, I am announcing our first government-wide policy to promote the safe, secure, and responsible use of AI. https://t.co/6NPXLWn8Oc
— Vice President Kamala Harris (@VP) March 28, 2024
The guidance says all federal agencies will now have a senior leader in charge of the use of AI tools, agencies will have to publicly report how they're using AI, agencies will be required to create "concrete safeguards" to protect the rights of citizens, and more.
Damon T. Hewitt, president and executive director of the Lawyers' Committee for Civil Rights Under Law, called it "a significant step to implement meaningful safeguards on the government's use of artificial intelligence."
Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, said it's necessary to make sure technology "serves us," rather than "harms us," and it should "advance our democracy rather than disrupt it."
"Today, the OMB's guidance takes us one step further down the path of facing a technology-rich future that begins to address its harms," Wiley said. "The guidance puts rights-protecting principles of the White House's historic AI Bill of Rights into practice across agencies, and it is an important step in advancing civil rights protections in AI deployment at federal agencies. It extends existing civil rights protections, helping to bring them into the era of AI."
The Biden administration released its AI Bill of Rights blueprint in 2022, an outline for how new AI tools should be developed and used in ways that protect consumers. It also secured a voluntary AI safeguard agreement with seven major AI developers in July of last year.