a computer generated image of artificial intelligence featuring a microchip, code, and a human-like face

The U.S., U.K., and other countries reached a deal on November 26, 2023 to ensure the "secure" design of artificial intelligence software.

(Photo: Monsitj/Getty Images)

US Among 18 Countries to Reach Deal on Keeping AI 'Secure by Design'

The agreement "is a step in the right direction for security," said one observer, "but that's not the only area where AI can cause harm."

Like an executive order introduced by U.S. President Joe Biden last month, a global agreement on artificial intelligence released Sunday was seen by experts as a positive step forward—but one that would require more action from policymakers to ensure AI isn't harmful to workers, democratic systems, and the privacy of people around the world.

The 20-page agreement, first reported Monday, was reached by 18 countries including the U.S., U.K., Germany, Israel, and Nigeria, and was billed as a deal that would push companies to keep AI systems "secure by design."

The agreement is nonbinding and deals with four main areas: secure design, development, deployment, and operation and maintenance.

Policymakers including the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, forged the agreement with a heavy focus on keeping AI technology safe from hackers and security breaches.

The document includes recommendations such as implementing standard cybersecurity best practices, monitoring the security of an AI supply chain across the system's life cycle, and releasing models "only after subjecting them to appropriate and effective security evaluation."

"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly toldReuters. The document, she said, represents an "agreement that the most important thing that needs to be done at the design phase is security."

Norm Eisen, senior fellow at the think tank Brookings Institution, said the deal "is a step in the right direction for security" in a field that U.K. experts recently warned is vulnerable to hackers who could launch "prompt injection" attacks, causing an AI model to behave in a way that the designer didn't intend or reveal private information.

"But that's not the only area where AI can cause harm," Eisen said on social media.

Eisen pointed to a recent Brrokings analysis about how AI could "weaken" democracy in the U.S. and other countries, worsening the "flood of misinformation" with deepfakes and other AI-generated images.

"Advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI," wrote Eisen, along with Nicol Turner Lee, Colby Galliher, and Jonathan Katz last week. "AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents... Much worse, voice and image replicas harnessed from generative AI tools can also mimic candidates and elected officials. These tactics could give rise to voter confusion and degrade confidence in the electoral process if voters become aware of such scams."

At AppleInsider, tech writer Malcolm Owen denounced Sunday's agreement as "toothless and weak," considering it does not require policymakers or companies to adhere to the guidelines.

Owen noted that tech firms including Google, Amazon, and Palantir consulted with global government agencies in developing the guidelines.

"These are all guidelines, not rules that must be obeyed," wrote Owen. "There are no penalties for not following what is outlined, and no introduction of laws. The document is just a wish list of things that governments want AI makers to really think about... And, it's not clear when or if legislation will arrive mandating what's in the document."

European Union member countries passed a draft of what the European Parliament called "the world's first comprehensive AI law" earlier this year with the AI Act. The law would require AI systems makers to publish summaries of the training material they use and prove that they will not generate illegal content. It would also bar companies from scraping biometric data from social media, which a U.S. AI company was found to be doing last year.

"AI tools are evolving rapidly," said Eisen on Monday, "and policymakers need to keep up."

Join Us: News for people demanding a better world


Common Dreams is powered by optimists who believe in the power of informed and engaged citizens to ignite and enact change to make the world a better place.

We're hundreds of thousands strong, but every single supporter makes the difference.

Your contribution supports this bold media model—free, independent, and dedicated to reporting the facts every day. Stand with us in the fight for economic equality, social justice, human rights, and a more sustainable future. As a people-powered nonprofit news outlet, we cover the issues the corporate media never will. Join with us today!

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.