Report Warns Generative AI Could Turbocharge Climate Disinformation
"The climate emergency cannot be confronted while online public and political discourse is polluted by fear, hate, confusion, and conspiracy," one campaigner warned.
Members of a global coalition on Thursday released a report detailing "significant and immediate dangers" that artificial intelligence poses to the climate emergency.
"AI companies spread hype that they might save the planet, but currently they are doing just the opposite," said Michael Khoo at Friends of the Earth, part of the Climate Action Against Disinformation (CAAD) coalition. "AI companies risk turbocharging climate disinformation, and their energy use is causing a dangerous increase to overall U.S. consumption, with a corresponding increase of carbon emissions."
As AI has rapidly developed over the past year, global leaders and experts—from United Nations Secretary-General António Guterres to the Bulletin of the Atomic Scientists—have sounded the alarm about the technology furthering disinformation on all topics.
The World Economic Forum earlier this year "identified AI-generated mis- and disinformation as the world's greatest threat (followed by climate change)," notes the new CAAD report. Citing cases in Slovakia and the United States, it says that "the world is already seeing how AI is being used for political disinformation campaigns."
"AI models will allow climate disinformation professionals and the fossil fuel industry to build on their decades of disinformation campaigns."
"AI models will allow climate disinformation professionals and the fossil fuel industry to build on their decades of disinformation campaigns," the document warns. "More recent attempts, such as falsely blaming wind power as a cause of whale deaths in New Jersey or power outages in Texas, have already been effective."
The publication specifically points to potential abuse of generative artificial intelligence, systems that create content—including text, images, music, and videos—in response to prompts. It states that "generative AI will make such campaigns vastly easier, quicker, and cheaper to produce, while also enabling [them] to spread further and faster."
Such content can include deepfakes: fabricated audio or video in which a person appears to say something they never did. The publication highlights that "an August 2023 study focusing on climate change-related deepfakes found over a quarter of respondents across age groups were unable to identify whether videos were fake."
"Adding to this threat, social media companies have shown declining interest in stopping disinformation, reducing trust and safety team staffing," the document stresses.
Invoking an old Facebook motto, Kairos Fellowship's Nicole Sugerman said that "we must not allow another 'move fast and break things' era in tech; we've already seen how the rapid, unregulated growth of social media platforms led to previously unimaginable levels of online and offline harm and violence."
In addition to social media, the report outlines concerns about disinformation spreading via large language model (LLM) chatbots like ChatGPT, as well as through search engines and online advertising. Sarah Kay Wiley, director of policy at Check My Ads, noted that "we are already seeing how generative AI is being weaponized to spin up climate disinformation or copy legitimate news sites to siphon off advertising revenue."
"Adtech companies are woefully unprepared to deal with generative AI and the opaque nature of the digital advertising industry means advertisers are not in control of where their ad dollars are going," she continued. "Regulation is needed to help build transparency and accountability to ensure advertisers are able to decide whether to support AI-generated content."
Oliver Hayes at CAAD member Global Action Plan also demanded swift intervention, arguing that "the climate emergency cannot be confronted while online public and political discourse is polluted by fear, hate, confusion, and conspiracy."
"In a year when 2 billion people are heading to the polls, this represents an existential threat to climate action."
"In a year when 2 billion people are heading to the polls, this represents an existential threat to climate action," he said. "We should stop looking at AI through the 'benefit-only' analysis and recognize that, in order to secure robust democracies and equitable climate policy, we must rein in Big Tech and regulate AI."
The report features recommendations for companies, lawmakers, and regulators to boost accountability, safety, and transparency related to AI. The suggestions echo coalition letters to U.S. President Joe Biden and Senate Majority Leader Chuck Schumer (D-N.Y.), and apply not only to disinformation but also to AI's energy and water use.
Drawing on the limited company statements available and on independent research, the document notes that the proliferation of LLMs "is already causing energy use to skyrocket," which "comes on top of the highest rate of increase in U.S. energy consumption levels since the 1990s."
On top of that, "training large language models such as GPT-3 can require millions of liters of freshwater for both cooling and electricity generation," the report explains. "This thirsty industry therefore contributes to local water scarcity in areas that are already vulnerable, and could exacerbate risk and intensity of water stress and drought with greater computing demands."
Greenpeace USA senior strategist Charlie Cray said that "the skyrocketing use of electricity and water, combined with its ability to rapidly spread disinformation, makes AI one of the greatest emerging climate threat-multipliers."
"Governments and companies must stop pretending that increasing equipment efficiencies and directing AI tools towards weather disaster responses are enough to mitigate AI's contribution to the climate emergency," he added.