Feb 20, 2020
A new analysis of 6.5 million tweets from the days before and after U.S. President Donald Trump announced his intention to ditch the Paris agreement in June 2017 suggests that automated Twitter bots are substantially contributing to the spread of online misinformation about the climate crisis.
Brown University researchers "found that bots tended to applaud the president for his actions and spread misinformation about the science," according to the Guardian, which first reported on the draft study Friday. "Bots are a type of software that can be directed to autonomously tweet, retweet, like, or direct message on Twitter, under the guise of a human-fronted account."
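The Guardian piece doesn't spell out how the researchers separated automated accounts from human-fronted ones. Purely as an illustration of the general idea, and not the study's actual method, the sketch below scores an account on a few behavioral signals that bot-detection tools commonly weigh, such as posting rate, account age, and profile completeness; the field names and thresholds are assumptions made for this example.

```python
# Illustrative heuristic bot score -- NOT the method used in the Brown study.
# Field names and thresholds are assumptions made for this example only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Account:
    created_at: datetime     # when the account was created
    tweet_count: int         # lifetime number of tweets
    has_profile_photo: bool  # a default avatar is a weak bot signal
    followers: int
    following: int

def bot_score(acct: Account, now: Optional[datetime] = None) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - acct.created_at).days, 1)
    tweets_per_day = acct.tweet_count / age_days

    score = 0.0
    if tweets_per_day > 50:            # sustained high-volume posting
        score += 0.4
    if not acct.has_profile_photo:     # default profile image
        score += 0.2
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.2                   # follows many accounts, followed by few
    if age_days < 30:                  # very new account
        score += 0.2
    return min(score, 1.0)

# A two-week-old account posting roughly 200 times a day scores as highly bot-like.
suspect = Account(datetime(2020, 2, 7, tzinfo=timezone.utc), 2800, False, 12, 900)
print(bot_score(suspect, now=datetime(2020, 2, 21, tzinfo=timezone.utc)))  # 1.0
```

Research-grade detectors typically combine many more such features with machine learning rather than hand-tuned thresholds, which is why published estimates are more robust than a toy heuristic like this one.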
As the Guardian summarized:
On an average day during the period studied, 25% of all tweets about the climate crisis came from bots. This proportion was higher in certain topics--bots were responsible for 38% of tweets about "fake science" and 28% of all tweets about the petroleum giant Exxon.
Conversely, tweets that could be categorized as online activism to support action on the climate crisis featured very few bots, at about 5% prevalence. The findings "suggest that bots are not just prevalent, but disproportionately so in topics that were supportive of Trump's announcement or skeptical of climate science and action," the analysis states.
More broadly, the study adds, "these findings suggest a substantial impact of mechanized bots in amplifying denialist messages about climate change, including support for Trump's withdrawal from the Paris agreement."
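The per-topic figures quoted above are essentially shares of bot-classified tweets within each topic. As a rough sketch of that arithmetic only, assuming a table of tweets that has already been tagged by topic and classified as bot or human (the column names and toy numbers below are invented, not the study's data):

```python
# Toy illustration of per-topic bot prevalence; the data is invented,
# not drawn from the Brown study's 6.5 million tweets.
import pandas as pd

tweets = pd.DataFrame({
    "topic":  ["fake science", "fake science", "exxon", "exxon", "activism", "activism"],
    "is_bot": [True, False, True, False, False, False],
})

# Share of tweets in each topic posted by bot-classified accounts.
bot_share = tweets.groupby("topic")["is_bot"].mean().sort_values(ascending=False)
print((bot_share * 100).round(1))
```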
"Pro-Trump, anti-environment bots are dominating a significant part of the climate discussion on Twitter. (A better approach to anonymity would help change this.) https://t.co/Wugn7rZpZ9" — Andrew Stroehlein, Feb. 21, 2020
Thomas Marlow, the Brown Ph.D. candidate who led the study, told the Guardian that his team decided to conduct the research because they were "always kind of wondering why there's persistent levels of denial about something that the science is more or less settled on." Marlow expressed surprise that a full quarter of climate-related tweets were from bots. "I was like, 'Wow that seems really high,'" he said.
In response to the Guardian report, some climate action advocacy groups reassured followers that their tweets are written by humans:
"This tweet has been written by a human, but a QUARTER of all tweets about the #ClimateCrisis are produced by bots, according to a new study. The result? A distortion of the online conversation to 'include far more climate science denialism...' https://t.co/ZvRBVceuZa" — Friends of the Earth, Feb. 21, 2020
"25% of tweets about #climatechange are made by bots - study by @BrownUniversity. This can distort the public debate on this issue, especially as bots often spread #climatedenialism. @EWGnetwork can proudly promise all our tweets are human-made. https://t.co/Ff25zbu18l" — Energy Watch Group, Feb. 21, 2020
Other climate organizations that shared the Guardian's report on Twitter weren't surprised by the results of the new research:
"imagine our shock #ExxonKnew" — Greenpeace EU, Feb. 21, 2020
"#ClimateCrisis denial is largely BOTS! In news that won't surprise anyone with Twitter, fake accounts are driving #climatedenial. Another tool used by the mega rich to stop us coming together to drive change for a fair, safe future. #ClimateChange https://t.co/ECBtYN5AMy" — Extinction Rebellion Guildford, Feb. 21, 2020
"The Brown University study wasn't able to identify any individuals or groups behind the battalion of Twitter bots, nor ascertain the level of influence they have had around the often fraught climate debate," the Guardian noted. "However, a number of suspected bots that have consistently disparaged climate science and activists have large numbers of followers on Twitter."
Cognitive scientist John Cook, who has studied online climate misinformation, told the Guardian that bots are "dangerous and potentially influential" because previous research has shown "not just that misinformation is convincing to people but that just the mere existence of misinformation in social networks can cause people to trust accurate information less or disengage from the facts."
As Cook, a research assistant professor at the Center for Climate Change Communication at George Mason University, put it: "This is one of the most insidious and dangerous elements of misinformation spread by bots."
Naomi Oreskes, a Harvard University professor and science historian who has also studied climate misinformation, co-authored an October 2019 report (pdf) with Cook on the fossil fuel industry's decades-long effort to mislead the American public. In a tweet Friday, Oreskes called the new research "important work" but added, "I wish they'd published it before going to the media."
"This is a draft, unpublished study so take with 🧂 but it is an amazing finding on the role of automation to amplify messages on climate on social media. Bots are used mostly in tweets critical of action or supportive of Trump -->> https://t.co/4PKCocn4dN" — Ketan Joshi, Feb. 21, 2020
The Guardian report on Marlow and his colleagues' analysis came just a few months after the Trump administration formally began the one-year process of withdrawing from the Paris accord, which critics said sent "a signal to the world that there will be no leadership from the U.S. federal government on the climate crisis--a catastrophic message in a moment of great urgency."
The findings also came about a month after the Bulletin of the Atomic Scientists issued a historic warning about the risk of global catastrophe by setting the Doomsday Clock at 100 seconds to midnight. The bulletin warned in its statement announcing the clock's new time that "humanity continues to face two simultaneous existential dangers--nuclear war and climate change--that are compounded by a threat multiplier, cyber-enabled information warfare, that undercuts society's ability to respond."
"Focused attention is needed to prevent information technology from undermining public trust in political institutions, in the media, and in the existence of objective reality itself," the bulletin added. "Cyber-enabled information warfare is a threat to the common good. Deception campaigns--and leaders intent on blurring the line between fact and politically motivated fantasy--are a profound threat to effective democracies, reducing their ability to address nuclear weapons, climate change, and other existential dangers."
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.