Media Warn of 'Russian Bots'--Despite Primary Source's Disavowal

"There's a problem in our society--systemic racism in American media--and rather than an examination of whether it's affecting coverage here, what the listener gets is yet another boilerplate story about "Russian bots." (Photo: Twitter)


"Total bullshit."

One could forgive the average reader for thinking reporters covering bots had been replaced by bots. The formula is something we've seen a million times now: After a controversial story breaks, media outlets insist that "Russian bots" used the controversy to "sow discord" or "exploit tensions"; a "Russian bot dashboard" is offered as proof. (These "dashboards" let one see what Russian bots--automated online personas controlled by the Kremlin--are allegedly "pushing" on social media.)

The substance of the concern or discord is underreported or ignored altogether. Online conflict is neatly dismissed as a Kremlin psyop, the narrative of Russian interference in every aspect of our lives is reinforced, and one is reminded to be "aware" of Russian trolls online.


Note the latest iteration of this story:

  • Russian Bots Are Rallying Behind Embattled Fox News Host Laura Ingraham as Advertisers Dump Her Show (Business Insider, 4/1/18)
  • Russian Bots Defend Fox News Pundit Laura Ingraham as Advertisers Leave Following David Hogg Tweet (Newsweek, 4/2/18)
  • Russian Bots Are Tweeting Their Support of Embattled Fox News Host Laura Ingraham (Washington Post, 4/2/18)
  • Russian Bots Flock to Laura Ingraham Feud With Parkland Student: Report (The Hill, 4/2/18)
  • Russian Bots Rush to Laura Ingraham's Defense in David Hogg Feud (Washington Times, 4/2/18)

Not to be confused with the Russian bots that were heard from after last month's Austin bombings:

  • Russian Social Accounts Adding to Complaints That Austin Bombings Aren't Being Covered (NPR, 3/19/18)
  • Fallout of Austin Bombings Exposes Racial Tensions, Russian Bots and Media Distrust (France 24, 4/1/18)
  • Russian Bots Were Sowing Discord During Hunt for Austin Bomber, Group Says (Houston Chronicle, 3/20/18)


Or the bots from Russia that were seen in the wake of the Parkland massacre:

  • After Florida School Shooting, Russian 'Bot' Army Pounced (New York Times, 2/19/18)
  • After the Parkland Shooting, Pro-Russian Bots Are Pushing False-Flag Allegations Again (Washington Post, 2/16/18)
  • How Russian Trolls Exploited Parkland Mass Shooting on Social Media (Politifact, 2/22/18)

One problem, though: The people behind the "Russian bot dashboard" that reporters generally cite as their primary source, Hamilton 68, effectively told them to stop writing these pieces six weeks ago. According to a report from BuzzFeed (2/28/18)--hardly a fan of the Kremlin--Russian bot stories are "bullshit":

By now you know the drill: massive news event happens, journalists scramble to figure out what's going on, and within a couple hours the culprit is found -- Russian bots.

Russian bots were blamed for driving attention to the Nunes memo, a Republican-authored document on the Trump-Russia probe. They were blamed for pushing for Roy Moore to win in Alabama's special election. And here they are wading into the gun debate following the Parkland shooting. "[T]he messages from these automated accounts, or bots, were designed to widen the divide and make compromise even more difficult," wrote the New York Times in a story following the shooting, citing little more than "Twitter accounts suspected of having links to Russia." This is, not to mince words, total bullshit.

The thing is, nearly every time you see a story blaming Russian bots for something, you can be pretty sure that the story can be traced back to a single source: the Hamilton 68 dashboard, founded by a group of respected researchers, including Clint Watts and JM Berger, and currently run under the auspices of the German Marshall Fund. But even some of the people who popularized that metric now acknowledge it's become totally overblown. "I'm not convinced on this bot thing," said Watts, the cofounder of a project that is widely cited as the main, if not only, source of information on Russian bots.

BuzzFeed: Stop Blaming Russian Bots For Everything

Watts, the media's most cited expert on so-called "Russian bots" and co-founder of Hamilton 68, says the narrative is "overdone." The three primary problems, as BuzzFeed reported, are:

  1. The bots on the Hamilton 68 dashboard are not necessarily connected to Russia: "They are not all in Russia," Watts told BuzzFeed. "We don't even think they're all commanded in Russia--at all. We think some of them are legitimately passionate people that are just really into promoting Russia." (Hamilton 68 doesn't specify which accounts are viewed as Russian bots; that's a trade secret.)
  2. Twitter is clogged with bots, so telling which are Russian and which aren't is impossible. Bots naturally follow trending or popular stories, like all the stories cited above; how does one distinguish "Russian bot" activity from normal online trend-chasing?
  3. Tons of bots are run out of the United States, in totally routine partisan marketing efforts; the singular obsession with Russia lets these shady players off the hook. And, again, it's almost impossible to distinguish between merely partisan GOP bots and "Russian" ones.

Put another way: These stories are of virtually no news value, other than smearing whichever side the "Russian bots" happened to support, and reinforcing in the public mind that one cannot trust unsanctioned social media accounts. Also that the Russians are hiding in every shadow, waiting to pounce.

Another benefit of the "Russian bots agitate the American public" stories is that they prevent us from asking hard questions about our society. After a flurry of African-American Twitter users alleged a racist double standard in the coverage of the Austin bombings in March (which killed two people, both of them black), how did NPR address these concerns? Did it investigate their underlying merit? Did it do media analysis to see if there was, in fact, a dearth of coverage due to the victims' race? No, it ran a story on how Russian bots were fueling these concerns: "Russian Social Accounts Adding to Complaints That Austin Bombings Aren't Being Covered" (All Things Considered, 3/19/18):

NPR's Philip Ewing: Well, there's two things taking place right now. Some of this is black users on Twitter saying that because some of the victims in this story were not white, this isn't getting as much attention as another story about bombings, or a series of bombings in the United States, would or should, in this view.


This seems like a pretty serious charge, and one with a lot of historical precedent! Does NPR interrogate this thread further? Does it interview any of these "black users"? No, it moves on to the dastardly Russians:

Ewing: But there's also additional activity taking place on Twitter which appears initially to be connected with the Russian social media agitation that we've sort of gotten used to since the 2016 presidential race. There are dashboards and online tools that let us know which accounts are focusing on which hashtags from the Russian influence-mongers who've been targeting the United States since 2016, and they, too, have been tweeting about Austin bombings today.

NPR host Ailsa Chang: The theory being that these Russian bots are being used to drive a wedge between groups of people here in the United States about this issue, about the coverage being potentially racist.

Ewing: That's right.

Nothing to see here! There's a problem in our society--systemic racism in American media--and rather than an examination of whether it's affecting coverage here, what the listener gets is yet another boilerplate story about "Russian bots," the degree, scope and impact of which are wholly unknown, and likely inconsequential. Hesitant to cite Hamilton 68 by name (perhaps because its co-founder mocked this very kind of story a few weeks prior), NPR reporter Ewing simply cites "dashboards and online tools" as his source.

Which ones? It doesn't really matter, because "Russian bots support X" reports are a conditioning exercise more than a story. The fact that this paint-by-numbers formula is still being applied weeks after the primary source's co-founder declared himself "not convinced on this bot thing" and called the story "overdone" demonstrates this. The goal is not to convey information or give the reader tools to better understand the world; it's to give the impression that all unrest is artificially contrived by a foreign entity, and that the status quo would otherwise be rainbows and sunshine. And to remind us that the Enemy lurks everywhere, and that no one online without a blue checkmark can be trusted.
