The Sunlight Foundation this week put up a post claiming that a “shadowy organization with ties to the Koch brothers” dominated the final round of Net Neutrality comments at the FCC.
But scratch the surface of this “study” and you’ll find a deeply flawed analysis of admittedly messy FCC data — one that ignores the voices of hundreds of thousands of Net Neutrality supporters and inflates the impact of the Astroturf opposition.
Sunlight is a valuable organization that has partnered with Free Press on several projects. It's earned its reputation as a trusted and independent open-government watchdog. But this time, it got it wrong.
Sunlight reported that a group called American Commitment was “single-handedly responsible for 56.5% of the comments” sent to the FCC between July 19 and Sept. 18. That would be a huge reversal given that Sunlight’s analysis of the first round showed public comments running 99-to-1 against the FCC’s original plan and for real Net Neutrality.
That might be newsworthy — if it were true.
But it’s not. We know for a fact that the Battle for the Net campaign behind September’s massive “Internet slowdown” — which was organized by Fight for the Future, Demand Progress, Engine Advocacy and Free Press — sent more comments to the FCC than American Commitment.
On top of that, groups like ColorOfChange.org, DailyKos and Democracy for America sent in even more. And many more people wrote their own original comments instead of signing petitions or form letters. (Sunlight does acknowledge those comments are overwhelmingly in favor of Net Neutrality.)
Here’s the problem: While Sunlight managed to count up all of American Commitment’s comments, it missed or excluded a huge chunk of comments from Net Neutrality advocates. This appears to be due partly to FCC error — Fight for the Future has found at least 244,000 comments that weren’t processed correctly — or difficulties stemming from how the FCC released the data. Sunlight admits it could account for only about 2.5 million of the 4 million comments the agency actually received.
That’s bad enough. But Sunlight further tilted its study by refusing to count petition signatures that were submitted to the agency and were part of the FCC’s official tally. In other words, Net Neutrality advocates decided to deliver petition signatures in bulk to avoid spamming the FCC with a ton of duplicative letters — and the Sunlight researchers just pretended these signatures didn’t exist.
In a follow-up post published Wednesday, Sunlight researchers insist they weren’t trying to “suggest that signature-only submissions shouldn’t be counted.” Yet by literally not counting these submissions, Sunlight skews the results even further.
Moreover, Sunlight’s sloppy take only encourages the sketchy tactics of groups like American Commitment. Prior to the “Sunlight bump,” American Commitment hadn’t been heard from much. It disappeared from the debate after this takedown exposed its spammy, red-baiting tactics.
The organization, which receives funding not just from the Kochs but from the cable lobby, generated signatures by buying mailing lists from right-wing outlets like the Washington Times and RedState. It then sent misleading paid advertisements to “hundreds of thousands of people with subject lines that had nothing to do with net neutrality.” Now the group has resurfaced, flogging the story to every tech reporter in town and claiming to have “won” this round of the debate.
American Commitment didn’t win anything — as Fight for the Future explains clearly here and here. But the bottom line is that Sunlight never should have published such erroneous conclusions in the first place.
In its original post, Sunlight concludes: “Combined with the first round comments, we characterize 41% of the total comments submitted as being anti-net neutrality (with the balance being a mix of pro-NN and comments with no clear opinion), and we estimate that 79% of submissions came as part of form letter campaigns.”
Sunlight says “total comments submitted,” but apparently that means “comments contained in the FCC's XML file that we could figure out how to read and didn’t arbitrarily exclude.” Those two are not the same thing.
If researchers have a data set that is a large but incomplete portion of the total population, and they know the missing portion has a completely different distribution from the data they hold, it is methodologically unsound to analyze only the known data and then draw conclusions about the entire population. This is called “sample-selection bias.”
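A toy calculation makes the point concrete. The numbers below are invented purely for illustration — they are not the actual FCC figures — but they mirror the shape of the problem: a population where one side dominates, and an “observable” file that systematically drops a large chunk of that side’s submissions.

```python
# Illustrative sketch with made-up numbers (NOT the real FCC data).
# Shows how excluding a skewed chunk of a population inflates the
# apparent share of the minority position in what remains.

total = 4_000_000                 # hypothetical total comments received
anti = 400_000                    # hypothetical true anti-NN comments (10%)
pro = total - anti                # hypothetical true pro-NN comments (90%)

# Suppose 1.5 million comments are missing from the readable file,
# nearly all of them pro (e.g., bulk petition signatures the
# analysis skipped). Here we assume all 1.5M missing are pro.
missing_pro = 1_500_000

observed_pro = pro - missing_pro
observed_anti = anti              # none of the anti comments go missing
observed_total = observed_pro + observed_anti

true_anti_share = anti / total
observed_anti_share = observed_anti / observed_total

print(f"true anti share:     {true_anti_share:.1%}")      # 10.0%
print(f"observed anti share: {observed_anti_share:.1%}")  # 16.0%
```

Nothing about the underlying opinions changed; only the sample did. Analyzing the 2.5 million “observable” comments as if they represented all 4 million produces exactly this kind of inflation.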
You don’t need an advanced degree in statistics to understand the fundamental flaw in Sunlight’s approach here. It’s the equivalent of monitoring the sky from 9 a.m. to 4 p.m. and concluding that the sun never sets.