Civil Liberties Groups Warn Proposed EU 'Terrorist Content' Rule a Threat to Democratic Values

Requiring filtering tools would be "a gamble with European Internet users' rights to privacy and data protection, freedom of expression and information, and non-discrimination and equality before the law."


Human rights advocates are raising alarm about a proposed European Union regulation that critics worry would undermine democratic values on a global scale. (Graphic: EDRi/Twitter)

Dozens of human rights groups and academics have signed on to an open letter (pdf) raising alarm about the European Union's proposed Regulation on Preventing the Dissemination of Terrorist Content Online, warning that its call for Internet hosts to employ "proactive measures" to censor such content "will almost certainly lead platforms to adopt poorly understood tools" at the expense of democratic values across the globe.

One of those tools is the Hash Database developed by Facebook, YouTube, Microsoft, and Twitter. The 13 companies that use the database—which reportedly contains hashes of more than 80,000 images and videos—can automatically filter out material deemed "extreme" terrorist content. However, as the letter explains, "almost nothing is publicly known about the specific content that platforms block using the database, or about companies' internal processes or error rates, and there is insufficient clarity around the participating companies' definitions of 'terrorist content.'"
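The database's mechanism is hash matching: platforms compute a compact "fingerprint" of each upload and block it if the fingerprint appears in the shared list. The GIFCT database's actual hashing method and contents are not public; the sketch below is a minimal illustration using an ordinary cryptographic hash, with a hypothetical `KNOWN_HASHES` set standing in for the shared database.

```python
import hashlib

# Hypothetical stand-in for the shared hash database. The real database's
# hashing scheme and entries are not publicly documented; a SHA-256 digest
# is used here purely for illustration.
KNOWN_HASHES = {
    # SHA-256 digest of the bytes b"test"
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex-digest 'fingerprint' of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes, database: set) -> bool:
    """Block an upload iff its fingerprint matches the shared database."""
    return fingerprint(data) in database

# An exact duplicate is caught...
print(should_block(b"test", KNOWN_HASHES))   # True
# ...but changing a single byte evades an exact-match filter entirely.
print(should_block(b"test!", KNOWN_HASHES))  # False
```

The one-byte evasion above is why platforms favor fuzzier "perceptual" hashes that match near-duplicates—and fuzzier matching is precisely what makes error rates, and the unpublished definitions behind the database's entries, so consequential.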

"Countering terrorist violence is a shared priority, and our point is not to question the good intentions of the database operators. But lawmakers and the public have no meaningful information about how well the database or any other existing filtering tool serves this goal, and at what cost to democratic values and individual human rights," notes the letter, whose signatories include the American Civil Liberties Union (ACLU), the Brennan Center for Justice, the Electronic Frontier Foundation (EFF), and European Digital Rights (EDRi).

As an EDRi statement outlines, among the groups' main concerns are the following:

  • Lack of transparency about how the database works, and about its effectiveness, proportionality, and appropriateness for achieving the goals of the Terrorist Content Regulation;
  • The inability of filters to understand context, which makes them error-prone; and
  • The pervasive online monitoring of disadvantaged and marginalized individuals, regardless of whether filters may become accurate in the future.

Given the uncertainties over the effectiveness and societal costs of such tools, the letter charges that "requiring all platforms to use black-box tools like the database would be a gamble with European Internet users' rights to privacy and data protection, freedom of expression and information, and non-discrimination and equality before the law."

With those fundamental rights under threat, the groups are calling on members of the European Parliament "to reject proactive filtering obligations; provide sound, peer-reviewed research data supporting policy recommendations and legal mandates around counter-terrorism; and refrain from enacting laws that will drive Internet platforms to adopt untested and poorly understood technologies to restrict online expression."

Read the full letter:

Dear Members of the European Parliament,

The undersigned organizations write to share our concerns about the EU’s proposed Regulation on Preventing the Dissemination of Terrorist Content Online, and in particular the Regulation’s call for Internet hosts to use “proactive measures” to detect terrorist content. We are concerned that if this Regulation is adopted, it will almost certainly lead platforms to adopt poorly understood tools, such as the Hash Database referenced in the Explanatory Memorandum to the Regulation and currently overseen by the Global Internet Forum to Counter Terrorism. Countering terrorist violence is a shared priority, and our point is not to question the good intentions of the Database operators. But lawmakers and the public have no meaningful information about how well the Database or any other existing filtering tool serves this goal, and at what cost to democratic values and individual human rights. We urge you to reject proactive filtering obligations; provide sound, peer-reviewed research data supporting policy recommendations and legal mandates around counter-terrorism; and refrain from enacting laws that will drive Internet platforms to adopt untested and poorly understood technologies to restrict online expression.

The Database was initially developed by Facebook, YouTube, Microsoft, and Twitter as a voluntary measure, and announced to the public in 2016. It contains digital hash “fingerprints” of images and videos that platforms have identified as “extreme” terrorist material, based not on the law but on their own Community Guidelines or Terms of Service. The platforms can use automated filtering tools to identify and remove duplicates of the hashed images or videos. As of 2018, the Database was said to contain hashes representing over 80,000 images or videos. At least thirteen companies now use the Database, and some seventy companies have reportedly discussed adopting it.

Almost nothing is publicly known about the specific content that platforms block using the Database, or about companies’ internal processes or error rates, and there is insufficient clarity around the participating companies’ definitions of “terrorist content.” Furthermore, there are no reports about how many legal processes or investigations were opened after the content was blocked. This data would be crucial to understand to what extent the measures are effective and necessary in a democratic society, which are some of the sine qua non requisites for restrictions of fundamental rights. We do know, however, of conspicuous problems that seemingly result from content filtering gone awry. The Syrian Archive, a civil society organization preserving evidence of human rights abuses in Syria, for example, reports that YouTube deleted over 100,000 of its videos. Videos and other content which may be used in one context to advocate terrorist violence may be essential elsewhere for news reporting, combating terrorist recruitment online, or scholarship. Technical filters are blind to these contextual differences. As three United Nations special rapporteurs noted in a December 2018 letter, this problem raises serious concerns about free expression rights under the proposed Regulation. It is far from clear whether major platforms like YouTube or Facebook adequately correct for this through employees’ review of filtering decisions—and it seems highly unlikely that smaller platforms could even attempt to do so, if required to use the Database or other filtering tools.

Failures of this sort seriously threaten Internet users’ rights to seek and impart information. The pervasive monitoring that platforms carry out in order to filter users’ communications also threatens privacy and data protection rights. Moreover, these harms do not appear to be equally distributed, but instead disproportionately disadvantage individual Internet users based on their ethnic background, religion, language, or location—in other words, harms fall on users who might already be marginalized. More extensive use of the Database and other automated filtering tools will amplify the risk of harms to users whose messages and communications about matters of urgent public concern may be wrongly removed by platforms. The United Nations Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism has expressed concern about this lack of clarity, and said that Facebook’s rules for classifying organizations as terrorist are “at odds with international humanitarian law”.

Due to the opacity of the Database’s operations, it is impossible to assess the consequences of its nearly two years of operation. The European public is being asked to rely on claims by platforms or vendors about the efficacy of the Database and similar tools—or else to assume that any current problems will be solved by hypothetical future technologies or untested, post-removal appeal mechanisms. Such optimistic assumptions cannot be justified given the serious problems researchers have found with the few filtering tools available for independent review. Requiring all platforms to use black-box tools like the Database would be a gamble with European Internet users’ rights to privacy and data protection, freedom of expression and information, and non-discrimination and equality before the law. That gamble is neither necessary nor proportionate as an exercise of state power.

EU institutions’ embrace of the Database and other filtering tools will also have serious consequences for Internet users all over the world, including in countries where various of the undersigned organizations work to protect human rights. For one thing, when platforms filter a video or image in response to a European authority’s request, it will likely disappear for users everywhere—even if it is part of critical news reporting or political discourse in other parts of the world. For another, encoding proactive measures to filter and remove content in an EU regulation gives authoritarian and authoritarian-leaning regimes the cover they need to justify their own vaguely worded and arbitrarily applied anti-terrorism legislation. Platforms that have already developed content filtering capabilities in order to comply with EU laws will find it difficult to resist demands to use them in other regions and under other laws, to the detriment of vulnerable Internet users around the globe. Your decisions in this area will have global consequences.

Signatories:
Access Now; Africa Freedom of Information Centre; Agustina Del Campo, in an individual capacity (Center for Studies on Freedom of Expression CELE); American Civil Liberties Union (ACLU); ApTI Romania; Article 19; Bits of Freedom; Brennan Center for Justice; Catalina Botero Marino, in an individual capacity (Former Special Rapporteur of Freedom of Expression of the Organization of American States); Center for Democracy & Technology (CDT); Centre for Internet and Society; Chinmayi Arun, in an individual capacity; Damian Loreti, in an individual capacity; Daphne Keller, in an individual capacity (Stanford CIS); Derechos Digitales · América Latina; Digital Rights Watch; Electronic Frontier Finland; Electronic Frontier Foundation (EFF); Electronic Frontier Norway; Elena Sherstoboeva, in an individual capacity (Higher School of Economics); European Digital Rights (EDRi); Hermes Center; Hiperderecho; Homo Digitalis; IT-Pol; Joan Barata, in an individual capacity (Stanford CIS); Krisztina Rozgonyi, in an individual capacity (University of Vienna); Open Rights Group; Open Technology Institute at New America; Ossigeno; Pacific Islands News Association (PINA); People Over Politics; Prostasia Foundation; R3D: Red en Defensa de los Derechos Digitales; Sarah T. Roberts, Ph.D., in an individual capacity; Southeast Asian Press Alliance; Social Media Exchange (SMEX), Lebanon; WITNESS; and Xnet.
