The pressure on social media companies to limit or take down content in the name of national security has never been greater. Resolving any ambiguity about how much the Obama administration values the companies' cooperation, the White House on Friday dispatched the highest echelon of its national security team -- including the attorney general, the FBI director, the director of national intelligence, and the NSA director -- to Silicon Valley for a meeting with technology executives chaired by the White House chief of staff himself. The agenda for the meeting tried to convey a locked-arms sense of camaraderie, asking, "How can we make it harder for terrorists to leveraging [sic] the internet to recruit, radicalize, and mobilize followers to violence?"
Congress, too, has been turning up the heat. On December 16, the House passed the Combat Terrorist Use of Social Media Act, which would require the president to submit a report on "United States strategy to combat terrorists' and terrorist organizations' use of social media." The Senate is considering a far more aggressive measure, which would require providers of Internet communications services to report to government authorities when they have "actual knowledge" of "apparent" terrorist activity (a requirement that, because of its vagueness and breadth, would likely harm user privacy and lead to over-reporting).
The government is of course right that terrorists use social media, including to recruit others to their cause. Indeed, social media companies already have systems in place for catching real threats, incitement, or actual terrorism. But the notion that social media companies can or should scrub their platforms of all potentially terrorism-related content is both unrealistic and misguided. In fact, mandating affirmative monitoring beyond existing practices would sweep in protected speech and turn the social media companies into a wing of the national security state.
The reasons not to take that route are both practical and principled. On a technical level, it would be extremely difficult, if not entirely infeasible, to screen for actual terrorism-related content in the 500 million tweets that are generated each day, or the more than 400 hours of video uploaded to YouTube each minute, or the 300 million daily photo uploads on Facebook. Nor is it clear what terms or keywords any automated screening tools would use -- or how using such terms could possibly exclude beliefs and expressive activity that are perfectly legal and non-violent, but that would be deeply chilled if monitored for potential links to terrorism.
Nor are employees of social media companies well-positioned to analyze and interpret content for potential links to terrorism. It's an open question whether anyone inside or outside the government can accurately and consistently distinguish users inciting terrorism from those who are observing it or reporting on it, but placing the onus for doing so on the technology companies themselves is folly.
That leaves existing content-flagging mechanisms, which rely on users to identify content that violates the companies' rules or terms of service. Those rules vary: Twitter, for instance, prohibits users from threatening or promoting terrorism, and it recently announced an expanded prohibition on "hateful conduct." The Google/YouTube content rules are generally similar to Twitter's. Facebook's Community Standards, on the other hand, outright prohibit expressions of support for "dangerous organizations" and ban "[s]upporting or praising leaders of those same organizations, or condoning their violent activities."
Even the narrowest of these content restrictions are inherently subjective and context-dependent, raising the likelihood of arbitrary, uneven enforcement. They also render minorities or those expressing unpopular views more vulnerable to reporting -- or to manipulation of the reporting mechanism. That was the case, for example, when Facebook suspended the accounts of pro-Western Ukrainians accused of hate speech in a coordinated takedown campaign by multiple Russian-speaking users. In that way, users banding together can try to leverage the flagging tools for what amounts to viewpoint discrimination.
Content-flagging tools are proving irresistible to governments, which can use them to pressure social media companies to take down material that the governments themselves could not censor or that the governments simply find offensive. It's not clear how often government agencies request that social media companies remove content that may violate their terms of service (as opposed to violating local law), because the companies don't include those requests in their transparency reports.
However, the United Kingdom and European Union have established teams dedicated to flagging social media content for removal. In the U.K., the Counter Terrorism Internet Referral Unit (CTIRU) was using content flagging to engineer the takedown of 1,000 pieces of content per week as of June 2015. Since its inception in 2010, the unit has prompted the removal of over 120,000 pieces of online content. YouTube has granted its invitation-only "Trusted Flagger" status to the CTIRU, enabling the unit to flag large streams of content for immediate action. The European Union has launched a comparable Internet Referral Unit, which began operations in December 2015. Content that companies take down through this process is inaccessible everywhere, meaning that a single government can try to use the process to impose its more restrictive speech standards on the rest of the world.
Lurking beneath these kinds of content restrictions is the perennial question of what constitutes terrorism or the promotion of terrorism -- a question which has no clear or consistent answer in U.S. or international law, and which inevitably is subject to politics or chauvinistic impulses, and even manipulation.
In short, the considerations that counsel against pressuring or requiring social media companies to limit content on their platforms are the same considerations that animate the First Amendment. The Constitution does not prevent social media companies from choosing to limit content on their own platforms, but free speech is a value, not just a constitutional right. As speech has increasingly migrated online, today's soapboxes are primarily digital. The free flow of information fosters issue literacy and leads to the kind of critical thought necessary for the rejection of racist or violence-inducing narratives. And limiting or censoring speech only puts it out of sight, not out of the minds of those who speak it.
Ultimately, censorship makes censored speech all the more dangerous, because we lose our most powerful tool in combatting ideas with which we disagree: the ability to identify them and respond with better ideas.