Aug 13, 2018
We're finally having a public debate about the big Internet platforms policing content and suspending accounts. But it's a serious mistake to frame the debate about content moderation around right-wing conspiracy theorist Alex Jones's Infowars and not around the thousands of other moderation decisions that have been made by such online giants as Apple, Facebook, Google-owned YouTube and Spotify.
Internet companies have removed millions of posts and images over the past decade and suspended hundreds, perhaps thousands, of user accounts. These silenced voices span the political spectrum and the globe: Moroccan atheists, women discussing online harassment, ads featuring crucifixes, black and Muslim activists reposting racist messages they received, trans models, drag performers, indigenous women, childbirth images, photos of breast-feeding. Platforms have taken down documentation of war crimes in Syria, Myanmar and Kashmir, arrests in North Dakota and police brutality across the United States.
We should be extremely careful before rushing to embrace an Internet that is moderated by a few private companies by default, one where the platforms that control so much public discourse routinely remove posts and deactivate accounts because of objections to the content. Once systems like content moderation become the norm, those in power inevitably exploit them. Time and time again, platforms have capitulated to censorship demands from authoritarian regimes, and powerful actors have manipulated flagging procedures to effectively censor their political opponents. Given this practical reality, and the sad history of political censorship in the United States, let's not cheer one decision that we might agree with.
Even beyond content moderation's vulnerability to censorship, the moderating process itself, whether undertaken by humans or, increasingly, by software using machine-learning algorithms, is extremely difficult. Awful mistakes are commonplace, and rules are applied unevenly. Company executives regularly reshape their rules in response to governmental and other pressure, and they do so without significant input from the public. Ambiguous "community standards" result in the removal of some content deemed to have violated the rules, while content that seems equally offensive is left up.
Moderation systems operate in secret with no public accountability. We're pretty sure we know why Infowars was suspended, but in many cases the decisions are far more opaque. And unlike Jones, few of the other silenced speakers have a powerful multimedia operation with an audience in the millions that can simply migrate to an app when other venues become unavailable, as is happening with Infowars. Facebook, Twitter, Apple and other companies routinely silence voices in marginalized communities around the world that struggle to be heard in the first place, replicating their offline repression.
The palpable public momentum for private companies to take a more active hand in moderating the content that appears on their sites is worrisome. And it's worrisome even though I share the concerns -- about disinformation, extremism, harassment -- motivating it.
We must also not lose sight of the fact that while the dominant social media platforms are largely based in the United States, the majority of their users are based outside the country, and the policies the platforms adopt here have outsize influence elsewhere. If content moderation is here to stay, existing human-rights standards provide a framework for policies that companies could and should voluntarily implement to protect their users.
There have been several worthy efforts to articulate a human-rights framing for content moderation. One framework, which the organization where I work, the Electronic Frontier Foundation, played a part in formulating, is found in the Santa Clara Principles. These principles advance three key goals: numbers (companies should publish the number of posts removed and accounts suspended); notice (companies should provide notice and an explanation to each user whose content is removed); and appeal (companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension).
David Kaye, the special rapporteur for the United Nations on the promotion and protection of the right to free expression, recommended in a recent report that private companies -- and governments -- should, as a routine matter, consider the impact that content moderation policies have on human rights. He recommends that governments not pressure private companies to implement policies that interfere with people's right to free expression online.
The power that these platforms have over the online public sphere should worry all of us, no matter whether we agree or disagree with a given content decision. A decision by any one of them has a huge effect. Even worse, if other companies move in lock step, a speaker may effectively be forced offline.
Transparency in these companies' content-moderation decisions is essential. We must demand that they apply their rules consistently and provide clear, accessible avenues for meaningful appeal.