May 25, 2018
In the weeks since Mark Zuckerberg's testimony to Congress, Facebook has made a series of important policy announcements. The company released a document explaining what posts and accounts it removes on the basis of its internal rules, known as "community standards," and it engaged outside consultants to review the social media platform's impact on various communities. The company also released its first transparency report on the enforcement of its community standards.
These are all welcome developments, but they lay bare a fundamental question raised by Zuckerberg himself: What obligations does the public want companies to fulfill when deciding which speech deserves a place on the internet and social media? The Supreme Court recently called the internet and social media platforms "the most important places ... for the exchange of views," so the question is not simply an academic exercise.
Facebook's process for deleting posts and accounts based on its community standards has long been opaque, leading to charges from people across the political spectrum that Facebook is effectively shutting down their speech while allowing others to operate unfettered.
From the right, Facebook has been accused of liberal bias, including allegations that the platform manipulated the "Trending Topics" portion of its newsfeed to demote conservative sources in the lead-up to the 2016 election. More recently, conservative critics flamed Facebook for labeling videos from two pro-Trump African-American sisters as "unsafe," though that may have had more to do with the sisters' habit of promoting conspiracy theories and Holocaust deniers than with their support for the president.
In the meantime, progressive voices have demanded that Facebook address concerns about hate speech and harassment on the platform, as well as the censorship of "Black, Arab, Muslim, and other marginalized voices." Anecdotal evidence suggests that posts by people of color, as well as Muslims, are disproportionately targeted for content takedowns, while white nationalist movements are largely left alone. Facebook has removed posts that accuse white people of complicity in racism or that recount racial slurs, while leaving up scores of posts and accounts promoting white supremacy and violence against marginalized groups.
Political speech, too, has been suppressed, often under Facebook's rules against broadly defined "terrorist speech." Following a 2016 agreement with Israel to address "incitement," for instance, Facebook suspended the personal accounts of several prominent Palestinian journalists and removed even non-political content from their home publications. The platform also deleted accounts and content from academics, journalists, and local newspapers relating to the conflict in Kashmir, including posts about a separatist killed by the Indian army. Faced with complaints, Facebook characterized the posts as "terrorist content" and said they would be permitted only if they also "condemn[ed] these organisations and/or their violent activities."
These cases demonstrate how complex implementation is on the ground. Even Facebook's newly published, supposedly bright-line rules raise difficult questions. For example, the platform prohibits posts supporting or praising any individual engaged in "terrorist activity" or "organized hate," even if the support is entirely separate from those activities. Would the company remove a post arguing, on human rights grounds, that Dylann Roof, who went on a deadly shooting spree at an African-American church in Charleston, S.C., did not deserve the death penalty? What about posts praising the humanitarian efforts of the Holy Land Foundation, a Muslim charity, despite its conviction for supporting Hamas? The published policies do not give adequate answers to these questions.
Moreover, the use of artificial intelligence to screen for terrorist content has produced an outsized focus on Muslims, as described above, and experience shows that these types of detection tools are likely to encode, perpetuate, and even mask societal bias. Facebook's push toward using algorithms to flag various types of speech for removal risks baking in biases that will disadvantage minorities and underrepresented communities.
As Facebook continues to refine its policies and practices with the help of information generated by the upcoming company audit, it has an opportunity to improve.
First, Facebook should ensure that it is living up to its stated presumption in favor of allowing speech. The platform should ensure removals are carried out in narrow, targeted ways, and any process for removing posts and accounts should take at least equal account of the value of a robust exchange of ideas, including unpopular ideas. The company should clarify that, while ensuring that individuals and groups can function online free from harassment and hate is a critical goal, it does not intend to follow Zuckerberg's suggestion to Congress that the company would remove all "speech that might make people feel just broadly uncomfortable." Even speech that makes people uncomfortable can serve the public good.
Second, Facebook should take concrete steps to implement its presumption in favor of speech by providing top-notch training to content moderators, including on the value of maintaining the platform as a venue for the open exchange of ideas. The company recently added a much-needed appeals mechanism for deletions, and it should be properly resourced. Facebook intends to have 20,000 staff focusing on "safety and security" by the end of the year, and it has said publicly that 7,500 of those will work on both removals and appeals, but the ratio of resources allocated to the two functions is not known. It is critical that the appeals process be taken as seriously as the takedown process.
Third, Facebook should adopt a policy of extreme transparency. While it recently joined its peers in reporting on the number of posts and accounts it takes down based on its terms of service, numbers alone are not enough. To assure the public that it is applying its policies even-handedly, Facebook should also publish details about the types of posts and accounts it takes down. Its recent transparency report remains a high-level view. For instance, it does not appear to separate the number of accounts removed from the number of pieces of content removed, or to provide details about the overlap between the two. To demonstrate that the company is abiding by its assurance that its "Dangerous Organizations and Individuals" policy applies equally to all groups engaged in organized violence, it should report not only takedowns of "ISIS and al-Qaeda content" but also takedowns of content from other groups that qualify (some of which is covered under the category of "hate speech," which it is beginning to report). The report also lacks case studies showing how some of the most difficult real-life scenarios are likely to play out; such examples would give users notice of whether their activity on the platform is likely to lead to the suppression of their posts or even their entire accounts.
Finally, Facebook should build on its current initiative and develop a system of regular, publicly available audits, overseen by a multi-stakeholder group, that measure bias and the platform's impact on civil rights and civil liberties.
Facebook is in an unenviable position, with any decision likely to leave some constituency dissatisfied. Regardless of the position it takes on any given issue, as a platform with over a billion users, it must lean far forward in embracing transparency and accountability. The new standards and audits are a welcome first step, but Facebook should expect its users, civil society, and governments around the world to demand more.
Faiza Patel
Faiza Patel serves as Co-Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU School of Law. She is also a member of the UN Human Rights Council's Working Group on the Use of Mercenaries. Follow her on Twitter: @FaizaPatelBCJ
Rachel Levinson-Waldman
Rachel Levinson-Waldman serves as Senior Counsel to the Brennan Center for Justice's Liberty and National Security Program, which seeks to advance effective national security policies that respect constitutional values and the rule of law.