Oct 28, 2020
I confess: I still have a Facebook account. This might seem unusual for someone who works on curbing the spread of hate speech online.
Much of my feed consists of puppies, babies and posts from my local Buy Nothing Group. Your Facebook feed probably looks a lot different from mine. We have different friends, different likes, different group affiliations and different interests. As a result, Facebook's algorithms deliver an experience unique to each of us, specifically designed to keep us hooked on the platform. On Facebook, none of us share the same reality.
Just because some of us don't see hateful posts in our newsfeeds doesn't diminish the impact they have on those who do. Facebook has long deployed algorithms that amplify and spread the organic posts, ads, events, groups and pages that endanger people's lives.
The proliferation of divisive and hateful content is baked into Facebook's business model: The platform profits from finely targeting ads to users who are most likely to respond to them.
Facebook's own research shows that divisive content more deeply engages users. And a Facebook executive recently revealed that "right-wing populism is always more engaging" than centrist or progressive fare, with content that touches on topics such as "nation, protection, the other, anger, [and] fear."
The guidelines for what is acceptable or banned on the platform sprawl across a confusing labyrinth of pages on Facebook's humongous corporate site. You'll need to read all of the Community Standards, Terms of Service, Newsroom and Help Center posts to begin to get a clear picture. And just when you think you've figured it out, Facebook announces another policy change or tweaks its standards.
By intentionally seeding this patchwork, Facebook creates the illusion of transparency. This tangle of conflicting policies and webpages -- on top of the separate internal guidelines given to content moderators -- allows Facebook to systematically break its promise to keep people safe, letting white supremacists and hate groups use the platform to spread brutal ideologies, fundraise and organize events that incite violence.
We saw a tragic example of this just recently, when the event page of the right-wing paramilitary group Kenosha Guard invited members to arrive armed at a racial-justice protest in Wisconsin. The event was flagged 455 times in a day and reviewed by four human moderators, who determined it did not violate Facebook's policies. Facebook took notice only after a 17-year-old crossed state lines and killed two people protesting the police shooting of Jacob Blake. Mark Zuckerberg called Facebook's failure to remove the page an "operational mistake."
These types of "mistakes" are inexcusable -- especially since the organization Muslim Advocates has been sounding alarms about event-page abuses since 2015. Facebook repeatedly ignored the group's prescient warnings and repeatedly failed to take down pages promoting hateful rallies. And even in the wake of Kenosha, Facebook has yet to modify its events policies to clearly indicate that calls to arms are prohibited.
In an attempt to placate critics, Facebook has announced a slew of policy changes in recent weeks. These include a ban on ads that "praise, support or represent militarized social movements," the addition of context to posts when a candidate or party prematurely declares victory, a ban on ads that delegitimize the election's outcome, and a ban on political ads after Election Day until a winner is officially declared.
Facebook also banned pages representing QAnon and other violent groups across its platforms (but not the content itself). The company also updated its hate-speech policy, adding Holocaust denial to the list of prohibited content. A New York Times opinion piece by Charlie Warzel noted that these changes are a "tacit admission that what is good for Facebook is, on the whole, destabilizing for society."
Civil-rights and racial-justice organizations, scholars, activists and journalists have long pressured Facebook to make these kinds of changes -- and many others. And for its part, Facebook is spinning these updates as "progress," even though they came too late -- a month before the presidential election -- to repair the significant damage inflicted throughout 2020.
Clear action steps to mitigate hate online have been available to Facebook for years. In 2018, a coalition of groups, including Free Press, crafted Change the Terms -- a set of model policies to curb hateful activities on online platforms. Among the many recommendations is that platforms enforce policies in a transparent, equitable and culturally relevant way.
Despite its recent changes and enforcement actions, Facebook still has a long way to go to meaningfully address its failure to protect our communities and our democracy. People's lives are on the line.