Apr 08, 2019
There are constant global reminders of the role that tech companies play in fueling the rampant rise of white supremacy. Following the massacre at two mosques in Christchurch, New Zealand, social-media platforms struggled to keep the horrific video of the livestreamed shooting off their platforms. And after the Tree of Life Synagogue shooting in Pittsburgh, "Kill All Jews" trended on Twitter, and antisemitism surged on Instagram.
In the wake of these massacres, in which social media enabled individuals to both incite and then praise these hate-filled attacks, the House Judiciary Committee is holding a hearing on hate crimes and the rise of white nationalism.
Facebook, Twitter, YouTube and other large tech companies already have policies that ban hateful, violent, and harassing content. But the way these policies have been enforced has disproportionately silenced people of color speaking out against injustice and racism while ignoring how white supremacists use these platforms to spread their hateful ideology.
Though organizations like the Southern Poverty Law Center identify and monitor hate groups across the country, and publicly share their findings, many of these groups operate in the mainstream and have prominent pages on social-media platforms--allowing them to maintain a veneer of legitimacy.
These hate groups also use popular memes and game social-media algorithms to ensure that their toxic ideas spread to the widest possible audiences.
Tech companies are falling prey to this manipulation--and they have their own inadequate content-moderation policies and enforcement mechanisms to blame.
For example, Facebook recently announced a ban on content that praises or supports white nationalism. However, one week after the policy was in place, Facebook determined that a racist video from a Canadian white nationalist that "laments white 'replacement'" doesn't violate its new policy. If that's the case, it's hard to imagine what Facebook would balk at.
In a Washington Post piece written as part of his apology tour following a year of company scandals, Facebook CEO Mark Zuckerberg seemed to acknowledge the need for better content moderation. He wrote, "Internet companies should be accountable for enforcing standards on harmful content ... we need a more standardized approach."
It's time to put Mr. Zuckerberg's words into action. Tech companies can no longer sit idly by as white supremacists use their platforms to organize, recruit and fund hateful activities online--leading all too often to offline violence.
A way forward for these tech companies is to change the terms and adopt corporate policies that would disrupt hateful activities online. Free Press and several allies developed a set of such policies, which more than 50 civil- and human-rights groups have endorsed. These recommendations include guidance on enforcement, transparency, the right of appeal, governance and providing content moderators with well-informed training materials.
Asking tech companies to be accountable and enforce their policies when confronting racism and other forms of hate on their platforms is not a call for censorship. It also isn't a violation of people's First Amendment rights. After all, the First Amendment applies only to government interference in speech and will never be about the right to amplification on tech platforms.
Racists have long weaponized the media to legitimize genocide and slavery and to threaten and harass people of color. Hate speech has real-world impacts: It's used to silence women, people of color and religious minorities and to spur violence against these marginalized communities.
What's clear is that the status quo isn't working. Online platforms must mobilize money, time and resources to end hateful activities online.
If the latest massacres teach us anything, it's that tech companies need exactly the type of enforcement, increased transparency and corporate accountability that these model corporate policies recommend. It's time to change the terms.