
Tech and social media companies must change their terms of use and adopt corporate policies that more aggressively disrupt hateful online activities. Free Press and several allies developed a set of such policies, which more than 50 civil- and human-rights groups have endorsed. (Photo: Jenny Kane/Associated Press)
Tech Companies Need to Get Serious About Confronting Hate
Facebook, Twitter, YouTube and other large tech companies already have policies that ban hateful, violent, and harassing content. It is clear they need to do much more.
There are constant global reminders of the role that tech companies play in fueling the rampant rise of white supremacy. Following the massacre at two mosques in Christchurch, New Zealand, social-media platforms struggled to keep the horrific video of the live shooting off of their platforms. And after the Tree of Life Synagogue shooting in Pittsburgh, "Kill All Jews" trended on Twitter, and antisemitism surged on Instagram.
In the wake of these massacres, in which social media enabled individuals both to incite and then to praise these hate-filled attacks, the House Judiciary Committee is holding a hearing on hate crimes and the rise of white nationalism.
Facebook, Twitter, YouTube and other large tech companies already have policies that ban hateful, violent, and harassing content. But the way these policies have been enforced has disproportionately silenced people of color speaking out against injustice and racism while ignoring how white supremacists use these platforms to spread their hateful ideology.
Though organizations like the Southern Poverty Law Center identify and monitor hate groups across the country, and publicly share their findings, many of these groups operate in the mainstream and have prominent pages on social-media platforms--allowing them to maintain a veneer of legitimacy.
These hate groups also use popular memes and game social-media algorithms to ensure that their toxic ideas spread to the widest possible audiences.
Tech companies are falling prey to this manipulation--and they have their own inadequate content-moderation policies and enforcement mechanisms to blame.
For example, Facebook recently announced a ban on content that praises or supports white nationalism. Yet one week after the policy took effect, Facebook determined that a racist video from a Canadian white nationalist that "laments white 'replacement'" doesn't violate its new policy. If that's the case, it's hard to imagine what Facebook would balk at.
In a Washington Post piece written as part of his apology tour following a year of company scandals, Facebook CEO Mark Zuckerberg seemed to acknowledge the need for better content moderation. He wrote, "Internet companies should be accountable for enforcing standards on harmful content ... we need a more standardized approach."
It's time to put Mr. Zuckerberg's words into action. Tech companies can no longer sit idly by as white supremacists use their platforms to organize, recruit and fund hateful activities online, all too often leading to offline violence.
A way forward for these tech companies is to change their terms of service and adopt corporate policies that would disrupt hateful activities online. Free Press and several allies developed a set of such policies, which more than 50 civil- and human-rights groups have endorsed. These recommendations include guidance on enforcement, transparency, the right of appeal, governance and providing content moderators with well-informed training materials.
Asking tech companies to be accountable and enforce their policies when confronting racism and other forms of hate on their platforms is not a call for censorship. It also isn't a violation of people's First Amendment rights. After all, the First Amendment applies only to government interference in speech and will never be about the right to amplification on tech platforms.
Racists have long weaponized the media to legitimize genocide and slavery and to threaten and harass people of color. Hate speech has real-world impacts: It's used to silence women, people of color and religious minorities and to spur violence against these marginalized communities.
What's clear is that the status quo isn't working. Online platforms must mobilize money, time and resources to end hateful activities online.
If the latest massacres teach us anything, it's that tech companies need exactly the type of enforcement, increased transparency and corporate accountability that these model corporate policies recommend. It's time to change the terms.