A young Cambodian woman uses her laptop in a local restaurant in Phnom Penh, Cambodia. (Photo: Omar Havana/Getty Images)
A crowdsourced study by a leading human rights organization and a global artificial intelligence company confirms that women around the world face abuse when using Twitter, and that without a commitment from the platform to combat such treatment, pervasive online abuse will continue to harm and silence women, particularly women of color.
With the help of the artificial intelligence software company Element AI and 6,500 volunteers who sifted through more than 200,000 tweets sent to women in 2017, Amnesty International reported in its study, titled "Troll Patrol," that women were mentioned in 1.1 million abusive or problematic tweets that year.
The tweets the volunteers studied were sent to 778 female politicians and journalists; from that data, the researchers estimated that an abusive tweet is sent to a woman once every 30 seconds.
Women of color were about 34 percent more likely to face abuse on Twitter than white women, with black women particularly at risk: they received 84 percent more abusive tweets than their white counterparts.
"With the help of technical experts and thousands of volunteers, we have built the world's largest crowdsourced dataset about online abuse against women," said Milena Marin, senior advisor for tactical research at Amnesty International. "Troll Patrol means we have the data to back up what women have long been telling us: that Twitter is a place where racism, misogyny, and homophobia are allowed to flourish basically unchecked."
On Twitter, many women said they were not surprised by the findings, but expressed appreciation for a wide-ranging study that confirmed their own experiences online.
"Amnesty's crowdsourcing is the most revealing data available for a problem that so many people know about but haven't been able to quantify," wrote Emily Dreyfuss at Wired.
In the tweets the volunteers studied, women across the political spectrum faced abuse, but left-leaning politicians were 23 percent more likely to receive threatening or abusive tweets. Meanwhile, right-wing female journalists were 64 percent more likely to be targeted on Twitter.
The study grew out of Amnesty International's frustration with Twitter's dismissal of the issue of online abuse. CEO Jack Dorsey has repeatedly expressed his view that the company aims to protect free speech rights and avoids banning users, but Amnesty took issue with the company's refusal to release "meaningful information about reports of violence and abuse against women, as well as other groups, on the platform, and how they respond to it," and conducted the study to demonstrate how many women face abuse on Dorsey's platform every day.
When the group shared the data with Twitter, the company only asked for clarification on how Amnesty defines a "problematic" tweet, "in accordance with the need to protect free expression."
Amnesty defines problematic tweets as containing "hurtful or hostile content, especially if repeated to an individual on multiple occasions."
Such tweets "can reinforce negative or harmful stereotypes against a group of individuals (e.g. negative stereotypes about a race or people who follow a certain religion)" and "may still have the effect of silencing an individual or groups of individuals" even though they don't fit Twitter's definition of abuse.
"Troll Patrol isn't about policing Twitter or forcing it to remove content," Marin said. "We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on."
Given that women of color face disproportionate abuse on the platform, she added, "Twitter's failure to crack down on this problem means it is contributing to the silencing of already marginalized voices."