
A display shows a facial recognition system for law enforcement during the NVIDIA GPU Technology Conference, which showcases artificial intelligence, deep learning, virtual reality, and autonomous machines, in Washington, D.C., November 1, 2017. (Photo: Saul Loeb/AFP via Getty Images)
Federal Study on Racial Biases in Facial Recognition Technology Confirms Warnings of Civil Liberties Groups
African American and Asian American men were misidentified 100 times as often as white men.
The U.S. government's first major federal study of facial recognition surveillance, released Thursday, shows the technology's extreme racial and gender biases, confirming what privacy and civil rights groups have warned about for years.
In a study of 189 algorithms used by law enforcement agencies to match facial recognition images with names in state and federal databases, the National Institute of Standards and Technology (NIST) found that Asian American and African American men were misidentified 100 times as often as white men.
The algorithms disproportionately favored white middle-aged men overall. Compared with young people, the elderly, and women of all ages, middle-aged white males were identified accurately most frequently, while Native American people were most frequently misidentified.
Such misidentifications can lead to false arrests, as well as the inability to secure employment, housing, or credit, the MIT Media Lab said in a study it conducted in 2018.
"Criminal courts are using algorithms for sentencing, mirroring past racial biases into the future," tweeted Brianna Wu, a U.S. House candidate in Massachusetts. "Tech is a new, terrifying frontier for civil rights."
The NIST study echoed the results of MIT Media Lab's "Gender Shades" study, in which researchers found that algorithms developed by three different companies most often misidentified women of color.
NIST's report is "a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties," Joy Buolamwini, lead author of the Gender Shades report, told the Washington Post.
Digital rights group Fight for the Future wrote on social media that the study demonstrated "why dozens of groups and tens of thousands of people are calling on Congress to ban facial recognition."
Fight for the Future launched its #BanFacialRecognition campaign in July, calling on local, state, and federal governments to ban the use of the technology by law enforcement and other public agencies, rather than merely regulating its use.
"This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits," Fight for the Future said when it launched the campaign.
This week, lawmakers in Alameda, Calif., became the latest local officials to vote for a ban.
Despite warnings from Fight for the Future and other groups including the ACLU, which sued the federal government in October over its use of the technology, the FBI has run nearly 400,000 searches of local and federal databases using facial recognition since 2011.
The algorithms studied by NIST were developed by companies including Microsoft, Intel, and Panasonic. Amazon, which developed facial recognition software called Rekognition, did not provide its algorithm for the study.
"Amazon is deeply cowardly when it comes to getting their facial recognition algorithm audited," tweeted Cathy O'Neil, an algorithm auditor.
Jay Stanley, a senior policy analyst at the ACLU, told the Post that inaccuracies in algorithms are "only one concern" that civil liberties groups have about the surveillance programs that the federal government is now studying.
"Face recognition technology--accurate or not--can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale," Stanley said.