The U.S. government's first major federal study of facial recognition surveillance, released Thursday, shows the technology's extreme racial and gender biases, confirming what privacy and civil rights groups have warned about for years.

In a study of 189 algorithms used by law enforcement agencies to match facial recognition images with names in state and federal databases, the National Institute of Standards and Technology (NIST) found that Asian American and African American men were misidentified up to 100 times as often as white men.

The algorithms disproportionately favored middle-aged white men overall. Compared with young people, the elderly, and women of all ages, middle-aged white men were identified accurately most often, while Native American people were misidentified most frequently.

Such misidentifications can lead to false arrests as well as an inability to secure employment, housing, or credit, the MIT Media Lab found in a 2018 study.

"Criminal courts are using algorithms for sentencing, mirroring past racial biases into the future," tweeted Brianna Wu, a U.S. House candidate in Massachusetts. "Tech is a new, terrifying frontier for civil rights."
The NIST study echoed the results of the MIT Media Lab study, entitled "Gender Shades," in which researchers found that algorithms developed by three different companies most often misidentified women of color.

NIST's report is "a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties," Joy Buolamwini, lead author of the Gender Shades report, told the Washington Post.

Digital rights group Fight for the Future wrote on social media that the study demonstrated "why dozens of groups and tens of thousands of people are calling on Congress to ban facial recognition."

Fight for the Future launched its #BanFacialRecognition campaign in July, calling on local, state, and federal governments to ban the use of the technology by law enforcement and other public agencies, rather than merely regulating its use.

"This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits," Fight for the Future said when it launched the campaign.

This week lawmakers in Alameda, Calif.,
became the latest local officials to vote for a ban.

Despite warnings from Fight for the Future and other groups, including the ACLU, which sued the federal government in October over its use of the technology, the FBI has run nearly 400,000 searches of local and federal databases using facial recognition since 2011.

The algorithms studied by NIST were developed by companies including Microsoft, Intel, and Panasonic. Amazon, which developed the facial recognition software called Rekognition, did not provide its algorithm for the study.

"Amazon is deeply cowardly when it comes to getting their facial recognition algorithm audited," tweeted Cathy O'Neil, an algorithm auditor.

Jay Stanley, a senior policy analyst at the ACLU, told the Post that inaccuracies in algorithms are "only one concern" that civil liberties groups have about the surveillance programs the federal government is now studying.

"Face recognition technology—accurate or not—can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale," Stanley said.