Aug 26, 2021
Digital rights advocates reacted harshly Thursday to a new U.S. government watchdog report detailing how ten federal agencies plan to greatly expand their reliance on facial recognition in the years ahead.
The Government Accountability Office surveyed federal agencies and found that ten have specific plans to increase their use of the technology by 2023, surveilling people for numerous purposes, including identifying criminal suspects, tracking government employees' level of alertness, and matching the faces of people on government property against watch lists.
The report (pdf) was released as lawmakers face pressure to pass legislation to limit the use of facial recognition technology by the government and law enforcement agencies.
Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.) introduced the Fourth Amendment Is Not for Sale Act in April to prevent agencies from using "illegitimately obtained" biometric data, such as photos from the software company Clearview AI. The company has scraped billions of photos from social media platforms without approval, and its software is currently used by hundreds of police departments across the United States.
The bill has not received a vote in either chamber of Congress yet.
The plans described in the GAO report, tweeted law professor Andrew Ferguson, author of "The Rise of Big Data Policing," are "what happens when Congress fails to act."
"New GAO report on #FacialRecognition shows what happens when Congress fails to act. Train is rolling down the tracks folks… I don't see federal brakes. #surveillance https://t.co/8A8w8MTBf0" — Andrew G. Ferguson, Aug 26, 2021
Six agencies, the Departments of Homeland Security (DHS), Justice (DOJ), Defense (DOD), Health and Human Services (HHS), the Interior, and the Treasury, plan to expand their use of facial recognition technology to "generate leads in criminal investigations, such as identifying a person of interest, by comparing their image against mugshots," the GAO reported.
DHS, DOJ, HHS, and the Interior all reported using Clearview AI to compare images with "publicly available images" from social media.
The DOJ, DOD, HHS, Department of Commerce, and Department of Energy said they plan to use the technology for what the report calls "physical security": monitoring their facilities to determine whether an individual on a government watchlist is present.
"For example, HHS reported that it used [a facial recognition technology] system (AnyVision) to monitor its facilities by searching live camera feeds in real-time for individuals on watchlists or suspected of criminal activity, which reduces the need for security guards to memorize these individuals' faces," the report reads. "This system automatically alerts personnel when an individual on a watchlist is present."
The Electronic Frontier Foundation said the government's expanded use of the technology for law enforcement purposes is one of the "most disturbing" aspects of the GAO report.
"Face surveillance is so invasive of privacy, so discriminatory against people of color, and so likely to trigger false arrests, that the government should not be using face surveillance at all," the organization told MIT Technology Review.
According to the Washington Post, three lawsuits have been filed in the last year by people who say they were wrongly accused of crimes after being mistakenly identified by law enforcement agencies using facial recognition technology. All three of the plaintiffs are Black men.
A federal study in 2019 showed that Asian and Black people were up to 100 times more likely to be misidentified by the technology than white men. Native Americans had the highest false identification rate.
Maine, Virginia, and Massachusetts have banned or sharply curtailed the use of facial recognition systems by government entities, and cities across the country including San Francisco, Portland, and New Orleans have passed strong ordinances blocking their use.
But many of the federal government's planned uses for the technology, Jake Laperruque of the Project on Government Oversight told the Post, "present a really big surveillance threat that only Congress can solve."
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.