Apr 14, 2021
A coalition of more than 20 human rights organizations released an open letter Wednesday morning calling for a total ban on private and corporate use of facial recognition, an invasive technology that the groups characterized as discriminatory and "too dangerous to exist."
While campaigners have typically focused on the civil liberties threats posed by government and law enforcement use of facial recognition, the rights groups' new letter demands that businesses be barred from using the technology as well, warning that proliferation of face surveillance in the private sector would have serious privacy consequences for workers and consumers.
"The troubled record of facial recognition technology in identifying darker skinned people and youth poses severe dangers for people too often criminalized."
--Tracy Rosenberg, Oakland Privacy
"In a world where private companies are already collecting our data, analyzing it, and using it to manipulate us to make a profit, we can't afford to naively believe that private entities can be trusted with our biometric information," the letter reads. "We call on all local, state, and federal elected officials, as well as corporate leaders, to ban the use of facial recognition surveillance by private entities."
The groups cite several examples of corporations using facial recognition in ways that threaten workers' rights, including Amazon's requirement that delivery drivers consent to letting the company's AI-equipped cameras collect their biometric data and surveil their activity on the job. The coalition also points to Apple's facial recognition scans of its factory employees.
"These cases clearly show how private use of facial recognition by corporations, institutions, and even individuals poses just as much of a threat to marginalized communities as government use," the letter reads. "Corporations are already using facial recognition on workers in hiring, to replace traditional timecards, and to monitor workers' movements and 'productivity'--all of which particularly harm frontline workers and make them susceptible to harassment, exploitation, and put their personal information at risk."
Several cities across the United States have banned government use of facial recognition, but just one--Portland, Oregon--has banned both public agencies and private corporations from wielding the technology to surveil workers and customers. The coalition hailed the Portland ban as a "template for other lawmakers to address the concerns with private and corporate use of the technology."
"Facial recognition technology poses serious threats to personal freedom. Letting this tool of authoritarian control spread throughout the private sector has serious implications for worker organizing rights and heightens the risk of catastrophic biometric data breaches," Tracy Rosenberg, advocacy director at Oakland Privacy, said in a statement Wednesday.
"The troubled record of facial recognition technology in identifying darker skinned people and youth poses severe dangers for people too often criminalized," Rosenberg continued. "Facial recognition technology should be put back in the bottle."
Evan Greer, deputy director of Fight for the Future, stressed that mere reforms or self-regulation by private entities--such as opt-in consent measures--will not be enough to rein in the abusive technology.
"If employees have to agree to being under constant facial recognition surveillance in order to have a job, that's not meaningful consent," said Greer. "If a patient has to agree to have their biometric information collected in order to receive care at a hospital, that's not really consent. Even more innocuous uses, like getting your face scanned to buy a burrito, come with significant risks."
"The vast majority of people have no idea what the dangers of this technology are," Greer added, "and putting the onus on them fails to recognize power imbalances."
Read the full letter:
Wired has reported that Uber Eats drivers in the U.K. are being fired because of the company's faulty facial identification software, which requires drivers to submit selfies to confirm their identity. When the technology isn't able to match photos of the drivers with their accounts, drivers get booted off the system and are unable to work, and thus unable to pay their bills. This isn't the first time this has happened--in 2019 a Black Uber driver in the U.S. sued the company for its discriminatory facial recognition.
Cases like this are becoming increasingly prevalent: Amazon delivery drivers now have to agree to AI surveillance, including facial identification, or else lose their jobs, and Apple recently banned the use of facial recognition on employees visiting manufacturing sites but failed to extend that ban to protect factory workers. This level of surveillance creates many problems, including suppressing worker efforts to organize and engage in collective action. In each of these cases, frontline and marginalized workers are being targeted and their safety and rights are being undermined in favor of corporate surveillance, control, and power.
These cases clearly show how private use of facial recognition by corporations, institutions, and even individuals poses just as much of a threat to marginalized communities as government use. Corporations are already using facial recognition on workers in hiring, to replace traditional timecards, and to monitor workers' movements and "productivity"--all of which particularly harm frontline workers and make them susceptible to harassment, exploitation, and put their personal information at risk.
Using biometric surveillance technology in retail stores, hospitals, and healthcare settings, at concerts and sporting events, or in restaurants and bars will exacerbate existing discrimination. In the same way that Black and brown communities are targeted by police, companies can target certain communities with their facial recognition surveillance. A store could use a publicly available mugshot database to ban everyone with a criminal record from the store, which would disproportionately harm Black and brown people who are over-policed and over-represented in these databases. The impact of this would be compounded by the fact that facial recognition is notoriously bad at correctly identifying Black and brown faces. Overall this feeds a system of mass criminalization, where Black and brown people are treated as guilty everywhere they go.
Biometric surveillance is more like lead paint or nuclear weapons than firearms or alcohol. The severity and scale of harm that facial recognition technology can cause requires more than a regulatory framework. The vast majority of uses of this technology, whether by governments, private individuals, or institutions, should be banned. Facial recognition surveillance is inherently discriminatory. It cannot be reformed or regulated; it should be abolished.
In 2020, Portland, OR, passed a groundbreaking ban on private use of facial recognition, which smartly bans use in places of public accommodation as defined by the Americans with Disabilities Act. We believe this ordinance should be used as a template for more city, state, and federal legislation that bans private and corporate use of facial recognition surveillance.
In a world where private companies are already collecting our data, analyzing it, and using it to manipulate us to make a profit, we can't afford to naively believe that private entities can be trusted with our biometric information. A technology that is inherently unjust, that has the potential to exponentially expand and automate discrimination and human rights violations, and that contributes to an ever growing and inescapable surveillance state is too dangerous to exist.
We call on all local, state, and federal elected officials, as well as corporate leaders, to ban the use of facial recognition surveillance by private entities. The dangers of facial recognition far outweigh any potential benefits, which is why banning both government and private use of facial recognition is the only way to keep everyone safe.
Signed,
American-Arab Anti-Discrimination Committee
Cryptoharlem
Daily Kos
Demand Progress
Electronic Privacy Information Center (EPIC)
Fight for the Future
Greenpeace USA
Massachusetts Jobs with Justice
MediaJustice
MPower Change
Muslim Justice League
Oakland Privacy
Open MIC (Open Media & Information Companies Initiative)
Presente
Privacy PDX
RAICES
Restore the Fourth
RootsAction.org
Secure Justice
S.T.O.P. (Surveillance Technology Oversight Project)
United We Dream
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.