
Facial recognition technology has been widely criticized for, among other things, misidentifying people of color. (Photo: Fractal Pictures/Shutterstock)
'Pivotal Moment' as Facebook Ditches 'Dangerous' Facial Recognition System
"We cannot trust governments, law enforcement, or private companies with this kind of invasive surveillance," stressed one digital rights campaigner.
Digital rights advocates on Tuesday welcomed Facebook's announcement that it plans to jettison its facial recognition system, which critics contend is dangerous and often inaccurate technology abused by governments and corporations to violate people's privacy and other rights.
"Corporate use of face surveillance is very dangerous to people's privacy."
Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation (EFF) who last month called facial recognition technology "a special menace to privacy, racial justice, free expression, and information security," commended the new Facebook policy.
"Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology," he said. "Corporate use of face surveillance is very dangerous to people's privacy."
The social networking giant first introduced facial recognition software in late 2010 as a feature to help users identify and "tag" friends without the need to comb through photos. The company subsequently amassed one of the world's largest digital photo archives, which was largely compiled through the system. Facebook says over one billion of those photos will be deleted, although the company will keep DeepFace, the advanced algorithm that powers the facial recognition system.
In a blog post, Jerome Pesenti, the vice president of artificial intelligence at Meta (the new name of Facebook's parent company following a rebranding last week that was widely condemned as a ploy to distract from recent damning whistleblower revelations), described the policy change as "one of the largest shifts in facial recognition usage in the technology's history."
"The many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole," he wrote.
The New York Times reports:
Facial recognition technology, which has advanced in accuracy and power in recent years, has increasingly been the focus of debate because of how it can be misused by governments, law enforcement, and companies. In China, authorities use the capabilities to track and control the Uighurs, a largely Muslim minority. In the United States, law enforcement has turned to the software to aid policing, leading to fears of overreach and mistaken arrests.
Concerns over actual and potential misuse of facial recognition systems have prompted bans on the technology in over a dozen U.S. locales, beginning with San Francisco in 2019 and since spreading to cities from Portland, Maine, to Portland, Oregon.
Caitlin Seeley George, campaign director at Fight for the Future, was among the online privacy campaigners who welcomed Facebook's move. In a statement, she said that "facial recognition is one of the most dangerous and politically toxic technologies ever created. Even Facebook knows that."
Seeley George continued:
From misidentifying Black and Brown people (which has already led to wrongful arrests) to making it impossible to move through our lives without being constantly surveilled, we cannot trust governments, law enforcement, or private companies with this kind of invasive surveillance.
"Even as algorithms improve, facial recognition will only be more dangerous," she argued. "This technology will enable authoritarian governments to target and crack down on religious minorities and political dissent; it will automate the funneling of people into prisons without making us safer; it will create new tools for stalking, abuse, and identity theft."
Seeley George said the "only logical action" for lawmakers and companies to take is banning facial recognition.
Amid applause for the company's announcement, some critics took exception to Facebook's retention of DeepFace, as well as its consideration of "potential future applications" for facial recognition technology.

