Rep. Alexandria Ocasio-Cortez on Wednesday tied "the political reality that there is a global rise in authoritarianism and fascism" to concerns about the threats that facial recognition technology poses to Americans' civil rights and liberties.
"I don't want to see an authoritarian surveillance state," the New York Democrat told reporters on Capitol Hill, "whether it's run by a government or whether it's run by five corporations."
We caught up with @AOC & asked about Big Tech after the House Oversight Committee hearing on facial recognition today. “I don’t want to see an authoritarian surveillance state, whether it’s run by a government or whether it’s run by five corporations” pic.twitter.com/jwVlOsl3m6
— Davey Alba (@daveyalba) May 23, 2019
Ocasio-Cortez took questions from reporters after she participated in a House Oversight Committee hearing about "the use of facial recognition technology by government and commercial entities and the need for oversight on how this technology is used on civilians." Earlier this month, San Francisco passed the nation's first ban on the technology, barring city police and agencies—though not corporations or private citizens—from using it. At the federal level, however, no such rules exist.
The committee hearing featured testimonies from advocates and experts, including ACLU senior legislative counsel Neema Singh Guliani; Clare Garvie, a senior associate at Georgetown Law's Center on Privacy & Technology; and MIT researcher Joy Buolamwini, who founded the Algorithmic Justice League and has researched race and gender bias in facial recognition algorithms.
In an exchange circulated on social media, Ocasio-Cortez asked Buolamwini whether the algorithms are mostly designed by white men, are most effective at identifying white men, and have trouble correctly identifying people of color and people of different gender expressions—all of which the expert confirmed.
Ocasio-Cortez then asked, "Do you think this could exacerbate the already egregious inequalities in our criminal justice system?"
Buolamwini responded, "It already is."
.@AOC: Are algorithms most effective on women?
On people of different gender expressions?
-No, they exclude them
So what demographic is it mostly effective on?
Who are the primary engineers of the algorithms?
-Definitely white men pic.twitter.com/L1z2eLCVcg
— Public Citizen (@Public_Citizen) May 22, 2019
During the hearing, Ocasio-Cortez also touched on how facial recognition technology relates to constitutionally protected privacy rights.
Don’t want Amazon & Gov agencies selling + harvesting your facial recognition data without your knowledge?
Welcome to the fight for Roe v Wade!
Welcome to the fight for criminal justice reform!
Welcome to the fight for privacy!
Welcome to the fight for the 4th + 14th amendments! https://t.co/Ez6DJjfaAz
— Alexandria Ocasio-Cortez (@AOC) May 22, 2019
Despite concerns about privacy rights, the consequences of misidentification, and the potential for misuse of collected data, facial recognition technology has been used by airlines, government agencies, police, retailers, and schools across the country—all in the absence of nationwide regulations.
Some efforts at major U.S. tech companies to self-regulate have failed. Fast Company noted in a report Wednesday:
The facial recognition hearing happened on the same day that Amazon's investors voted down a measure seeking to stop the sale of [the company's face software] Rekognition to government agencies. Shareholders also voted down a measure directing Amazon to perform a review of the civil liberties implications of its face recognition technology. The measures were proposed by activist investors and employees, partly based on Buolamwini's research, but were vehemently opposed by Amazon, which argued that fears around bias and misuse of face recognition were "insignificant" to the company.
Civil and human rights advocates have pressured corporate giants in the field to refuse to sell facial recognition technology to governments across the globe, citing concerns about abuse. In January, 90 groups sent letters to executives at Amazon, Google, and Microsoft with that demand, warning that surveillance technology can endanger people—particularly marginalized communities—and undermine public trust.
As Democrats and Republicans in Congress consider how to regulate facial recognition tools in the United States, debates about the technology are raging worldwide. Privacy and data protection rights are at the center of an ongoing court case in the United Kingdom about police use of facial recognition cameras.
Exemplifying concerns over how the technology is being deployed in the U.K., a video recently went viral online after Met Police stopped and ticketed a man for refusing to have his face scanned as he walked by surveillance cameras in London.
Meanwhile in China, as the New York Times reported last month, authorities are "using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority," in what experts described as "the first known example of a government intentionally using artificial intelligence for racial profiling."
Asked by a reporter on Wednesday about the potential for the United States—particularly under the Trump administration—to follow in the footsteps of China in using real-time surveillance as a form of social control, Ocasio-Cortez said, "I absolutely am concerned about that."