
"Biased artificial intelligence systems have become embedded in the fabric of our digital society and they must be rooted out," says Sen. Ed Markey, a Democrat from Massachusetts. (Photo: ljubaphoto/Getty Images)

"In order for a regulatory system targeting digital platforms to be effective, it must maximize user benefits while minimizing the possible harms," writes Sara Collins of Public Knowledge. (Photo: ljubaphoto/Getty Images)

The Privacy Debate Reveals How Big Tech’s “Transparency and User Control” Arguments Fall Flat

Giving users the illusion of control just isn’t a viable regulatory strategy.

Sara Collins

 by Public Knowledge

If you’ve been following the Capitol Hill hearings about algorithms and automated decision-making, then you’ve probably heard technology companies talk about how they offer or want to offer “transparency and user control” to consumers. Companies like Facebook and Twitter propose sharing information on how their algorithms work with the public, as well as enabling users to tweak these algorithms to change their experience on digital platforms. They argue that users are less susceptible to manipulation if they can control how a digital platform’s algorithm delivers content to them. While this seems persuasive, this kind of regulatory regime poses significant dangers. It cannot address harms like discrimination, loss of economic opportunity for content creators, or radicalization of users. Luckily, we’ve already had this conversation on privacy — and can choose not to make the same mistakes twice.

For much of the internet’s history, the U.S. has protected users’ privacy online (to the extent you can say it has been protected) through a “notice and choice” framework. Companies are expected to explain to users (usually through unreadable privacy policies) what data is collected and how it will be used. Sometimes, although not always, websites and applications will ask users to check boxes indicating that they have read and agreed to the privacy policy. Subsequent use of the website signals that users have been given notice about the site’s data practices and have chosen to accept those data uses. Notice a problem?


This framework assumes a few things that we know are not true: 1) that users read privacy policies; 2) that they understand what the policies say; and 3) that they have a practical choice about whether or not to use a website or application under those conditions. These assumptions are wrong, and, therefore, the “notice and choice” framework simply can’t protect internet users. Instead, what happens is that users drown in information they can’t be expected to read and understand, while companies are allowed to siphon data from those very same users and then exploit it. And it’s not 1998 anymore — not using the internet’s most dominant websites is not a viable choice.

The push for “transparency and user control” with regards to algorithms is reminiscent of the “notice and choice” framework. Both rely on users being able to understand the information they are given about a system, and then to make a choice about whether or how they will use it. If you thought making privacy policies readable was tough, try explaining an algorithm. And since the company is the one doing the explaining, it also provides an avenue for those very same companies to use dark patterns to circumvent user control. But the lack of true transparency isn’t even the major stumbling block for this method of regulation; it’s actually user choice.

In order for a regulatory system targeting digital platforms to be effective, it must maximize user benefits while minimizing the possible harms. Having users choose their algorithm does not address most of the harms that come with automated decision-making processes. First, there are plenty of algorithms powered by consumer data where the consumer has no say in how or when they are used. Think, for example, of hiring algorithms, remote proctoring software, tenant screening tools, and systems that validate insurance claims, to name a few. (Granted, even if a person had some choice or control in how those types of algorithms were used, there would still be serious harms associated with them.) However, let’s narrow our focus to the use cases Congress has taken up this year: content recommendation and curation algorithms. Even if we limit the “transparency and user control” proposal to content delivery platforms like Facebook, Twitter, or YouTube, it still won’t protect users.

The way people talk about recommendation engines or feeds would suggest that they are operated by one centralized algorithm, but that isn’t what’s happening. Generally, there is an algorithm tracking your thousands of interactions with a particular platform that is attempting to guess the type of content you would like to view next, but there are also other algorithmic systems at work influencing what type of content you will see. These systems include content moderation algorithms that enforce community guidelines and standards; copyright identification algorithms that attempt to identify copyrighted material so that it may be taken down; and even advertising algorithms that determine what ads you see and where. For these algorithms to do their jobs, users can’t have a choice in the matter. Copyrighted work must be taken down per the Digital Millennium Copyright Act; community standards aren’t really community standards if they aren’t enforced; and platforms need to please their advertisers to continue making money. We are left with users being able to tinker at the edges of the algorithm, which may change their experience somewhat, but certainly won’t address the harms.

Algorithmic harms can be categorized into four major buckets: loss of opportunity, economic loss, social detriment, and loss of liberty. To make these buckets more salient, let’s look at real-world examples. Facebook’s advertising system has been accused of discrimination in the delivery of insurance, housing, and employment ads. Maybe Facebook doesn’t actively choose to discriminate, but instead allows advertisers to determine who they want to see their ads. Advertisers use these targeting tools and their own past (often discriminatory) data to control very granularly who sees their ads. Even if an advertiser wasn’t intending to be discriminatory, the combination of granular targeting and biased data sets means people of color, women, and other marginalized groups often don’t see ads that would benefit them. The result is a loss of opportunity to change jobs, switch to better insurance, or find new housing in an area.


Furthermore, platforms aren’t just vehicles for viewing content — they can also help users make money. YouTube allows content creators to become “partners” with YouTube so that they can monetize their content. However, in recent years, these partner creators have accused YouTube of discrimination, saying that Black creators and LGBT creators have seen their content disproportionately de-monetized or even outright deleted with very little explanation. Those de-monetization and deletion decisions are generally not made by humans, but by the algorithm that checks content for compliance with the site’s community standards. Black creators such as Ziggi Tyler have reported similar problems with automated moderation on TikTok. All of this makes it difficult for marginalized creators to take full advantage of the economic opportunities presented by these kinds of platforms. Having users “choose” their preferred algorithm wouldn’t have stopped either of these harms from occurring.

You may wonder if allowing users to choose their algorithm would stop some of the socially detrimental harms, like radicalization or polarization. There is little evidence that users would actively choose to make their feeds more diverse and engage with a wider range of opinions and sources of information. In fact, Facebook itself found that users are more likely to engage with sensational or extreme content. In the consumer products context, flaws that created such harms would be called design flaws or defects, and it would be the manufacturer’s obligation to fix them. Lawmakers should treat algorithms the same way: these are not flaws consumers should be obligated to fix; the platforms, as the manufacturers, should bear that obligation.

Also, socially detrimental harms, like polarization and radicalization, can often lead to physical harm. While platforms themselves can’t take away a person’s life or liberty, that doesn’t mean they can’t be a contributing factor. Beyond the immense surveillance capabilities these platforms possess, they are also the breeding ground for violent political action like what was seen in Myanmar and at our own Capitol. Those violent actions began on social media and were inflamed by the way those platforms curate content. Only Facebook, not users, can correct for these societal harms.

Given the depth and breadth of harms that can arise from algorithmic decision-making, it would be incredibly unwise for Congress to limit itself to something as ineffective as “transparency and user control” for regulating it. I want to make clear that Public Knowledge isn’t opposed to platforms giving their users more visibility into opaque systems and more options for how to engage. That does provide benefits to users. But that is not how we are going to address harms like discrimination, radicalization, and economic inequality. These problems will require something more prescriptive, as part of a constellation of new tech regulations like a comprehensive federal privacy law, new competition rules, and even a digital regulator. Regulating algorithms is the next frontier of tech policy, and Public Knowledge will continue to explore and evaluate possible solutions for this emerging field. Fortunately, Congress has the opportunity to learn the lessons that have already been taught in privacy. Giving users the illusion of control just isn’t a viable regulatory strategy.


Sara Collins

Sara Collins is policy counsel at Public Knowledge.
