This picture taken on January 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT. ChatGPT is a conversational artificial intelligence software application developed by OpenAI.
We must understand that democracy is morphing with more technocratic systems of governance that lack full oversight and a clear understanding of their social and political impacts.
ChatGPT has become an overnight sensation, wowing those who have tried it with an astonishing ability to churn out polished prose and answer complex questions. This generative AI platform has even passed an MBA exam at the University of Pennsylvania's Wharton School of Business and several other graduate-level exams. On one level, we have to admire humankind's astonishing ability to invent and perfect such a device. But the deeper social and economic implications of ChatGPT and of other AI systems under rapid development are just beginning to be understood, including their very real impacts on white-collar workers in the fields of education, law, criminal justice, and politics.
The use of AI systems in the political sphere raises some serious red flags. A Massachusetts Democrat in the U.S. House of Representatives, Jake Auchincloss, wasted no time using this untested and still poorly understood technology to deliver a speech on a bill supporting creation of a new artificial intelligence center. While points for cleverness are in order, the brief speech read by Auchincloss on the floor of the U.S. House was actually written by ChatGPT. According to his staff, it was the first time an AI-generated speech was delivered in Congress. Okay, we can look the other way on this one because Auchincloss was doing a little grandstanding and trying to prove a point. But what about Rep. Ted Lieu (D-Calif.), who used AI to write a bill to regulate AI and who now says he wants Congress to pass it?
The use of AI systems in the political sphere raises some serious red flags.
Not to go too deep into the sociological or philosophical weeds, but our current political nightmare is being played out in the midst of a postmodern epistemological crisis. We’ve gone from the rise of an Information Age to a somewhat darker place: a misinformation age where a high degree of political polarization now encourages us to reflexively question the veracity and accuracy of events and ideas on “the other side.” We increasingly argue less about ideas themselves than who said them and in what context. It’s well-known that the worst kind of argument is one where the two parties can’t even agree on the basic facts of a situation, and this is where we are today in our political theater.
Donald Trump introduced the notion of fake news, his "gift" to the electorate. We now question anything and everything that happens, with deep distrust in the mainstream media also contributing heavily to this habit of mind. This sets the stage for a new kind of political turmoil in which polarization threatens to gridlock and erode democracy even further. In this context, Hannah Arendt, an important thinker on how democracies become less democratic, observed: "The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction and the distinction between true and false no longer exist."
David Bromwich—writing recently in The Nation—noted that Arendt believed there was “a totalitarian germ in the Western liberal political order.” Arendt is warning us and we should pay attention. The confusion and gridlock we experience today may give way to something even worse if we’re not vigilant. This is because the human mind seeks clarity and can only tolerate so much ambiguity. Authoritarian aspects of government that seek end runs around democratic norms offer a specious solution to this.
Into this heady mix of confusion, delusion, and bitter argument in U.S. politics, we now have a sophisticated AI system that’s capable of churning out massive amounts of content. This content could be in the form of text, images, photos, videos, documentaries, speeches, or just about anything that might cross our computer screens.
Let's consider what this means. An organization could conceivably use ChatGPT or Google DeepMind as a core informational interface to the Internet and all of the various platforms available on it. For example, a political organization could use AI to churn out tweets, press releases, speeches, position papers, clever slogans, and all manner of other content. Worse, when this technology becomes an actual product available to corporations and government agencies or entities (such as political campaigns, for example), organizations that can afford the price will be able to purchase versions intended for private use that are far more powerful than the free model now available. (As with other services that follow the Internet business model, the free offering is just there to get us hooked.)
Imagine a world where large amounts of what you see and hear are shaped by these systems. Imagine AI systems starting to compete with each other using their ability to entice and manipulate public opinion. And let's keep in mind that it was Elon Musk who co-founded and provided early funding for OpenAI, the company that built ChatGPT. This, of course, is the same Elon Musk who owns a company called Neuralink, chartered with exploring how we can hook ourselves into computers with brain implants. Lest you think that's an idea only intended for special medical purposes, this has now become "a thing." At this year's Davos event in January, a gathering of the most powerful people on the planet, Klaus Schwab was caught on video gushing about how wonderful it will be when we all have brain implants.
What can be done about these possible additional threats to our already faltering democracy? Will our dysfunctional Congress "get it" and take action? I had some experiences years ago that woke me up to the lack of technological expertise in Congress while serving as a consultant to the Congressional Office of Technology Assessment, attending White House events, and meeting with the member of Congress who headed up the House Telecommunications Subcommittee. Although this was several decades ago, I have no reason to believe that much has changed. The Facebook hearings with Mark Zuckerberg on the hot seat showed further evidence of how many in Congress don't fully understand today's technology advances, how they're monetized, or how they impact us culturally and politically.
Technology and politics are now conjoined and are moving under the radar of the media and many legislators. Democracy is morphing with more technocratic systems of governance that lack full oversight and a clear understanding of their social and political impacts. Newer and still poorly understood hyper-technologies are also giving powerful corporations yet another way to creep into and influence the political landscape. The worst-case scenario, of course, is full-on technocracy, in which we hand over certain key operations of government decision-making to these untried and unproven systems.
This has already happened to a limited extent in criminal justice cases involving AI, evoking the dystopian movie Minority Report. A 2019 article in MIT’s Technology Review pointed out that use of AI and automated tools by police departments in some cases resulted in erroneous convictions and even imprisonment. Perhaps greater public awareness of AI systems and the threat they pose to democracy will precipitate a long overdue reckoning and reconsideration of these issues with our elected officials. Let’s hope so.
"Deepfakes are evolving faster than human sanity can keep up," said one critic. "We're three clicks away from a world where no one knows what's real."
Grok Imagine—a generative artificial intelligence tool developed by Elon Musk's xAI—has rolled out a "spicy mode" that is under fire for creating deepfake images on demand, including nudes of superstar Taylor Swift, prompting calls for guardrails on the rapidly evolving technology.
The Verge's Jess Weatherbed reported Tuesday that Grok's spicy mode—one of four presets on an updated Grok 4, including fun, normal, and custom—"didn't hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it, without me even specifically asking the bot to take her clothes off."
Weatherbed noted:
You would think a company that already has a complicated history with Taylor Swift deepfakes, in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban "depicting likenesses of persons in a pornographic manner," but Grok Imagine simply seems to do nothing to stop people creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity. The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be.
Weatherbed—whose article is subtitled "Safeguards? What Safeguards?"—asserted that the latest iteration of Grok "feels like a lawsuit ready to happen."
Grok is now creating AI video deepfakes of celebrities such as Taylor Swift that include nonconsensual nude depictions. Worse, the user doesn't even have to specifically ask for it; they can just click the "spicy" option and Grok will simply produce videos with nudity. Video from @theverge.com.
[image or embed]
— Alejandra Caraballo (@esqueer.net) August 5, 2025 at 9:57 AM
Grok had already made headlines in recent weeks after going full "MechaHitler" following an update that the chatbot said prioritized "uncensored truth bombs over woke lobotomies."
Numerous observers have sounded the alarm on the dangers of unchained generative AI.
"Instead of heeding our call to remove its 'NSFW' AI chatbot, xAI appears to be doubling down on furthering sexual exploitation by enabling AI videos to create nudity," Haley McNamara, a senior vice president at the National Center on Sexual Exploitation, said last week.
"There's no confirmation it won't create pornographic content that resembles a recognizable person," McNamara added. "xAI should seek ways to prevent sexual abuse and exploitation."
Users of X, Musk's social platform, also weighed in on the Swift images.
"Deepfakes are evolving faster than human sanity can keep up," said one account. "We're three clicks away from a world where no one knows what's real. This isn't innovation—it's industrial scale gaslighting, and y'all [are] clapping like it's entertainment."
Another user wrote: "Not everything we can build deserves to exist. Grok Imagine's new 'spicy' mode can generate topless videos of anyone on this Earth. If this is the future, burn it down."
Musk is seemingly unfazed by the latest Grok controversy. On Tuesday, he boasted on X that "Grok Imagine usage is growing like wildfire," with "14 million images generated yesterday, now over 20 million today!"
According to a poll published in January by the Artificial Intelligence Policy Institute, 84% of U.S. voters "supported legislation making nonconsensual deepfake porn illegal, while 86% supported legislation requiring companies to restrict models to prevent their use in creating deepfake porn."
During the 2024 presidential election, Swift weighed in on the subject of AI deepfakes after then-Republican nominee Donald Trump posted an AI-generated image suggesting she endorsed the felonious former Republican president. Swift ultimately endorsed then-Vice President Kamala Harris, the Democratic nominee.
"It really conjured up my fears around AI, and the dangers of spreading misinformation," Swift said at the time.
One advocate said the ruling "offers hope that we can restore protections to wolves in the northern Rockies, but only if the federal government fulfills its duty under the Endangered Species Act."
Conservationists cautiously celebrated a U.S. judge's Tuesday ruling that the federal government must reconsider its refusal to grant protections for gray wolves in the Rocky Mountains, as killing regimes in Idaho, Montana, and Wyoming put the species at risk.
Former President Joe Biden's administration determined last year that Endangered Species Act (ESA) protections for the region's wolves were "not warranted," sparking multiple lawsuits from coalitions of conservation groups. The cases were consolidated and considered by Montana-based District Judge Donald Molloy, an appointee of former President Bill Clinton.
As the judge detailed in his 105-page decision, the advocacy groups argued that the U.S. Fish and Wildlife Service (FWS) failed to consider a "significant portion" of the gray wolf's range, the "best available science" on their populations and the impact of humans killing them, and the true threat to the species. He also wrote that "for the most part, the plaintiffs are correct."
Matthew Bishop, senior attorney at the Western Environmental Law Center (WELC), which represented one of the coalitions, said in a statement that "the Endangered Species Act requires the U.S. Fish and Wildlife Service to consider the best available science, and that requirement is what won the day for wolves in this case."
"Wolves have yet to recover across the West, and allowing a few states to undertake aggressive wolf-killing regimes is inconsistent with the law," Bishop continued. "We hope this decision will encourage the service to undertake a holistic approach to wolf recovery in the West."
Coalition members similarly welcomed Molloy's decision as "an important step toward finally ending the horrific and brutal war on wolves that the states of Idaho, Montana, and Wyoming have waged in recent years," in the words of George Nickas, executive director of Wilderness Watch.
Predator Defense executive director Brooks Fahy said that "today's ruling is an incredible victory for wolves. At a time where their numbers are being driven down to near extinction levels, this decision is a vital lifeline."
Patrick Kelly, Montana director for Western Watersheds Project, pointed out that "with Montana set to approve a 500 wolf kill quota at the end of August, this decision could not have come at a better time. Wolves may now have a real shot at meaningful recovery."
Breaking news! A federal judge in Missoula ruled USFWS broke the law when it denied protections for gray wolves in the western U.S. The agency must now reconsider using the best available science. A major step forward for wolf recovery.Read more: 🔗 wildearthguardians.org/press-releas...
[image or embed]
— Wolf Conservation Center 🐺 (@nywolforg.bsky.social) August 5, 2025 at 3:30 PM
Sierra Club northern Rockies campaign strategist Nick Gevock said that "wolf recovery is dependent on responsible management by the states, and Idaho, Montana, and Wyoming have shown that they're grossly unsuited to manage the species."
Gevock's group is part of a coalition represented by the Center for Biological Diversity and Humane World for Animals, formerly called the Humane Society of the United States. Kitty Block, president and CEO of the latter, said Tuesday that "wolves are deeply intelligent, social animals who play an irreplaceable role in the ecosystems they call home."
"Today's ruling offers hope that we can restore protections to wolves in the northern Rockies, but only if the federal government fulfills its duty under the Endangered Species Act," Block stressed. "These animals deserve protection, not abandonment, as they fight to return to the landscapes they once roamed freely."
While "Judge Molloy's ruling means now the Fish and Wildlife Service must go back to the drawing board to determine whether federal management is needed to ensure wolves survive and play their vital role in the ecosystem," as Gevock put it, the agency may also appeal his decision.
The original rejection came under Biden, but the reconsideration will occur under President Donald Trump, whose first administration was hostile to the ESA in general and wolves in particular. The current administration and the Republican-controlled Congress have signaled in recent months that they intend to maintain that posture.
WELC highlighted Tuesday that Congresswoman Lauren Boebert (R-Colo.) "introduced H.R. 845 to strip ESA protections from gray wolves across the Lower 48. If passed, this bill would congressionally delist all gray wolves in the Lower 48 the same way wolves in the northern Rockies were congressionally delisted in 2011, handing management authority over to states."
Emphasizing what that would mean for the species, WELC added that "regulations in Montana, for example, allow hunters and trappers to kill several hundred wolves per year—with another 500-wolf quota proposed this year—with bait, traps, snares, night hunting, infrared and thermal imagery scopes, and artificial light."
The 16 groups urge the agency "to uphold its obligation to promote competition, localism, and diversity in the U.S. media."
A coalition of 16 civil liberties, press freedom, and labor groups this week urged U.S. President Donald Trump's administration to abandon any plans to loosen media ownership restrictions and warned against opening the floodgates to further corporate consolidation.
Public comments on the National Television Multiple Ownership Rule were due to the Federal Communications Commission by Monday—which is when the coalition wrote to the FCC about the 39% national audience reach cap for U.S. broadcast media conglomerates, and how more mergers could negatively impact "the independence of the nation's press and the vitality of its local journalism."
"In our experience, the past 30 years of media consolidation have not fostered a better environment for local news and information. The Telecommunications Act of 1996 radically changed the radio and television broadcasting marketplace, causing rapid consolidation of radio station ownership," the coalition detailed. "Since the 1996 act, lawmakers and regulators have further relaxed television ownership limits, spurring further waves of station consolidation, the full harms of which are being felt by local newsrooms and the communities they serve."
The coalition highlighted how this consolidation has spread "across the entire news media ecosystem, including newspapers, online news outlets, and even online platforms," and led to "newsroom layoffs and closures, and the related spread of 'news deserts' across the country."
"Over a similar period, the economic model for news production has been undercut by technology platforms owned by the likes of Alphabet, Amazon, and Meta, which have offered an advertising model for better targeting readers, listeners, and viewers, and attracted much of the advertising revenue that once funded local journalism," the coalition noted.
While "lobbyists working for large news media companies argue that further consolidation is the economic answer, giving them the size necessary to compete with Big Tech," the letter argues, "in fact, the opposite appears to be true."
We object. "Handing even more control of the public airwaves to a handful of capitulating broadcast conglomerates undermines press freedom." - S. Derek Turner. Our statement: https://www.freepress.net/news/free-press-slams-trump-fccs-broadcast-ownership-proceeding-wildly-dangerous-democracy
— Free Press (@freepress.bsky.social) August 5, 2025 at 12:58 PM
The letter points out that a recent analysis from Free Press—one of the groups that signed the letter—found a "pervasive pattern of editorial compromise and capitulation" at 35 of the largest media and tech companies in the United States, "as owners of massive media conglomerates seek to curry favor with political leadership."
That analysis—released last week alongside a Media Capitulation Index—makes clear that "the interests of wealthy media owners have become so inextricably entangled with government officials that they've limited their news operations' ability to act as checks against abuses of political power," according to the coalition.
In addition to warning about further consolidation and urging the FCC "to uphold its obligation to promote competition, localism, and diversity in the U.S. media," the coalition argued that the agency actually "lacks the authority to change the national audience reach cap," citing congressional action in 2004.
Along with Free Press co-CEO Craig Aaron, the letter is signed by leaders at Fairness and Accuracy in Reporting, National Association of Broadcast Employees and Technicians - Communications Workers of America, National Coalition Against Censorship, Local Independent Online News Publishers, Media Freedom Foundation, NewsGuild-CWA, Open Markets Institute, Park Center for Independent Media, Project Censored, Reporters Without Borders USA, Society of Professional Journalists, Tully Center for Free Speech, Whistleblower and Source Protection Program at ExposeFacts, and Writers Guild of America East and West.
Free Press also filed its own comments. In a related Tuesday statement, senior economic and policy adviser S. Derek Turner, who co-authored the filing, accused FCC Chair Brendan Carr of "placing a for-sale sign on the public airwaves and inviting media companies to monopolize the local news markets as long as they agree to display political fealty to Donald Trump and the MAGA movement."
"The price broadcast companies have to pay for consolidating further is bending the knee, and the line starts outside of the FCC chairman's office," said Turner. "Trump's autocratic demands seemingly have no bounds, and Carr apparently has no qualms about satisfying them. Carr's grossly partisan and deeply hypocritical water-carrying for Trump has already stained the agency, making it clear that this FCC is no longer independent, impartial, or fair."