

The State Department initiative aims to thwart efforts to weaken US alliances, something President Donald Trump has done repeatedly in his own social media posts.
A leaked diplomatic cable signed by US Secretary of State Marco Rubio instructs American embassies and consulates worldwide to engage in a coordinated campaign to counter foreign propaganda, which the missive defines in part as messaging that seeks to “shift blame to the United States.”
The Guardian, which obtained a copy of the cable, reported on Monday that US State Department employees have been asked to "work alongside the US military’s psychological operations unit to address the problem of rampant disinformation" on social media.
The cable points to the Community Notes feature on Elon Musk's X platform, which allows other X users to provide context or correct false claims on other users' posts, as a particularly useful feature for the US to push back against narratives promoted by foreign governments.
The initiative's main goals are "countering hostile messaging, expanding access to information, exposing adversary behavior, elevating local voices who support American interests, and promoting what it calls 'telling America’s story,'" The Guardian reported.
In explaining the need for the initiative, the State Department cable cited foreign influence campaigns that "seek to shift blame to the United States, sow division among allies, promote alternative worldviews antithetical to America’s interests, and even undermine American economic interests and political freedoms."
The cable did not address social media posts by US President Donald Trump, who has repeatedly sown divisions among US allies. On Tuesday, for example, the president once again lashed out at European nations for not helping carry out his unconstitutional war with Iran, telling them to "start learning how to fight for yourself" because "the USA won’t be there to help you anymore, just like you weren’t there for us."
The president's posts have also undermined the country's political freedoms, including multiple instances where he has described US journalists as the "enemy of the people," while pushing for American TV networks to lose their broadcasting licenses if they continue airing negative stories about him and his administration.
The plan to combat foreign influence operations comes as the US has struggled to fight a propaganda battle against Iran, and Trump last month even floated "charges of treason" for journalists who report what he described as "fake news" about the conflict.
Given the speed of AI’s development and its ubiquity, relying on companies to self-regulate is like closing the computer laptop after the deepfakes have been posted.
The explosion of AI into the marketplace has led to fears that workers, including white-collar workers, will soon become obsolete; that Big Tech firms will control more and more property, including intellectual property; that AI data centers will require so much energy as to overwhelm small communities, raise electricity prices, and accelerate global warming; and that the ongoing concentration of money, power, and software in the hands of tech billionaires will enable them to control political discourse and surveil the masses. Critics rightfully worry about AI upsetting social conventions, invading personal privacy, destroying jobs by making workers redundant, and challenging social mores.
When considered soberly, the risks of AI are the risks that accompany any new technology: reinforced racial bias and discrimination, economic inequality, deskilling of workers, and misinformation and manipulation that reflect existing power structures. Already pervasive society-wide gender and racial biases are reinforced in AI. Those programming AI systems are overwhelmingly white men, leading to biases in the development of AI tools, cybersecurity systems, policing software, and cameras.
AI has become a powerful force even in the area of pornography, where the dangers that accompany its spread illuminate the risks of the diffusion of AI generally. The shocking impacts include deepfakes (the artificial manipulation of images to embarrass or hurt others) and child abuse. Elon Musk’s “Grok” app is allowing users to undress anyone, including minors, while “X” refuses to take action. The American Federation of Teachers left “X” because of its dissemination of “sickening” images of children in various states of nudity.
These worries are playing out against the backdrop of the Epstein sexual predator scandal, which also involves modern technology, wealth, and privileged men. The same dynamic is reflected in the unfettered development of pornographic applications, too many of which thrive on sexual exploitation of women and children. In the US, the determination of President Donald Trump to avoid regulation of AI at the urging of industry thus becomes a greater danger. The spread of risky AI pornography results not from the unfettered prurient interests of purveyors and users, nor from a lack of moral safeguards, but from a failure of governance and an unwillingness to stifle profit in the name of free speech.
The exploitation of women’s sexual images without consent, coupled with the lack of robust oversight or age verification for mainstream platforms, perpetuates a cycle of harm.
In order to exert proper controls on the dark, abusive side of AI porn—and AI generally—we must understand what it is, how it developed, and how it might be controlled. Pornographic content has had a major presence in erotic and bawdy books and magazines over the centuries. You might say it became mainstream with Geoffrey Chaucer’s Canterbury Tales (late 14th century), although the modern notion of pornography arose in the mid-19th century. The internet enabled a pornography boom by bringing it to any computer and eventually to any cell phone. Though porn was expensive to produce, it generated high income, which stimulated further development of internet platforms where it is now both pervasive and free. Rather than selling copies of videos, the industry cleverly embraced online platforms to create multiple income streams through blind links, pop-up windows, pay-per-click ads, and sharing of traffic with other sites.
AI and such associated technologies as handheld electronic cameras and web pages have transformed the porn industry from being large and studio-centered to being a cottage industry for virtually any tube site, small warehouse, or apartment. But Big Tech dominates. Of over 1 billion websites, of which fewer than 200 million are active, at least 4% are porn related, and perhaps as many as 12%. By usage, even more of the net is related to pornography, perhaps 30% of the internet’s data usage, with raw bandwidth usage six times larger than for Hulu or YouTube. MindGeek, the owner of several of the most visited sites, including Pornhub, RedTube, and YouPorn, is a dominant force. Between 2013 and 2019, the number of visits registered on Pornhub grew nearly threefold, from 14.7 billion to 42 billion, with traffic increasingly originating from mobile devices; in January 2024 alone there were 11.4 billion mobile visits worldwide.
The majority of users are male.
All of these visits to porn sites generate huge profits, well over $100 billion worldwide annually. For perspective: these profits are greater than those for Apple, GM, and other major corporations. By the 2020s the top porn producing countries were: the United States, at 24.5%; the United Kingdom, 5.5%; and Germany, Brazil, France, and Russia at between 4% and 5%. The vibrant OnlyFans site, in which performers own their own content, reported $7.22 billion in gross revenue in 2024. During the Covid-19 pandemic, as isolated individuals turned to the web for sexual comfort, OnlyFans gross revenue rose 118%, followed by annual increases of 16% and 19% in 2022 and 2023, respectively.
The development of AI-generated pornography moved hand in hand with the rise of generative artificial intelligence. Much of the material is artificial, or at the very least enhanced. Many publicly accessible AI models generate text, audio, and images across the entire human spectrum of activities. They include ChatGPT, Gemini, DeepSeek, DALL-E, and Midjourney, which have content moderation systems to prevent the creation of sexually explicit material. But a large volume of the output is deepfakes and child pornography, both of which have generated outrage and calls for their control, if not outright criminalization, and their rapid removal from the worldwide web. And moderation works only so far.
As quickly as new AI programs are developed, work-arounds to the restrictions are found. A separate market for so-called unmoderated or uncensored generative AI tools has also emerged which enables production of sexually explicit content through web and app interfaces. As examples: Dreampress.ai and MySpicyVanilla.com prompt erotic stories, while PornPen.ai, Pornderful.ai, Unstability.ai, and other apps enable pornographic images or videos. The exploitation of women’s sexual images without consent, coupled with the lack of robust oversight or age verification for mainstream platforms, perpetuates a cycle of harm.
By now websites dedicated to AI-generated adult content have spread into the mainstream, where they may promote predation. They are first of all businesses dedicated to generating market interest and making profit, not to self-regulation. Drawing on huge libraries and data sets, they enable users to customize their preferences for body type; facial features; such enhancements as implants, tattoos, and piercings; kinds of encounters and positions; and fetishes. From the privacy of his own home, a user can thereby have sexual encounters, believing he may do so without endangering others or himself.
Ultimately, however, AI pornography distorts human sexuality, because everything is on demand and seemingly risk free. It trains desire without reciprocity. It erodes the human capacity for negotiation, refusal, and mutual recognition. What looks like personalization of preference is actually the substitution of a screen for a living, feeling autonomous partner. Thus, AI porn is less about sex than about power: It teaches users to expect intimacy without vulnerability and especially without responsibility, and it facilitates abuse of women and girls.
Because of the ease of production, the amorality of website owners, and the lack of regulation, there has been limited progress in fighting deepfakes.
This terrible reality plays out with respect to deepfakes. Deepfakes make it possible for people to create naked photos or videos of someone, then to use the artificial pornography to embarrass, blackmail, or otherwise hurt her or him. “Nudify” sites have proliferated rapidly, allowing millions of people to create nonconsensual images. Apps like DeepSwap and Face Swapping, which enable users to swap out faces in a video with a different face obtained elsewhere, have proliferated since the emergence of generative AI three years ago. Digitally edited pornographic videos featuring the faces of hundreds of non-consenting women draw tens of millions of visitors to websites.
Deepfakes are a “new method to deploy gender-based violence and erode women’s autonomy in their on-and-offline world.” In fact, in 2023, 98% of 95,820 deepfakes online were pornographic and 99% of those videos targeted women. To facilitate targeting, AI entrepreneurs created a website, MrDeepFakes, to which altered images have been uploaded for viewing and purchase. Deepfakes may be used as “revenge porn” when a jilted suitor determines to abuse an acquaintance by posting nonconsensual intimate AI images. As Paris Hilton recently testified on Capitol Hill about her experience with a private video gone public: “People called it a scandal. It wasn’t. It was abuse.”
As a result, there has been a sharp increase in crimes targeting children on the internet (online enticement, AI abuse, and trafficking). Reports of generative artificial intelligence (GAI)-related child sexual exploitation have skyrocketed from 6,835 to 440,419 in the last year alone. In the past few years in the US, 93.5% of individuals sentenced for sexual abuse were men, 67% of those sentenced in child pornography cases were white men, and 95% were US citizens. In February 2025 Europol busted a criminal gang that was distributing AI-generated images of child sexual abuse online. Abusive behavior extends to secondary schools, where students produce deepfake nude photos of their classmates with the help of AI. Boys are much more likely than girls to generate a deepfake nude photo. But because of the ease of production, the amorality of website owners, and the lack of regulation, there has been limited progress in fighting deepfakes.
In response to public outcry over perceived dangers of recombinant DNA research in the 1970s, the Cambridge, Massachusetts City Council voted to restrict work at MIT and Harvard laboratories. The vote, and concerns of molecular biologists themselves, led the burgeoning rDNA industry to adopt safety regulations on its own. In AI, too, the industry is by and large self-regulated to guard against misuse, disarm public interference, and ensure booming business opportunities. However, given the speed of AI’s development and its ubiquity, such a decision to self-regulate is like closing the computer laptop after the deepfakes have been posted.
A number of social media platforms and AI companies voluntarily introduced regulations and standards to limit hate speech, and combat incitement to violence against specific groups, genders, and orientations. More recently, many of these safeguards have been removed in the name of free speech and the right of the public to information. This has resulted in an explosion in hate speech, racism, and deepfakes. For example, after its acquisition by Elon Musk, Twitter took longer to review hateful content and remove it, an unsurprising result given that Musk fired thousands of employees who were responsible for moderation. He also has a misogynist view of women (whom he called “womb-creatures”), and he publicly saluted the Nazis who, he believes, merit a platform. Homophobic, transphobic, and racist hate speech on Twitter increased 50% under his ownership.
Similarly, in keeping with his quasi-libertarian views of free speech, Musk has refused to rein in Grok, his AI tool. Grok has a “Spicy” option that is being used to produce disgusting photographs of women and children in sexually compromising, explicit, and abusive situations. X officially allows pornographic content on its platform, too, but says it will block adult and violent posts from being seen by users who are under 18 or who do not opt in to see them. Shockingly, US Defense Secretary Pete Hegseth plans to integrate Grok into Pentagon networks, including classified systems, as part of a broader initiative to incorporate AI technology across the military. Does Hegseth have in mind the production of military deepfakes?
Having captured Trump’s fumbling mind, the massive AI industry has convinced the president to oppose meaningful local, state, and national laws to avoid “onerous” interference with commerce that may slow innovation. This lack of regulation has spilled over into AI and pornography. The technological billionaires who promote and sell AI applications in pornography may not understand or care about the abuse and suffering of women and children that has resulted from their apps. After all, Elon Musk, Bill Gates, Donald Trump, Howard Lutnick, Sergey Brin, Reid Hoffman, and many more techno-billionaires in government and industry have been linked directly to the Epstein scandal. The heavily redacted files released by the US Department of Justice contain no suggestion that these men committed sex crimes. But what do these contacts say about their attitudes toward women and children, and what has been the result?
The Internet Watch Foundation (IWF) has found thousands of AI-generated pictures online involving the sexual abuse of children. Such groups as the Sexual Violence Prevention Association have demanded stricter controls on AI image tools, swift takedown mechanisms, and legal action against those generating and circulating abusive content. But the number of realistic images, nearly all of which involve girls, skyrockets annually. Perpetrators easily download open-source AI models to their computers and quickly evade safeguards.
Confronting the purveyors of abusive AI and fighting immoral profit works.
Deepfakes might be addressed through such regulatory initiatives as the California AI Transparency Act, the Take It Down Act, the EU AI Act, and the UK Online Safety Act 2023. In 2024 the Czech Justice Ministry acted to amend a law that would make deepfake porn a criminal offense and make it easier for victims to defend themselves. The European Union has taken steps to address cyberstalking, online harassment, and incitement to hatred and violence. Unfortunately, enforcement remains inconsistent. For example, Scotland’s 2021 hate speech law criminalizes incitement to hatred, but excludes misogynistic hate.
Confronting the purveyors of abusive AI and fighting immoral profit works. Age and prior consent verification and other checks are always technically feasible to prevent abusive AI porn. Listening to pressure from anti-porn advocacy groups, Visa and Mastercard finally refused to accept payments from Pornhub, the world’s leading porn site, after a New York Times report that documented abuse and rape. This did more to slow Pornhub’s damaging practices than did years of content moderation. Ultimately, however, platforms face little accountability for hosting harmful content or for profiting from it.
OpenAI CEO Sam Altman believes in treating “adult users like adults,” with some age-gating but little control. Many apps and sites hire armies of content moderators to catch illegal and offensive content. But we have seen how Musk’s decision to fire moderators led to an increase in violent hate speech. OpenAI is thus actively recruiting a “head of preparedness”—a well-paid human—to address the “real challenges” of AI models. Altman had in mind the “potential impact of models on mental health” and models that can find “critical vulnerabilities” that attackers intend to use for harm. Altman’s announcement followed growing concern over the impact of AI chatbots on mental health, with lawsuits alleging that OpenAI’s ChatGPT “reinforced users’ delusions, increased their social isolation, and led some individuals to suicide.”
Like any other technological advance whose promoters have promised revolutionary changes in society and whose detractors have worried about the potential for moral, cultural, and social collapse, AI, in all of its applications, is a human technology, one that will be embraced and applied in human ways. The internet gives an open microphone to voices of anger and reason, to racism and equality, to raw pornographic images and erotic art with few filters. The Luddites of the early 19th century, the factory workers of the mid-20th century, and the more modern critics of robotics have long worried about their inevitable replacement by machines. Now AI has replaced pornographic models. Surely, the next steps require human analysis and intervention that machines, AI, and its billionaire owners can never provide.
"Billionaires are on track to break their $1 billion midterm spending record," said Americans for Tax Fairness.
Just 50 billionaire families in the United States have already dumped more than $430 million into the 2026 midterms, with the vast majority of the money flowing to Republican candidates and right-wing organizations such as MAGA Inc.—a super PAC aligned with President Donald Trump.
The progressive advocacy group Americans for Tax Fairness (ATF) released an analysis on Wednesday examining the most recent Federal Election Commission data, which underscores increasingly aggressive billionaire efforts to use their immense wealth to secure their favored political outcomes. In the 2024 federal elections, billionaires accounted for nearly 20% of all donations.
Elon Musk, the richest man in the world, tops the list of 2026 campaign spenders so far, donating roughly $71 million—including $10 million in support of a pro-Trump candidate running to succeed Sen. Mitch McConnell (R-Ky.).
Behind Musk is businessman Jeff Yass, a relatively low-profile billionaire who has spent millions in recent years promoting school privatization. Yass has so far spent $55 million in the 2026 midterm cycle, $16 million of which went to MAGA Inc.—the largest recipient of the billionaire's donations.
Combined, the 50 top-spending billionaire families—which ATF describes as "modern-day royalty"—have poured $433 million into the 2026 midterms to date.
"Billionaires are on track to break their $1 billion midterm spending record," ATF noted on social media, referring to the 2022 midterms. "The spending is projected to grow exponentially as November approaches."

ATF published its analysis days ahead of the latest round of nationwide "No Kings" protests against the Trump administration, set for this coming Saturday, March 28.
“The American people reject kings, political or financial,” David Kass, executive director of ATF, said in a statement on Wednesday. “Whether it’s an out-of-control chief executive in the White House or a billionaire wielding his huge fortune to influence elections, anti-democratic behavior is anathema to the American public."
"As we approach the 250th anniversary of our independence from the British monarchy," Kass added, "it’s more important than ever that we reform our campaign-finance and tax laws so that no billionaire can purchase a crown.”
ATF found that nearly 80% of top billionaire families' 2026 midterm spending—$344.3 million of the $433 million total—has gone to Republicans and GOP organizations, with the pro-Trump MAGA Inc. super PAC receiving $89 million, far more than any other group.
Four of the top five recipients of midterm cash from the nation's richest billionaires are pro-Republican PACs.
"Republicans and conservatives receive the lion’s share of billionaire financial support because it is the nation’s right-wing that works to ensure the wealthiest families get to keep and expand their fortunes, such as through the GOP tax-and-spending law enacted last year," ATF noted.