If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.
Recent articles and books about artificial intelligence offer images of the future that align like iron filings around two magnetic poles—utopia and apocalypse.
On one hand, AI is said to be leading us toward a perfect future of ease, health, and broadened understanding. We, aided by our machines and their large language models (LLMs), will know virtually everything and make all the right choices to usher in a permanent era of enlightenment and plenty. On the other hand, AI is poised to thrust us into a future of unemployment, environmental destruction, and delusion. Our machines will gobble scarce resources while churning out disinformation and making deadly weapons that AI agents will use to wipe us out once we’re of no further use to them.
Utopia and apocalypse have long exerted powerful pulls on human imagination and behavior. (My first book, published in 1989 and updated in 1995, was Memories and Visions of Paradise: Exploring the Universal Myth of a Lost Golden Age; it examined the history and meaning of the utopian archetype.) New technologies tend to energize these two polar attractors in our collective psyche because toolmaking and language are humanity’s two superpowers, which have enabled our species to take over the world, while also bringing us to a point of existential peril. New technologies increase some people’s power over nature and other people, producing benefits that, mentally extrapolated forward in time, encourage expectations of a grand future. But new technologies also come with costs (resource depletion, pollution, increased economic inequality, accidents, and misuse) that evoke fears of an ultimate reckoning. Language supercharges our toolmaking talent by enabling us to learn from others; it is also the vehicle for formulating and expressing our hopes and fears. AI, because it is both technological and linguistic, and because it is being adopted at a frantic pace and so disruptively, is especially prone to triggering the utopia-apocalypse reflex.
We humans have been ambivalent about technology at least since our adoption of writing. Tools enable us to steal fire from the gods, like the mythical Prometheus, whom the gods punished with eternal torment; they are the wings of Icarus, who flies too close to the sun and falls to his death. AI promises to make technology autonomously intelligent, thus calling to mind still another cautionary tale, “The Sorcerer’s Apprentice.”
What could go right—or wrong? After summarizing both the utopian and apocalyptic visions for AI, I’ll explore two questions: first, how do these extreme visions help or mislead us in our attempts to understand AI? And second, whom do these visions serve? As we’ll see, there are some early hints of AI’s ultimate limits, which suggest a future that doesn’t align well with many of the highest hopes or deepest fears for the new technology.
As a writer, I generally don’t deliberately use AI. Nevertheless, in researching this article, I couldn’t resist asking Google’s free AI Overview, “What is the utopian vision for AI?” This came back a fraction of a second later:
The utopian vision for AI envisions a future where AI seamlessly integrates into human life, boosting productivity, innovation, and overall well-being. It’s a world where AI solves complex problems like climate change and disease, and helps humanity achieve new heights.
AI Overview’s first sentence needs editing to remove the verbal redundancy (vision, envisions), but the AI does succeed in cobbling together a serviceable summary of its promoters’ dreams.
The same message is on display in longer form in the article “Visions of AI Utopia” by Future Sight Echo, who informs us that AI will soften the impacts of economic inequality by delivering resources more efficiently and “in a way that is dynamic and able to adapt instantly to new information and circumstances.” Increased efficiency will also reduce humanity’s impact on the environment by minimizing energy requirements and waste of all kinds.
But that’s only the start. Education, creativity, health and longevity, translation and cultural understanding, companionship and care, governance and legal representation—all will be revolutionized by AI.
There is abundant evidence that people with money share these hopes for AI. The hottest stocks on Wall Street (notably Nvidia) are AI-related, as are many of the corporations that contribute significantly to the NPR station I listen to in Northern California, thereby gaining naming rights at the top of the hour.
Capital is being shoveled in the general direction of AI so rapidly (roughly $300 billion just this year, in the U.S. alone) that, if its advertised potential is even half believable, we should all rest assured that most human problems will soon vanish.
Or will they?
Strangely, when I initially asked Google’s AI, “What is the vision for AI apocalypse?”, its response was, “An AI Overview is not available for this search.” Maybe I didn’t word my question well. Or perhaps AI sensed my hostility. Full disclosure: I’ve gone on record calling for AI to be banned immediately. (Later, AI Overview was more cooperative, offering a lengthy summary of “common themes in the vision of an AI apocalypse.”) My reason for proposing an AI ban is that AI gives us humans more power, via language and technology, than we already have; and that, collectively, we already have way too much power vis-à-vis the rest of nature. We’re overwhelming ecosystems through resource extraction and waste dumping to such a degree that, if current trends continue, wild nature may disappear by the end of the century. Further, the most powerful humans are increasingly overwhelming everyone else, both economically and militarily. Exerting our power more intelligently probably won’t help, because we’re already too smart for our own good. The last thing we should be doing is to cut language off from biology so that it can exist entirely in a simulated techno-universe.
Let’s be specific. What, exactly, could go wrong because of AI? For starters, AI could make some already bad things worse—in both nature and society.
There are many ways in which humanity is already destabilizing planetary environmental systems; climate change is the way that’s most often discussed. Through its massive energy demand, AI could accelerate climate change by generating more carbon emissions. According to the International Energy Agency, “Driven by AI use, the U.S. economy is set to consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminum, steel, cement, and chemicals.” The world also faces worsening water shortages; AI needs vast amounts. Nature is already reeling from humanity’s accelerating rates of resource extraction and depletion. AI requires millions of tons of copper, steel, cement, and other raw materials, and suppliers are targeting Indigenous lands for new mines.
We already have plenty of social problems, too, headlined by worsening economic inequality. AI could widen the divide between rich and poor by replacing lower-skilled workers with machines while greatly increasing the wealth of those who control the technology. Many people worry that corporations have gained too much political influence; AI could accelerate this trend by making the gathering and processing of massive amounts of data on literally everyone cheaper and easier, and by facilitating the consolidation of monopolies. Unemployment is always a problem in capitalist societies, but AI threatens to quickly throw millions of white-collar workers off payrolls: Anthropic’s CEO Dario Amodei predicts that AI could eliminate half of entry-level white-collar jobs within five years, while Bill Gates forecasts that only three job fields will survive AI—energy, biology, and AI system programming.
However, the most horrific visions for AI go beyond just making bad things worse. The title of a recent episode of The Bulwark Podcast, “Will Sam Altman and His AI Kill Us All?”, states the worst-case scenario bluntly. But how, exactly, could AI kill us all? One way is by automating military decisions while making weapons cheaper and more lethal (a recent Brookings commentary was titled, “How Unchecked AI Could Trigger a Nuclear War”). Veering toward dystopian sci-fi, some AI philosophers opine that the technology, once it’s significantly smarter than people, might come to view biological humans as pointless wasters of resources that machines could use more efficiently. At that point, AI could pursue multiple pathways to terminate humanity.
I don’t know the details of how AI will unfold in the months and years to come. But the same could be said for AI industry leaders. They certainly understand the technology better than I do, but their AI forecasts may miss a crucial factor. You see, I’ve trained myself over the years to look for limits in resources, energy, materials, and social systems. Most people who work in the fields of finance and technology tend to ignore limits, or even to believe that there are none. This leads them to absurdities, such as Elon Musk’s expectation of colonizing Mars. Earth is finite, humans will be confined to this planet forever, and therefore lots of things we can imagine doing just won’t happen. I would argue that discussions about AI’s promise and peril need a dose of limits awareness.
Arvind Narayanan and Sayash Kapoor, in an essay titled “AI Is Normal Technology,” offer some of that awareness. They argue that AI development will be constrained by the speed of human organizational and institutional change and by “hard limits to the speed of knowledge acquisition because of the social costs of experimentation.” However, the authors do not take the position that, because of these limits, AI will have only minor impacts on society; they see it as an amplifier of systemic risks.
In addition to the social limits Narayanan and Kapoor discuss, there will also (as mentioned above) be environmental limits to the energy, water, and materials that AI needs, a subject explored at a recent conference.
Finally, there’s a crucial limit to AI development that’s inherent in the technology itself. Large language models need vast amounts of high-quality data. However, as more information workers are replaced by AI, or start using AI to help generate content (both trends are accelerating), more of the data available to AI will be AI-generated rather than being produced by experienced researchers who are constantly checking it against the real world. Which means AI could become trapped in a cycle of declining information quality. Tech insiders call this “AI model collapse,” and there’s no realistic plan to stop it. AI itself can’t help.
In his article “Some Signs of AI Model Collapse Begin to Reveal Themselves,” Steven J. Vaughan-Nichols argues that this is already happening. There have been widely reported instances of AI inadvertently generating fake scientific research documents. The Chicago Sun-Times recently published a “Best of Summer” feature that included forthcoming novels that don’t exist. And the Trump administration’s widely heralded “Make America Healthy Again” report included citations (evidently AI-generated) for non-existent studies. Most of us have come to expect that new technologies will have bugs that engineers will gradually remove or work around, resulting in improved performance. With AI, errors and hallucination problems may just get worse, in a cascading crescendo.
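The degenerative cycle described above can be sketched with a toy statistical simulation (a hypothetical illustration of the recursive-training dynamic, not a model of any real LLM): each “generation” is trained only on samples drawn from the previous generation’s fit, rather than on fresh real-world data, and the diversity of what the model knows steadily collapses.

```python
import random
import statistics

def model_collapse_demo(generations=300, sample_size=30, seed=42):
    """Toy sketch of recursive training: each generation fits a Gaussian
    to samples drawn from the previous generation's fit, never from reality."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # the "real world" the first model learns from
    spreads = [sigma]
    for _ in range(generations):
        # Draw training data from the current model's output...
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...then refit: the next generation only knows these samples.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        spreads.append(sigma)
    return spreads

spreads = model_collapse_demo()
print(f"initial spread: {spreads[0]:.3f}, after recursion: {spreads[-1]:.3g}")
```

Run with these (assumed) parameters, the estimated spread shrinks toward zero over the generations: information that isn’t in the synthetic training data can never be recovered.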
Just as there are limits to fossil-fueled utopia, nuclear utopia, and perpetual-growth capitalist utopia, there are limits to AI utopia. By the same token, limits may prevent AI from becoming an all-powerful grim reaper.
What will be the real future of AI? Here’s a broad-brush prediction (details are currently unavailable due to my failure to upgrade my crystal ball’s operating system). Over the next few years, corporations and governments will continue to invest rapidly in AI, driven by its ability to cut labor costs. We will become systemically dependent on the technology. AI will reshape society—employment, daily life, knowledge production, education, and wealth distribution. Then, speeding up as it goes, AI will degenerate into a hallucinating, blithering cacophony of little voices spewing nonsense. Real companies, institutions, and households will suffer as a result. Then, we’ll either figure out how to live without AI, or confine it to relatively limited tasks and data sets. America got a small foretaste of this future recently, when Musk-led DOGE fired tens of thousands of federal workers with the expectation of replacing many of them with AI—without knowing whether AI could do their jobs (oops: Thousands are being rehired).
A messy neither-this-nor-that future is not what you’d expect if you spend time reading documents like “AI 2027,” five industry insiders’ detailed speculative narrative of the imminent AI future, which allows readers to choose the story’s ending. Option A, “slowdown,” leads to a future in which AI is merely an obedient, super-competent helper; while in option B, “race,” humanity is extinguished by an AI-deployed bioweapon because people take up land that could be better used for more data centers. Again, we see the persistent, binary utopia-or-apocalypse stereotype, here presented with impressive (though misleading) specificity.
At the start of this article, I attributed AI utopia-apocalypse discourse to a deep-seated tic in our collective human unconscious. But there’s probably more going on here. In her recent book Empire of AI, tech journalist Karen Hao traces polarized AI visions back to the founding of OpenAI by Sam Altman and Elon Musk. Both were, by turns, dreamers and doomers. Their consistent message: We (i.e., Altman, Musk, and their peers) are the only ones who can be trusted to shepherd the process of AI development, including its regulation, because we’re the only ones who understand the technology. Hao makes the point that messages about both the promise and the peril of AI are often crafted by powerful people seeking to consolidate their control over the AI industry.
Utopia and apocalypse feature prominently in the rhetoric of all cults. It’s no surprise, but still a bit of a revelation, therefore, to hear Hao conclude in a podcast interview that AI is a cult (if it walks, quacks, and swims like a cult... ). And we are all being swept up in it.
So, how should we think about AI in a non-cultish way? In his article, “We Need to Stop Pretending AI Is Intelligent,” Guillaume Thierry, a professor of cognitive neuroscience, writes, “We must stop giving AI human traits.” Machines, even apparently smart ones, are not humans—full stop. Treating them as if they are human will bring dehumanizing results for real, flesh-and-blood people.
The collapse of civilization won’t be AI generated. That’s because environmental-social decline was already happening without any help from LLMs. AI is merely adding a novel factor in humanity’s larger reckoning with limits. In the short run, the technology will further concentrate wealth. “Like empires of old,” writes Karen Hao, “the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.” In the longer run, AI will deplete scarce resources faster.
If AI is unlikely to be the bringer of destruction, it’s just as unlikely to deliver heaven on Earth. Just last week I heard from a writer friend who used AI to improve her book proposal. The next day, I went to my doctor for a checkup, and he used AI to survey my vital signs and symptoms; I may experience better health maintenance as a result. That same day, I read a just-published Apple research paper that concludes LLMs cannot reason reliably. Clearly, AI can offer tangible benefits within some fields of human pursuit. But we are fooling ourselves if we assume that AI can do our thinking for us. If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.
I’m not currently in the job market and therefore can afford to sit on the sidelines and cast judgment on AI. For many others, economic survival depends on adopting the new technology. Finding a personal modus vivendi with new tools that may have dangerous and destructive side effects on society is somewhat analogous to charting a sane and survivable daily path in a nation succumbing to authoritarian rule. We all want to avoid complicity in awful outcomes, while no one wants to be targeted or denied opportunity. Rhetorically connecting AI with dictatorial power makes sense: One of the most likely uses of the new technology will be for mass surveillance.
Maybe the best advice for people concerned about AI would be analogous to advice that democracy advocates are giving to people worried about the destruction of the social-governmental scaffolding that has long supported Americans’ freedoms and rights: Identify your circles of concern, influence, and control; scrutinize your sources of information and tangibly support those with the most accuracy and courage, and the least bias; and forge communitarian bonds with real people.
AI seems to present a spectacular new slate of opportunities and threats. But, in essence, much of what was true before AI remains so now. Human greed and desire for greater control over nature and other people may lead toward paths of short-term gain. But, if you want a good life when all’s said and done, learn to live well within limits. Live with honesty, modesty, and generosity. AI can’t help you with that.
The second Trump administration is deploying new surveillance methods as it seeks to extend its authoritarian power. And one key aspect of that project is the consolidation of the personal information of millions of people in a single place.
Sometime in the late 1980s, I was talking with a friend on my landline (the only kind of telephone we had then). We were discussing logistics for an upcoming demonstration against the Reagan administration’s support for the Contras fighting the elected government of Nicaragua. We agreed that, when our call was done, I’d call another friend, “Mary,” to update her on the plans. I hung up.
But before I could make the call, my phone rang.
“Hi, this is Mary,” my friend said.
“Mary! I was just about to call you.”
“But you did call me,” she said.
“No, I didn’t. My phone just rang, and you were on the other end.”
It was pretty creepy, but that was how surveillance worked in the days of wired telephone systems. Whoever was listening in, most likely someone from the local San Francisco Police Department, had inadvertently caused both lines to ring, while preparing to catch my coming conversation with Mary. Assuming they’d followed the law, arranging such surveillance would have involved a number of legal and technical steps, including securing a wiretapping warrant. They’d have had to create a physical connection between their phones and ours, most likely by plugging into the phone company’s central office.
Government surveillance has come a long way since then, both technically and in terms of what’s legally possible in Donald Trump’s United States and under the John Roberts Supreme Court.
Government agencies have many ways of keeping tabs on us today. The advent of cellular technology has made it so much easier to track where any of us have been, simply by triangulating the locations of the cell towers our phones have pinged along the way.
If you watch police procedurals on television (which I admit to doing more than is probably good for me), you’ll see a panoply of surveillance methods on display, in addition to cellular location data. It used to be only on British shows that the police could routinely rely on video recordings as aids in crime solving. For some decades, the Brits were ahead of us in creating a surveillance society. Nowadays, though, even the detectives on U.S. shows like Law & Order: SVU (heading for its 27th season) can usually locate a private video camera with a sightline to the crime and get its owner to turn over the digital data.
Facial recognition is another technology you’ll see on police dramas these days. It’s usually illustrated by a five-second interval during which dozens of faces appear briefly on a computer monitor. The sequence ends with a final triumphant flourish—a single face remaining on screen, behind a single flashing word: “MATCH.”
We should probably live as if everything we do, even in supposedly “secure” places (real and virtual), is visible to the Trump regime.
I have no idea whether the TV version is what real facial recognition software actually looks like. What I do know is that it’s already being used by federal agencies like Immigration and Customs Enforcement (ICE) and the FBI, under the auspices of a company called Clearview, which is presently led by Hal Lambert, a big Trump fundraiser. As Mother Jones magazine reports, Clearview has “compiled a massive biometric database” containing “billions of images the company scraped off the internet and social media without the knowledge of the platforms or their users.” The system is now used by law enforcement agencies around the country, despite its well-documented inability to accurately recognize the faces of people with dark skin.
The old-fashioned art of tailing suspects on foot is rapidly giving way to surveillance by drone, while a multitude of cameras at intersections capture vehicle license plates. Fingerprinting has been around for well over a century, although it doesn’t actually work on everyone. Old people tend to lose the ridges that identify our unique prints, which explains why I can’t reliably use mine to open my phone or wake my computer. Maybe now’s my moment to embark on a life of crime? Probably not, though, as my face is still pretty recognizable, and that’s what the Transportation Security Administration uses to make sure I’m really the person in the photo on my Real ID.
The second Trump administration is deploying all of these surveillance methods and more, as it seeks to extend its authoritarian power. And one key aspect of that project is the consolidation of the personal information of millions of people in a single place.
It’s been thoroughly demonstrated that, despite its name, Elon Musk’s Department of Government Efficiency has been anything but efficient in reducing “waste, fraud, and abuse” in federal spending. DOGE, however, has made significantly more progress in achieving a less well publicized but equally important objective: assembling into a single federal database the personal details of hundreds of millions of individuals who have contact with the government. Such a database would combine information from multiple agencies, including the IRS and the Social Security Administration. The process formally began in March 2025 when, as The New York Times reported, President Trump signed an executive order “calling for the federal government to share data across agencies.” Such a move, as Times reporters Sheera Frenkel and Aaron Krolik note, raises “questions over whether he might compile a master list of personal information on Americans that could give him untold surveillance power.”
In keeping with the fiction that DOGE’s work is primarily focused on cost cutting, Trump labeled his order “Stopping Waste, Fraud, and Abuse by Eliminating Information Silos.” That fiction provided the pretext for DOGE’s demands that agency after agency grant its minions free access to the most private data they had on citizens and noncitizens alike. As The Washington Post reported in early May:
The U.S. DOGE Service is racing to build a single centralized database with vast troves of personal information about millions of U.S. citizens and residents, a campaign that often violates or disregards core privacy and security protections meant to keep such information safe, government workers say.
Worse yet, it will probably be impossible to follow DOGE’s trail of technological mayhem. As the Post reporters explain:
The current administration and DOGE are bypassing many normal data-sharing processes, according to staffers across 10 federal agencies, who spoke on the condition of anonymity out of fear of retribution. For instance, many agencies are no longer creating records of who accessed or changed information while granting some individuals broader authority over computer systems. DOGE staffers can add new accounts and disable automated tracking logs at several Cabinet departments, employees said. Officials who objected were fired, placed on leave or sidelined.
My own union, the American Federation of Teachers, joined a suit to prevent DOGE from seizing access to Social Security data and won in a series of lower courts. However, on May 31, in a 6-3 ruling, the Supreme Court (with the three liberal justices dissenting) temporarily lifted the block imposed by the lower courts until the case comes back to the justices for a decision on its merits. In the meantime, DOGE can have what it wants from the Social Security Administration. And even if the Supreme Court were ultimately to rule against DOGE, the damage will be done. As the president of El Salvador said in response to an entirely different court ruling, “Oopsie. Too late.”
Anyone who’s ever worked with a database, even one with only a few thousand records, knows how hard it is to keep it organized and clean. There’s the problem of duplicate records (multiple versions of the same person or other items). And that’s nothing compared to the problem of combining information from multiple sources. Even the names of the places where data goes (“fields”) will differ from one base to another. The very structures of the databases and how records are linked together (“relationships”) will differ, too. All of this makes combining and maintaining databases a messy and confusing business. Now imagine trying to combine dozens of idiosyncratically constructed ones with information stretching back decades into one single, clean, useful repository of information. It’s a daunting project.
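The schema-mismatch problem described above can be made concrete with a small sketch (all field names and records here are hypothetical, invented purely for illustration): two agencies store the “same” person under different field names and formats, so a naive concatenation produces duplicates, and deduplication is only possible after both schemas are mapped onto one.

```python
# Hypothetical records: the same person under two agencies' schemas.
irs_records = [
    {"tin": "123-45-6789", "full_name": "SMITH, JANE A."},
]
ssa_records = [
    {"ssn": "123456789", "first": "Jane", "last": "Smith"},
]

def normalize_irs(rec):
    # "SMITH, JANE A." -> last name, then given names
    last, _, rest = rec["full_name"].partition(", ")
    return {"ssn": rec["tin"].replace("-", ""),
            "name": f"{rest.split()[0].title()} {last.title()}"}

def normalize_ssa(rec):
    return {"ssn": rec["ssn"], "name": f"{rec['first']} {rec['last']}"}

# Naive concatenation yields two "people"; only after normalizing
# both schemas onto a shared key can the duplicate be detected.
merged = ([normalize_irs(r) for r in irs_records] +
          [normalize_ssa(r) for r in ssa_records])
unique = {rec["ssn"]: rec for rec in merged}   # last record wins per key
print(len(merged), "records in,", len(unique), "person out")
```

Even this two-record toy needs hand-written normalization rules; scaling that to dozens of decades-old agency databases, each with its own field names, formats, and linkage conventions, is the daunting part.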
And in the case of Trump’s One Big Beautiful Database, that’s where Peter Thiel’s company Palantir comes in. As The New York Times reported recently, at the urging of Elon Musk and DOGE, Trump turned to Palantir to carry out the vision expressed in his March executive order mentioned above. In fact, according to the Times, “at least three DOGE members formerly worked at Palantir, while two others had worked at companies funded by Peter Thiel, an investor and a founder of Palantir.”
Palantir, named for the “seeing stones” described in J.R.R. Tolkien’s Lord of the Rings, is already at work, providing its data platform Foundry to several parts of the government. According to the Times:
The Trump administration has expanded Palantir’s work across the federal government in recent months. The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon. (This does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent.)
Representatives of Palantir are also speaking to at least two other agencies—the Social Security Administration and the Internal Revenue Service—about buying its technology, according to six government officials and Palantir employees with knowledge of the discussions.
Who is Peter Thiel, Palantir’s co-founder? In addition to being a friend of Musk’s, Thiel was an early Trump supporter among the tech elites of Silicon Valley, donating $1.25 million to his 2016 campaign. He is also credited with shaping the political career of Vice President JD Vance, from his campaign to become a senator to his selection as Trump’s running mate. Thiel is part of a rarefied brotherhood of tech and cryptocurrency billionaires who share a commitment to a particular project of world domination by a technological elite. (And if that sounds like the raw material for a crazy conspiracy theory, bear with me again here.) Thiel was also an early funder of Clearview, the facial recognition software mentioned earlier.
In hiring Palantir and turning our data over to the company, Trump makes himself a useful tool, along with Vance, in the service of Thiel’s vision—just as he has been to the machinations of Project 2025’s principal author Russell Vought, who has different, but no less creepy dreams of domination.
Thiel and his elite tech bros, including Musk, Internet pioneer and venture capitalist Marc Andreessen, and Clearview founder Hoan Ton-That, share a particular philosophy. Other believers include figures like fervent Trump supporter Steve Bannon and Vice President Vance. This explicitly anti-democratic worldview goes by various names, including the “neo-reactionary movement” and the “Dark Enlightenment.”
Its founder is a software developer and political blogger named Curtis Yarvin, who has advocated replacing a “failed” democratic system with an absolute monarchy. Describing the Dark Enlightenment in The Nation magazine in October 2022, Chris Lehman observed that, in his run for Senate, JD Vance had adopted “a key plank of [Yarvin’s] plan for post-democratic overhaul—the strongman plan to ‘retire all government employees,’ which goes by the jaunty mnemonic ‘RAGE.’” (Any similarity to Musk’s DOGE is probably not coincidental.)
So, what is the Dark Enlightenment? It’s the negative image of an important intellectual movement of the 17th and 18th centuries, the Enlightenment, whose principles formed, among other things, the basis for American democracy. These included such ideas as the fundamental equality of all human beings, the view that government derives its authority from the consent of the governed, and the existence of those “certain unalienable rights” mentioned in the U.S. Declaration of Independence.
The Dark Enlightenment explicitly opposes all of those and more. Lehman put it this way: “As Yarvin envisions it, RAGE is the great purge of the old operating system that clears the path for a more enlightened race of technocrats to seize power and launch the social order on its rational course toward information-driven self-realization.” That purge would necessarily produce “collateral casualties,” which would include “the nexus of pusillanimous yet all-powerful institutions Yarvin has dubbed ‘the Cathedral’—the universities, the elite media, and anything else that’s fallen prey to liberal perfidy.” Of course, we’ve already seen at least a partial realization of just such goals in Trump’s focused attacks on universities, journalists, and that collection of values described as diversity, equity, and inclusion.
On that last point, it should be noted that Yarvin and his followers also tended to be adherents of an “intellectual” current called “human biological diversity” championed by Steven Sailer, another Yarvin acolyte. That phrase has been appropriated by contemporary proponents of what used to be called eugenics, or scientific racism. It’s Charles Murray’s 1994 pseudo-scientific Bell Curve dressed up in high-flown pseudo-philosophy.
However, there’s more to the Dark Enlightenment than authoritarianism and racism. One stream, populated especially by Thiel and other tech bros, has an eschatology of sorts. This theology of the Earth’s end-times holds that elite humans will eventually (perhaps even surprisingly soon) achieve eternal life through physical communion with machines, greatly augmenting their capacities through artificial intelligence. That’s important to them because they’ve given up on the Earth. This planet is already too small and used up to sustain human life for long, they feel. Hence, our human destiny is instead to rule the stars. This is the theology underlying Elon Musk’s hunger for Mars. Anything that stands in the way of such a destiny must and shall be swept away on the tide of a tech bros future. (For an excellent explication of the full worldview shared by such would-be masters of the rest of us—and the rest of the universe as well—take a look at Adam Becker’s new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.)
Back in my own corner of the world, the San Francisco Police Department has come a long way since those ancient days of clumsy phone tapping. Recently, a cryptocurrency billionaire, Chris Larsen, gave the SFPD $9.4 million to upgrade its surveillance tech. They’ll use the money to outfit a new Real Time Investigation Center (RTIC) with all the latest toys. “We’re going to be covering the entire city with drones,” claimed RTIC representative Captain Thomas MacGuire. Imagine my joyful anticipation!
How should defenders of democracy respond to the coming reality of near-constant, real-time government surveillance? We can try to shrink and hide, of course, but that only does their job for them, by driving us into a useless underground. Instead, we should probably live as if everything we do, even in supposedly “secure” places (real and virtual), is visible to the Trump regime. Our response must be to oppose Trump’s onrushing version of American fascism as boldly and openly as we can. Yes, some of us will be harassed, imprisoned, or worse, but ultimately, the only answer to mass surveillance by those who want to be our overlords is open, mass defiance.
A group of Democratic lawmakers on Monday pressed the CEO of Palantir Technologies about the company's hundreds of millions of dollars in recent federal contracts and reporting that the big data analytics specialist is helping the government build a "mega-database" of Americans' private information in likely violation of multiple laws.
Citing New York Times reporting from late last month examining the Colorado-based tech giant's hundreds of millions of dollars in new government contracts during the second term of U.S. President Donald Trump, Sen. Ron Wyden (D-Ore.) and Rep. Alexandria Ocasio-Cortez (D-N.Y.) led a letter to Palantir CEO Alex Karp demanding answers regarding reports that the company "is amassing troves of data on Americans to create a government-wide, searchable 'mega-database' containing the sensitive taxpayer data of American citizens."
NEW: It looks like Palantir is helping Trump build a mega-database of Americans' private information so he can target and spy on his enemies, or anyone. @aoc.bsky.social and I are demanding answers directly from Palantir.
— Senator Ron Wyden (@wyden.senate.gov) June 17, 2025 at 7:10 AM
The letter continues:
According to press reports, Palantir employees have reportedly been installed at the Internal Revenue Service (IRS), where they are helping the agency use Palantir's software to create a "single, searchable database" of taxpayer records. The sensitive taxpayer data compiled into this Palantir database will likely be shared throughout the government regardless of whether access to this information will be related to tax administration or enforcement, which is generally a violation of federal law. Palantir's products and services were reportedly selected for this brazenly illegal project by Elon Musk's Department of Government Efficiency (DOGE).
Several DOGE members are former Palantir employees.
The lawmakers called the prospect of Americans' data being shared across federal agencies "a surveillance nightmare that raises a host of legal concerns, not least that it will make it significantly easier for Donald Trump's administration to spy on and target his growing list of enemies and other Americans."
"We are concerned that Palantir's software could be used to enable domestic operations that violate Americans' rights," the letter states. "Donald Trump has personally threatened to arrest the governor of California, federalized National Guard troops without the consent of the governor for immigration raids, deployed active-duty Marines to Los Angeles against the wishes of local and state officials, condoned violence against peaceful protestors, called the independent press 'the enemy of the people,' and abused the power of the federal government in unprecedented ways to punish people and institutions he dislikes."
"Palantir's troubling assistance to the Trump administration is not limited to its work for the IRS," the letter notes, highlighting the company's role in Immigration and Customs Enforcement's mass deportation efforts and deadly U.S. and allied military operations.
The letter does not mention Palantir's involvement in Project Nimbus, a cloud computing collaboration between Israel's military and tech titans Amazon and Google targeted by the No Tech for Apartheid movement over alleged human rights violations. But the lawmakers did note that companies including IBM, Cisco, Honeywell, and others have been complicit in human rights crimes in countries including Nazi Germany, apartheid South Africa, China, Saudi Arabia, and Egypt.
The lawmakers asked Karp to provide a list of all contracts awarded to Palantir, their dollar amount, the federal agencies involved, whether the company has any "red line" regarding human rights violations, and other information.
In addition to Wyden and Ocasio-Cortez, the letter is signed by Sens. Elizabeth Warren (D-Mass.), Jeff Merkley (D-Ore.), and Ed Markey (D-Mass.), and Reps. Summer Lee (D-Pa.), Jim McGovern (D-Mass.), Sara Jacobs (D-Calif.), Rashida Tlaib (D-Mich.), and Paul Tonko (D-N.Y.).