The public must be vigilant about those who claim vigilance as a mandate without bounds. A republic cannot outsource its conscience to machines and contractors.
The feed has eyes. What you share to stay connected now feeds one of the world’s largest surveillance machines. This isn’t paranoia, it’s policy. You do not need to speak to be seen. Every word you read, every post you linger on, every silence you leave behind is measured and stored. The watchers need no warrant—only your attention.
Each post, like, and photograph you share enters a room you cannot see. The visible audience, friends and followers, is only the front row. Behind them sit analysts, contractors, and automated systems that harvest words at scale. Over the last decade, the federal security apparatus has turned public social media into a continuous stream of open-source intelligence. What began as episodic checks for imminent threats matured into standing watch floors, shared databases, and automated scoring systems that never sleep. The rationale is familiar: national security, fraud prevention, situational awareness. The reality is starker: Everyday conversation now runs through a mesh of government and corporate surveillance that treats public speech, and the behavior around it, as raw material.
You do not need to speak to be seen. The act of being online is enough. Every scroll, pause, and click is recorded, analyzed, and translated into behavioral data. Algorithms study not only what we share but what we read and ignore, and how long our eyes linger. Silence becomes signal, and absence becomes information. The watchers often need no warrant for public content or purchased metadata, only your connection. In this architecture of observation, even passivity is participation.
This did not happen all at once. It arrived through privacy impact assessments, procurement notices, and contracts that layered capability upon capability. The Department of Homeland Security (DHS) built watch centers to monitor incidents. Immigration and Customs Enforcement (ICE) folded social content into investigative suites that already pull from commercial dossiers. Customs and Border Protection (CBP) linked open posts to location data bought from brokers. The FBI refined its triage flows for threats flagged by platforms. The Department of Defense and the National Security Agency fused foreign collection and information operations with real-time analytics.
Little of this resembles a traditional wiretap, yet the effect is broader because the systems harvest not just speech but the measurable traces of attention. Most of it rests on the claim that publicly available information is fair game. The law has not caught up with the scale or speed of the tools. The culture has not caught up either.
The next turn of the wheel is underway. ICE plans two round-the-clock social media hubs, one in Vermont and one in California, staffed by private contractors for continuous scanning and rapid referral to Enforcement and Removal Operations. The target turnaround for urgent leads is 30 minutes. That is not investigation after suspicion. That is suspicion manufactured at industrial speed. The new programs remain at the request-for-information stage, yet they align with an unmistakable trend. Surveillance shifts from ad hoc to ambient, from hand search to machine triage, from situational awareness to an enforcement pipeline that links a post to a doorstep.
Artificial intelligence makes the expansion feel inevitable. Algorithms digest millions of posts per hour. They perform sentiment analysis, entity extraction, facial matching, and network mapping. They learn from the telemetry that follows a user: time on page, scroll depth, replay of a clip, the cadence of a feed. They correlate a pseudonymous handle with a résumé, a family photo, and a travel record. Data brokers fill in addresses, vehicles, and associates. What once took weeks now takes minutes. Scale is the selling point. It is also the danger. Misclassification travels as fast as truth, and error at scale becomes a kind of policy.
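To make the abstraction concrete, here is a toy sketch, not any agency's or platform's actual pipeline: a few hypothetical passive-telemetry events (a handle, a topic, seconds of dwell time) reduced to a ranked per-user profile. No post is ever written, yet the output already reads like a dossier of concerns.

from collections import defaultdict

# Hypothetical passive-telemetry events: (user_handle, topic, dwell_seconds).
events = [
    ("quiet_reader", "protest_organizing", 48.0),
    ("quiet_reader", "sports", 3.5),
    ("quiet_reader", "protest_organizing", 95.0),
    ("casual_user", "sports", 60.0),
]

# Accumulate dwell time per user and topic; reading alone is enough.
profiles = defaultdict(lambda: defaultdict(float))
for user, topic, dwell in events:
    profiles[user][topic] += dwell

# Rank each user's topics by accumulated attention.
for user, topics in profiles.items():
    ranked = sorted(topics.items(), key=lambda kv: kv[1], reverse=True)
    print(user, ranked)

Run as written, the sketch prints quiet_reader's heaviest attention first, which is the point: the profile is assembled from pauses, not statements.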
George Orwell warned that “to see what is in front of one’s nose needs a constant struggle.” The struggle today is to see how platform design, optimized for engagement, creates the very data that fuels surveillance. Engagement generates signals, signals invite monitoring, and monitoring, once normalized, reshapes speech and behavior. A feed that measures both speech and engagement patterns maps our concerns as readily as our views.
Defenders of the current model say agencies only view public content. That reassurance misses the point. Public is not the same as harmless. Aggregation transforms meaning. When the government buys location histories from data brokers, then overlays them with social content, it tracks lives without ever crossing a courthouse threshold. CBP has done so with products like Venntel and Babel Street, as documented in privacy assessments and Freedom of Information Act releases. A phone that appears at a protest can be matched to a home, a workplace, a network of friends, and an online persona that vents frustration in a late-night post. Add behavioral traces from passive use, where someone lingers and what they never click, and the portrait grows intimate enough to feel like surveillance inside the mind.
The FBI’s posture has evolved as well, particularly after January 6. Government Accountability Office reviews describe changes to how the bureau receives and acts on platform tips, along with persistent questions about the balance between public safety and overreach. The lesson is not that monitoring never helps. The lesson is that systems built for crisis have a way of becoming permanent, especially when they are fed by constant behavioral data that never stops arriving. Permanence demands stronger rules than we currently have.
Meanwhile, the DHS Privacy Office continues to publish assessments for publicly available social media monitoring and situational awareness. These documents describe scope and mitigations, and they reveal how far the concept has stretched. As geospatial, behavioral, and predictive analytics enter the toolkit, awareness becomes analysis, and analysis becomes anticipation. The line between looking and profiling thins because the input is no longer just what we say but what our attention patterns imply.
The First Amendment restrains the state from punishing lawful speech. It does not prevent the state from watching speech at scale, nor does it account for the scoring of attention. That gap produces a chilling effect that is hard to measure yet easy to feel. People who believe they are watched temper their words and their reading. They avoid organizing, and they avoid reading what might be misunderstood. This is not melodrama. It is basic social psychology. Those who already live closer to the line feel the pressure first: immigrants, religious and ethnic minorities, journalists, activists. Because enforcement databases are not neutral, they reproduce historical biases unless aggressively corrected.
Error is not theoretical. Facial recognition has misidentified innocent people. Network analysis has flagged friends and relatives who shared nothing but proximity. A meme or a lyric, stripped of context, can be scored as a threat. Behavioral profiles amplify risk because passivity can be interpreted as intent when reduced to metrics. The human fail-safe does not always work because human judgment is shaped by the authority of data. When an algorithm says possible risk, the cost of ignoring it feels higher than the cost of quietly adding a name to a file. What begins as prudence ends as normalization. What begins as a passive trace ends as a profile.
Fourth Amendment doctrine still leans on the idea that what we expose to the public is unprotected. That formulation collapses when the observer is a system that never forgets and draws inferences from attention as well as expression. Carpenter v. United States recognized a version of this problem for cell-site records, yet the holding has not been extended to the government purchase of similar data from brokers or to the bulk ingestion of content that individuals intend for limited audiences. First Amendment jurisprudence condemns overt retaliation against speakers. It has little to say about surveillance programs that corrode participation, including the act of reading, without ever bringing a case to court. Due process requires notice and an opportunity to contest. There is no notice when the flag is silent and the consequences are dispersed across a dozen small harms, each one deniable. There is no docket for the weight assigned to your pauses.
Wendell Phillips wrote, “Eternal vigilance is the price of liberty.” The line is often used to defend surveillance. It reads differently from the other side of the glass. The public must be vigilant about those who claim vigilance as a mandate without bounds. A republic cannot outsource its conscience to machines and contractors.
You cannot solve a policy failure with personal hygiene, but you can buy time. Treat every post as a public record that might be copied, scraped, and stored. Remove precise locations from images. Turn off facial tagging and minimize connections between accounts. Separate roles. If you organize, separate that work from family and professional identities with different emails, phone numbers, and sign-ins. Use two-factor authentication everywhere. Prefer end-to-end encrypted tools like Signal for sensitive conversations. Scrub photo metadata before upload. Search your own name and handles in a private browser, then request removal from data-broker sites. Build a small circle that helps one another keep settings tight and recognize phishing and social engineering. These habits are not retreat. They are discipline.
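For the metadata step, here is a minimal sketch of one common approach, assuming Python with the Pillow library installed (the filenames are hypothetical): it rebuilds the image from pixel data, so EXIF fields such as GPS coordinates, timestamps, and device identifiers are not carried into the copy you upload.

from PIL import Image  # pip install Pillow

def strip_metadata(src_path, dst_path):
    """Save a copy of an image with its pixels but without the EXIF block."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))  # copy pixel data only, not metadata
    clean.save(dst_path)

strip_metadata("vacation.jpg", "vacation_clean.jpg")  # hypothetical filenames

Many phones can also strip location data at share time; the point is simply that the copy which leaves your device should carry less than the original.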
Adopt the same care for reading as for posting. Log out when you can, block third-party trackers, limit platform time, and assume that dwell time and scroll depth are being recorded. Adjust feed settings to avoid autoplay and personalized tracking where possible. Use privacy-respecting browsers and extensions that reduce passive telemetry. Small frictions slow the flow of behavioral data that feeds automated suspicion.
Push outward as well. Read the transparency reports that platforms publish. They reveal how often governments request data and how often companies comply. Support groups that litigate and legislate for restraint, including the Electronic Frontier Foundation, the Brennan Center for Justice, and the Center for Democracy and Technology. Demand specific reforms: warrant requirements for government purchase of location and browsing data, public inventories of social media monitoring contracts and tools, independent audits of watch centers with accuracy and bias metrics, and accessible avenues for redress when the system gets it wrong. Insist on disclosure of passive telemetry collection and retention, not only subpoenas for content.
The digital commons was built on a promise of connection. Surveillance bends that commons toward control. It does so quietly, through dashboards and metrics that reward extraction of both speech and attention. The remedy begins with naming what has happened, then insisting that the rules match the power of the tools. A healthy public sphere allows risk. It tolerates anger and error. It places human judgment above automated suspicion. It restores the burden of proof to the state. It recognizes that attention is speech by another name, and that freedom requires privacy in attention as well as privacy in voice.
You do not need to disappear to stay free. You need clarity, patience, and a stubborn loyalty to truth in a time that rewards distraction. The watchers will say the threat leaves no choice, that vigilance demands vision turned outward. History says freedom depends on the courage to look inward first. The digital world was built as a commons, a place to connect and create, yet it is becoming a hall of mirrors where every glance becomes a record and every silence a signal. Freedom will not survive by accident. It must be practiced—one mindful post, one untracked thought, one refusal to mistake visibility for worth. The right to be unobserved is not a luxury. It is the quiet foundation of every other liberty. Guard even the silence, for in the end it may be the only voice that still belongs to you.
Trump's new executive order on homelessness is not a departure from policy failure. It is the logical continuation of a governance model that confuses erasure with resolution.
There are words that live quietly in the margins of law, waiting for the right conditions to become instruments of control. Vagrancy is one of them. It does not name a crime so much as a condition—a presence deemed out of place, a body detached from property, purpose, or permission. It has always been a word that grants the state an elastic mandate: to sweep, to detain, to erase.
Its history is older than this country. In 14th-century England, following the Black Death, the ruling class faced a labor shortage that briefly shifted the balance of power toward the working poor. Rather than negotiate, they legislated. A series of statutes criminalized idleness and movement, branding those who wandered without employer or land as enemies of order. The offense was not what they did—it was that they could not be accounted for. Vagrancy became a pretext for containment, a tool to bind the body to power, and a signal that survival outside sanctioned structures would not be tolerated.
The word arrived in the Americas with that logic intact and found new utility in a country built on hierarchy and extraction. Across centuries, it was used to arrest freed Black men for walking without proof of employment, to justify the confinement of Indigenous people who had refused removal, to expel Chinese workers labeled as moral contagions, to target queer youth and disabled residents whose lives defied social norms. It appeared on signs and statutes alike, a vague but potent summons of disorder, always defined from above. It did not require action. It required only that someone be seen.
Now, the word has returned—not as metaphor or memory, but as mandate. On July 24, 2025, President Trump signed an executive order titled “Ending Crime and Disorder on America's Streets”—a sweeping directive that promises to fight “vagrancy” and reframes homelessness, addiction, and mental illness not as public health crises or systemic failures, but as threats to civic peace.
The order offers no new housing, no expanded care infrastructure, no commitment to addressing the material conditions that produce displacement. Instead, it offers a rubric for removal. Under its provisions, federal grants from the Departments of Housing and Urban Development, Health and Human Services, Justice, and Transportation will prioritize jurisdictions that criminalize public presence—cities that ban urban camping, prohibit loitering, penalize “urban squatting,” and track individuals deemed out of bounds. Programs that offer harm reduction, low-barrier shelters, or evidence-based treatment models face new restrictions or disqualification. Legal safeguards against involuntary psychiatric commitment are to be rolled back, consent decrees reversed, and behavioral nonconformity redefined as detainable.
This is not a departure from policy failure. It is the logical continuation of a governance model that confuses erasure with resolution. The language remains soft—beautification, humane treatment, restoration—but the infrastructure it supports is hard: surveillance in place of service, confinement in place of care, disappearance in place of dignity. It teaches agencies to measure success not by outcomes but by optics: How many tents are gone? How few bodies remain visible? How fully have we restored the image of control?
Vagrancy persists because it works—not in reducing harm, but in reallocating blame. It shifts public anxiety about inequality, addiction, and disorder away from the systems that produce them and toward the individuals who cannot hide them. It casts the existence of suffering as a provocation and conditions civic belonging on legibility, order, and stillness. In doing so, it grants governments a new kind of authority: the power not simply to punish what people do, but to penalize who they are when no performance is possible.
This order does not restore order. It reinstates a hierarchy of visibility. It tells those without shelter, treatment, or family that the problem is not what they lack—but that they can still be seen. And in doing so, it signals to the rest of us that our security lies in distance, that the absence of suffering from view is proof that it has been addressed. It invites the public to mistake silence for peace, stillness for stability, emptiness for care.
But the history of vagrancy tells a different story. It is a word that rises not in response to crisis, but in response to fear: the fear that the margins might speak, might move, might disrupt the fictions we tell about what this country is and who it serves. When the powerful feel that their order is slipping, they do not ask what has failed. They ask who can be removed.
If there is any hope in this moment, it lies in refusing the comfort of euphemism. This is not about restoration. It is about removal. Not about care, but control. Not about safety, but sightlines.
We do not have to accept the return of vagrancy into our political vocabulary. We can name it for what it is: a centuries-old code for managing the inconvenient poor, repackaged as policy. We can refuse to let language do the work of violence. And we can insist—still, again—that visibility is not disorder, and that survival, even unkempt, even unsanctioned, is not a threat to be eliminated.
It is a truth to be answered. With housing. With care. With courage. And with clarity.
If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.
Recent articles and books about artificial intelligence offer images of the future that align like iron filings around two magnetic poles—utopia and apocalypse.
On one hand, AI is said to be leading us toward a perfect future of ease, health, and broadened understanding. We, aided by our machines and their large language models (LLMs), will know virtually everything and make all the right choices to usher in a permanent era of enlightenment and plenty. On the other hand, AI is poised to thrust us into a future of unemployment, environmental destruction, and delusion. Our machines will gobble scarce resources while churning out disinformation and making deadly weapons that AI agents will use to wipe us out once we’re of no further use to them.
Utopia and apocalypse have long exerted powerful pulls on human imagination and behavior. (My first book, published in 1989 and updated in 1995, was Memories and Visions of Paradise: Exploring the Universal Myth of a Lost Golden Age; it examined the history and meaning of the utopian archetype.) New technologies tend to energize these two polar attractors in our collective psyche because toolmaking and language are humanity’s two superpowers, which have enabled our species to take over the world, while also bringing us to a point of existential peril. New technologies increase some people’s power over nature and other people, producing benefits that, mentally extrapolated forward in time, encourage expectations of a grand future. But new technologies also come with costs (resource depletion, pollution, increased economic inequality, accidents, and misuse) that evoke fears of an ultimate reckoning. Language supercharges our toolmaking talent by enabling us to learn from others; it is also the vehicle for formulating and expressing our hopes and fears. AI, because it is both technological and linguistic, and because it is being adopted at a frantic pace and so disruptively, is especially prone to triggering the utopia-apocalypse reflex.
We humans have been ambivalent about technology at least since our adoption of writing. Tools enable us to steal fire from the gods, like the mythical Prometheus, whom the gods punished with eternal torment; they are the wings of Icarus, who flies too close to the sun and falls to his death. AI promises to make technology autonomously intelligent, thus calling to mind still another cautionary tale, “The Sorcerer’s Apprentice.”
What could go right—or wrong? After summarizing both the utopian and apocalyptic visions for AI, I’ll explore two questions: first, how do these extreme visions help or mislead us in our attempts to understand AI? And second, whom do these visions serve? As we’ll see, there are some early hints of AI’s ultimate limits, which suggest a future that doesn’t align well with many of the highest hopes or deepest fears for the new technology.
As a writer, I generally don’t deliberately use AI. Nevertheless, in researching this article, I couldn’t resist asking Google’s free AI Overview, “What is the utopian vision for AI?” This came back a fraction of a second later:
The utopian vision for AI envisions a future where AI seamlessly integrates into human life, boosting productivity, innovation, and overall well-being. It’s a world where AI solves complex problems like climate change and disease, and helps humanity achieve new heights.
Google Overview’s first sentence needs editing to remove verbal redundancy (vision, envisions), but AI does succeed in cobbling together a serviceable summary of its promoters’ dreams.
The same message is on display in longer form in the article “Visions of AI Utopia” by Future Sight Echo, who informs us that AI will soften the impacts of economic inequality by delivering resources more efficiently and “in a way that is dynamic and able to adapt instantly to new information and circumstances.” Increased efficiency will also reduce humanity’s impact on the environment by minimizing energy requirements and waste of all kinds.
But that’s only the start. Education, creativity, health and longevity, translation and cultural understanding, companionship and care, governance and legal representation—all will be revolutionized by AI.
There is abundant evidence that people with money share these hopes for AI. The hottest stocks on Wall Street (notably Nvidia) are AI-related, as are many of the corporations that contribute significantly to the NPR station I listen to in Northern California, thereby gaining naming rights at the top of the hour.
Capital is being shoveled in the general direction of AI so rapidly (roughly $300 billion just this year, in the U.S. alone) that, if its advertised potential is even half believable, we should all rest assured that most human problems will soon vanish.
Or will they?
Strangely, when I initially asked Google’s AI, “What is the vision for AI apocalypse?”, its response was, “An AI Overview is not available for this search.” Maybe I didn’t word my question well. Or perhaps AI sensed my hostility. Full disclosure: I’ve gone on record calling for AI to be banned immediately. (Later, AI Overview was more cooperative, offering a lengthy summary of “common themes in the vision of an AI apocalypse.”) My reason for proposing an AI ban is that AI gives us humans more power, via language and technology, than we already have; and that, collectively, we already have way too much power vis-à-vis the rest of nature. We’re overwhelming ecosystems through resource extraction and waste dumping to such a degree that, if current trends continue, wild nature may disappear by the end of the century. Further, the most powerful humans are increasingly overwhelming everyone else, both economically and militarily. Exerting our power more intelligently probably won’t help, because we’re already too smart for our own good. The last thing we should be doing is to cut language off from biology so that it can exist entirely in a simulated techno-universe.
Let’s be specific. What, exactly, could go wrong because of AI? For starters, AI could make some already bad things worse—in both nature and society.
There are many ways in which humanity is already destabilizing planetary environmental systems; climate change is the way that’s most often discussed. Through its massive energy demand, AI could accelerate climate change by generating more carbon emissions. According to the International Energy Agency, “Driven by AI use, the U.S. economy is set to consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminum, steel, cement, and chemicals.” The world also faces worsening water shortages; AI needs vast amounts. Nature is already reeling from humanity’s accelerating rates of resource extraction and depletion. AI requires millions of tons of copper, steel, cement, and other raw materials, and suppliers are targeting Indigenous lands for new mines.
We already have plenty of social problems, too, headlined by worsening economic inequality. AI could widen the divide between rich and poor by replacing lower-skilled workers with machines while greatly increasing the wealth of those who control the technology. Many people worry that corporations have gained too much political influence; AI could accelerate this trend by making the gathering and processing of massive amounts of data on literally everyone cheaper and easier, and by facilitating the consolidation of monopolies. Unemployment is always a problem in capitalist societies, but AI threatens to throw millions of white-collar workers off payrolls in short order: Anthropic CEO Dario Amodei predicts that AI could eliminate half of entry-level white-collar jobs within five years, while Bill Gates forecasts that only three job fields will survive AI—energy, biology, and AI system programming.
However, the most horrific visions for AI go beyond just making bad things worse. The title of a recent episode of The Bulwark Podcast, “Will Sam Altman and His AI Kill Us All?”, states the worst-case scenario bluntly. But how, exactly, could AI kill us all? One way is by automating military decisions while making weapons cheaper and more lethal (a recent Brookings commentary was titled, “How Unchecked AI Could Trigger a Nuclear War”). Veering toward dystopian sci-fi, some AI philosophers opine that the technology, once it’s significantly smarter than people, might come to view biological humans as pointless wasters of resources that machines could use more efficiently. At that point, AI could pursue multiple pathways to terminate humanity.
I don’t know the details of how AI will unfold in the months and years to come. But the same could be said for AI industry leaders. They certainly understand the technology better than I do, but their AI forecasts may miss a crucial factor. You see, I’ve trained myself over the years to look for limits in resources, energy, materials, and social systems. Most people who work in the fields of finance and technology tend to ignore limits, or even to believe that there are none. This leads them to absurdities, such as Elon Musk’s expectation of colonizing Mars. Earth is finite, humans will be confined to this planet forever, and therefore lots of things we can imagine doing just won’t happen. I would argue that discussions about AI’s promise and peril need a dose of limits awareness.
Arvind Narayanan and Sayash Kapoor, in an essay titled “AI Is Normal Technology,” offer some of that awareness. They argue that AI development will be constrained by the speed of human organizational and institutional change and by “hard limits to the speed of knowledge acquisition because of the social costs of experimentation.” However, the authors do not take the position that, because of these limits, AI will have only minor impacts on society; they see it as an amplifier of systemic risks.
In addition to the social limits Narayanan and Kapoor discuss, there will also (as mentioned above) be environmental limits to the energy, water, and materials that AI needs, a subject explored at a recent conference.
Finally, there’s a crucial limit to AI development that’s inherent in the technology itself. Large language models need vast amounts of high-quality data. However, as more information workers are replaced by AI, or start using AI to help generate content (both trends are accelerating), more of the data available to AI will be AI-generated rather than being produced by experienced researchers who are constantly checking it against the real world. Which means AI could become trapped in a cycle of declining information quality. Tech insiders call this “AI model collapse,” and there’s no realistic plan to stop it. AI itself can’t help.
In his article “Some Signs of AI Model Collapse Begin to Reveal Themselves,” Steven J. Vaughan-Nichols argues that this is already happening. There have been widely reported instances of AI inadvertently generating fake scientific research documents. The Chicago Sun-Times recently published a “Best of Summer” feature that included forthcoming novels that don’t exist. And the Trump administration’s widely heralded “Make America Healthy Again” report included citations (evidently AI-generated) for non-existent studies. Most of us have come to expect that new technologies will have bugs that engineers will gradually remove or work around, resulting in improved performance. With AI, errors and hallucination problems may just get worse, in a cascading crescendo.
Just as there are limits to fossil-fueled utopia, nuclear utopia, and perpetual-growth capitalist utopia, there are limits to AI utopia. By the same token, limits may prevent AI from becoming an all-powerful grim reaper.
What will be the real future of AI? Here’s a broad-brush prediction (details are currently unavailable due to my failure to upgrade my crystal ball’s operating system). Over the next few years, corporations and governments will keep investing rapidly in AI, driven by its ability to cut labor costs. We will become systemically dependent on the technology. AI will reshape society—employment, daily life, knowledge production, education, and wealth distribution. Then, speeding up as it goes, AI will degenerate into a hallucinating, blithering cacophony of little voices spewing nonsense. Real companies, institutions, and households will suffer as a result. Then we’ll either figure out how to live without AI, or confine it to relatively limited tasks and data sets. America got a small foretaste of this future recently, when Musk-led DOGE fired tens of thousands of federal workers with the expectation of replacing many of them with AI—without knowing whether AI could do their jobs (oops: Thousands are being rehired).
A messy neither-this-nor-that future is not what you’d expect if you spend time reading documents like “AI 2027,” five industry insiders’ detailed speculative narrative of the imminent AI future, which allows readers to choose the story’s ending. Option A, “slowdown,” leads to a future in which AI is merely an obedient, super-competent helper; while in option B, “race,” humanity is extinguished by an AI-deployed bioweapon because people take up land that could be better used for more data centers. Again, we see the persistent, binary utopia-or-apocalypse stereotype, here presented with impressive (though misleading) specificity.
At the start of this article, I attributed AI utopia-apocalypse discourse to a deep-seated tic in our collective human unconscious. But there’s probably more going on here. In her recent book Empire of AI, tech journalist Karen Hao traces polarized AI visions back to the founding of OpenAI by Sam Altman and Elon Musk. Both were, by turns, dreamers and doomers. Their consistent message: We (i.e., Altman, Musk, and their peers) are the only ones who can be trusted to shepherd the process of AI development, including its regulation, because we’re the only ones who understand the technology. Hao makes the point that messages about both the promise and the peril of AI are often crafted by powerful people seeking to consolidate their control over the AI industry.
Utopia and apocalypse feature prominently in the rhetoric of all cults. It’s no surprise, but still a bit of a revelation, therefore, to hear Hao conclude in a podcast interview that AI is a cult (if it walks, quacks, and swims like a cult...). And we are all being swept up in it.
So, how should we think about AI in a non-cultish way? In his article, “We Need to Stop Pretending AI Is Intelligent,” Guillaume Thierry, a professor of cognitive neuroscience, writes, “We must stop giving AI human traits.” Machines, even apparently smart ones, are not humans—full stop. Treating them as if they are human will bring dehumanizing results for real, flesh-and-blood people.
The collapse of civilization won’t be AI generated. That’s because environmental-social decline was already happening without any help from LLMs. AI is merely adding a novel factor in humanity’s larger reckoning with limits. In the short run, the technology will further concentrate wealth. “Like empires of old,” writes Karen Hao, “the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.” In the longer run, AI will deplete scarce resources faster.
If AI is unlikely to be the bringer of destruction, it’s just as unlikely to deliver heaven on Earth. Just last week I heard from a writer friend who used AI to improve her book proposal. The next day, I went to my doctor for a checkup, and he used AI to survey my vital signs and symptoms; I may experience better health maintenance as a result. That same day, I read a just-published Apple research paper that concludes LLMs cannot reason reliably. Clearly, AI can offer tangible benefits within some fields of human pursuit. But we are fooling ourselves if we assume that AI can do our thinking for us. If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.
I’m not currently in the job market and therefore can afford to sit on the sidelines and cast judgment on AI. For many others, economic survival depends on adopting the new technology. Finding a personal modus vivendi with new tools that may have dangerous and destructive side effects on society is somewhat analogous to charting a sane and survivable daily path in a nation succumbing to authoritarian rule. We all want to avoid complicity in awful outcomes, while no one wants to be targeted or denied opportunity. Rhetorically connecting AI with dictatorial power makes sense: One of the most likely uses of the new technology will be for mass surveillance.
Maybe the best advice for people concerned about AI would be analogous to advice that democracy advocates are giving to people worried about the destruction of the social-governmental scaffolding that has long supported Americans’ freedoms and rights: Identify your circles of concern, influence, and control; scrutinize your sources of information and tangibly support those with the most accuracy and courage, and the least bias; and forge communitarian bonds with real people.
AI seems to present a spectacular new slate of opportunities and threats. But, in essence, much of what was true before AI remains so now. Human greed and desire for greater control over nature and other people may lead toward paths of short-term gain. But, if you want a good life when all’s said and done, learn to live well within limits. Live with honesty, modesty, and generosity. AI can’t help you with that.