

“This is really setting a precedent,” said one activist. “This is something that other communities can look to.”
The nationwide backlash against the artificial intelligence industry entered a new stage on Tuesday after a small Wisconsin city overwhelmingly passed a first-of-its-kind referendum limiting AI data center construction.
According to a Wednesday report in Politico, voters in the Milwaukee suburb of Port Washington, home to roughly 12,000 residents, supported the data center restrictions by a margin of around 2-to-1.
The referendum requires town officials to seek voter permission before approving or providing tax incentives for any future data centers in the community, giving residents veto power over new projects.
Port Washington is already home to a $15 billion, 1.3-gigawatt data center funded by tech giants Oracle and OpenAI, and local residents wanted to ensure that no additional facilities are greenlit without their express approval.
The referendum was pushed by a grassroots community organization called Great Lakes Neighbors United, which advocates "advancing transparency, environmental stewardship, and responsible development in Wisconsin."
Christine Le Jeune, founder of Great Lakes Neighbors United, told Politico that she hopes the work done limiting AI facilities' construction can be replicated nationwide.
“This is really setting a precedent,” Le Jeune said. “This is something that other communities can look to.”
Politico noted that similar anti-data center measures are coming up for votes later this year in communities across the US, including in Monterey Park, California; Augusta Township, Michigan; and Janesville, Wisconsin.
Opposition to AI data centers has become a major political issue in recent months, as local residents have objected to the large facilities consuming massive amounts of electricity and water, while also generating significant noise pollution.
Data centers also put a major strain on the US electrical grid, causing a spike in utility bills across the country. PJM Interconnection, the largest US grid operator that serves over 65 million people across 13 states, projected earlier this year that it will be a full six gigawatts short of its reliability requirements in 2027 thanks to the demands of data centers.
Sen. Bernie Sanders (I-Vt.) and Rep. Alexandria Ocasio-Cortez (D-NY) introduced a bill in March that would impose a nationwide moratorium on AI data center construction “until strong national safeguards are in place to protect workers, consumers, and communities, defend privacy and civil rights, and ensure these technologies do not harm our environment.”
At the same time, the AI industry is planning on spending big money in 2026 to influence elections, with the goal of passing legislation setting a single set of AI regulations that will take effect throughout the US, overriding any restrictions placed on the technology by state governments.
CNN reported in February that Leading the Future—a super political action committee (PAC) backed by venture capital firm Andreessen Horowitz and Palantir co-founder Joe Lonsdale—is pledging to spend at least $100 million to ensure AI-friendly candidates get elected to Congress this year.
How quickly is the Pentagon moving toward handing the nuclear keys over to AI systems and Big Tech? No one really knows.
Can we possibly get away from AI’s ubiquitous presence in our lives? As long as AI is in our faces 24/7, it’s time to start seriously pushing back against its outsized and overwhelming influence. Troubling stories tumble out of the media daily. Employees at a major fast-food chain must now wear AI headsets that tell them how friendly they’re being to customers and coach them on their work. (AI is now posing as our servant, but in the years ahead, will the dynamic be reversed?)
And then there is the looming data center controversy, with Big Tech companies rapidly taking over huge swaths of land across the US to build massive and environmentally unfriendly data centers. Fortunately, this trend is now emerging as a campaign issue given early and cascading effects on electricity prices. In general, AI is having a tough year in the court of public opinion. Witness this cover story in a recent issue of Time magazine: “The People vs AI.” The article noted that “a growing cross section of the public—from MAGA loyalists to Democratic socialists, pastors to policymakers, nurses to filmmakers—agree on at least one thing: AI is moving too fast…. A 2025 Pew poll found… the public thinks AI will worsen our ability to think creatively, form meaningful relationships, and make difficult decisions.” Along with Immigration and Customs Enforcement-related pushback, a spontaneous wellspring of grassroots activism appears to be bubbling up against the AI juggernaut and the patently undemocratic backdoor power grab by technocrats and the companies behind them.
One of the greatest concerns in the public sphere is AI’s rapid incorporation into present and future military campaigns. This is actively being encouraged by the Trump administration’s decision to give AI companies free rein to develop their products with minimal regulation and oversight. This is an existential train wreck waiting to happen, and it came into striking focus in the monthslong dispute between AI company Anthropic and the Pentagon. Although the Pentagon was already using the Claude platform, Secretary of War Pete Hegseth was unhappy with the company’s refusal to let it be used to remove human decision-making from military operations and to support accelerated mass surveillance of US citizens.
Anthropic’s move was that rarity in Big Tech circles, a strong and principled ethical stand against an administration that doesn’t seem to know what that is. Happy warrior Hegseth then branded the company as a “supply chain risk,” effectively banning further use by the Pentagon and punishing the company’s overall viability in the non-defense marketplace as well. Ever the opportunist, the CEO of OpenAI, Sam Altman, then jumped in to offer his AI platform to do what Anthropic wouldn’t. The matter is now in the courts.
Using AI to create what are called autonomous systems represents a quantum leap in the rapidly advancing business of modern weaponry. Paradoxically, weapons technology is being simultaneously downsized through the use of drones and smaller and sophisticated high-tech devices (such as mine sniffers) and upsized with the use of the AI systems designed to manage and control them.
This raises the very troubling picture of wars being conducted without much human oversight. It’s probably one reason even high-profile AI influencers and Big Tech CEOs have admitted (sometimes a little too casually) that the technology could destroy humanity given the right set of circumstances. While autonomous systems can apply to stand-alone weapons such as killer robots, the most worrying concern relates to the Pentagon’s desire to build and deploy command-and-control systems that remove military officers from the split-second decisions that need to be made in warfare. And yes, that includes nuclear weapons.
If AI is truly as superintelligent (and sentient) as its Big Tech proponents claim it is, then these systems should also be smart enough to refuse to participate in any projects that could degrade or destroy life on the planet.
How quickly is the Pentagon moving toward handing the nuclear keys over to AI systems and Big Tech? No one really knows. When questioned by a reporter on the matter, one senior official in the Trump administration weakly demurred, “The administration supports the need to maintain human control over nuclear weapons.”
AI experts and strategic thinkers say that a big driver of this process is that America’s top nuclear adversaries—Russia and China—are already using AI in their command-and-control systems. These developments are happening at lightning speed and are being further propelled by Epic Fury, the first AI-fueled war in US history. And let’s not be too laudatory about Anthropic: Its Claude system has been integrated with Palantir’s Maven to identify military targets. The Pentagon is still investigating whether Maven played any part in the horrific event in which a US Tomahawk missile struck a girls’ elementary school, killing more than 165 people.
What madness is this? By what shallow calculus can a handful of powerful individuals or shadowy organizations decide or even risk the fate of humanity? How do we put all of this dangerous thinking at the highest levels of our government into some kind of perspective that correlates with common sense and basic human decency? In our trajectory toward what some have called techno-feudalism, we have this apparent plunge into barbarity coupled with a powerful array of tools to accelerate it. When nuclear activist Helen Caldicott warned that Western civilization is “sleepwalking into Armageddon,” it was perhaps this particular kind of blindness that she had in mind. And the brilliant sociobiologist E.O. Wilson’s profound observation also springs to mind: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology. And it is terrifically dangerous.”
The rush to deploy AI as large-scale weaponry with every bit as much destructive potential as our existing nuclear arsenal is a tip-off to the deeper motivations behind its development. In the meantime, some obvious questions need to be asked. Why aren’t government and academic institutions eager to apply these advanced AI tools to the many intractable problems that characterize the global polycrisis, such as climate change or the better distribution of scarce resources, including food and water? Where are the urgent calls from those who serve in Congress to do so? And why don’t we see headlines like “Harvard Inaugurates $100 Million AI Project to Address Climate Change”?
It seems pretty clear that AI justifications coming from both the administration and Congress (not to mention the establishment commentariat that serves them) invariably gravitate to enhancing corporate productivity or military use. And it’s equally clear that AI will also serve as yet another powerful mechanism of wealth transfer to the 1% and, either knowingly or unknowingly, act as a chaos agent in an increasingly unstable multipolar geopolitical world. If AI is truly as superintelligent (and sentient) as its Big Tech proponents claim it is, then these systems should also be smart enough to refuse to participate in any projects that could degrade or destroy life on the planet. I don’t see any evidence of this. Sadly, it looks like we may have to once again learn the hard way that information, knowledge, and wisdom are all very different things. And that while knowledge can be appropriated by powerful computers, wisdom never will be.
If AI is to fulfill its transformative potential, its benefits must be more equitably distributed, and its environmental costs more transparently accounted for.
Critics are buzzing about Jeff Bezos and Lauren Sánchez’s estimated $5 million Met Gala sponsorship, noting that while framed as philanthropy, it also serves as elite branding and may deliver limited benefit to the broader arts. A similar pattern appears in tech, where highly publicized giving, grants, and initiatives build brand visibility while directing relatively little to wider communities.
As an anthropologist who studies US corporations, I have seen firsthand how technology firms including Amazon, Google, and Microsoft frequently present themselves as catalysts for economic development and employment opportunity. Large-scale initiatives are framed as serving the public interest, yet evidence reveals a persistent gap between these narratives and their material outcomes. Promised benefits such as job creation, regional development, and infrastructure investment tend to be unevenly distributed or shorter in duration than initially suggested.
Research on data centers underscores these concerns. Although construction phases generate temporary employment, long-term job creation is modest—often fewer than 200 permanent positions per facility. At the same time, AI infrastructure development places significant demands on land, energy, and water resources, and depends on extractive supply chains for minerals such as cobalt and lithium. The result is an extractive industry in which financial gains accrue primarily to tech investors, while the environmental and economic burdens are borne by local communities.
Recent projects across the United States make these dynamics visible. In Indiana, Bezos’ Amazon cleared 1,200 acres of farmland to build an $11 billion data center for training artificial intelligence models. In Luzerne County, Pennsylvania, Amazon bought formerly agricultural land near a nuclear power plant on the Susquehanna River. Across the country, Gates’ Microsoft has advanced controversial data center projects despite local opposition over environmental strain, including in Michigan and Wisconsin.
Designating data centers as critical infrastructure should not exempt companies from regulatory oversight or fair contributions to the communities in which they operate.
Taken together, these cases point to the broader policy challenge of how to evaluate and govern technology infrastructure projects that are framed as public goods but function within extractive economic models.
Philanthropic initiatives often accompany these developments, shaping public perception of investors’ generosity, but leaving underlying dynamics unchanged. Bezos’ Earth Fund, for example, has directed billions toward climate-related efforts, but much of that funding supports technology that benefits his companies. Similarly, Bill Gates’ climate philanthropy has prioritized large-scale technological interventions, including proposals such as spraying sulfur dioxide into the stratosphere to dim sunlight and lower global temperatures—but scientists warn that such approaches carry significant risks for both public health and ecological systems.
Federal policy is accelerating the problem. President Donald Trump has declared a national emergency related to energy production and encouraged private investments in energy industries. Within this framework, data centers are now designated as critical to national security, given the role of AI in military and defense systems.
However, while federal policy actively courts investment, the communities hosting this infrastructure are often excluded from meaningful participation in its benefits.
At the state level, data center developers aggressively pursue and often secure substantial tax incentives as jurisdictions compete to attract investment. Indiana alone could forgo up to $1 billion in tax revenue. Pennsylvania has yet to fully assess the fiscal impact of similar agreements. In Virginia and other states, data center operators are exempt from sales taxes on equipment and electricity, further reducing public returns.
The concentration of wealth and environmental burden extends beyond US borders. KoBold Metals, an AI-driven mineral exploration company backed by both Bill Gates and Jeff Bezos, is expanding operations in the Democratic Republic of Congo. Using laser technology, the company seeks deposits of cobalt, copper, nickel, and lithium—materials essential to batteries and AI infrastructure. The Congo currently supplies about 76% of the world’s cobalt, placing it at the center of the global technology economy.
While such projects may generate economic opportunities, they also reproduce familiar patterns. As with data center development in the United States, claims of job creation and regional development warrant careful scrutiny, particularly in contexts marked by historical inequality and resource extraction.
Artificial intelligence and data infrastructure are now central to economic competitiveness and national security, and these priorities are legitimate. However, if AI is to fulfill its transformative potential, its benefits must be more equitably distributed, and its environmental costs more transparently accounted for. Designating data centers as critical infrastructure should not exempt companies from regulatory oversight or fair contributions to the communities in which they operate. Nor should philanthropic initiatives be allowed to cloud scientific knowledge and recommendations.
Policy interventions are needed to rebalance these dynamics. To make the AI boom work for the public rather than just private investors, companies must fully disclose their water and energy consumption, so that communities can understand what they are giving up to big data centers. State and local governments should condition tax incentives on measurable public benefits, including a pre-set number of durable jobs and investments in local infrastructure. And voters must hold elected officials accountable—at the ballot box—for these agreements.
Additionally, mechanisms such as royalties or revenue-generating agreements—long applied in extractive industries like oil and natural gas—could ensure that communities hosting data centers receive a meaningful share of the wealth generated. While the federal government captures significant revenue tied to AI economic activity, state and local governments should, too.
If the AI sector is to gain any public legitimacy, it must take responsibility both for the technologies it develops and for the social and environmental consequences of their deployment.