
NASA is testing AI that would avoid cloud cover in satellite imaging, like this 2015 image of a volcanic eruption.

(Photo by NASA/USGS)

If AI Knows the Planet Is Dying, Why Can't It Tell Us Who's Drowning?

AI that monitors planetary health without a justice framework becomes sophisticated surveillance rather than equitable care.

Seven of nine planetary boundaries have been breached, including climate change, biosphere collapse, freshwater depletion and, for the first time, ocean acidification. These boundaries are the vital signs of a planet teetering beyond the range that sustained human civilization for 12,000 years. Alarm bells ring in every chart and graph of the Planetary Health Check 2025, yet our collective response remains inadequate.

Meanwhile, a technological revolution is underway. Artificial intelligence now processes vast satellite datasets to deliver near-real-time indicators of Earth's health. Initiatives from the Potsdam Institute and Stockholm Resilience Centre envision leveraging the latest satellite data and AI to create enhanced Earth monitoring systems, where machine-learning algorithms track carbon dioxide emissions, detect deforestation as it happens, and flag ecosystem stress long before human eyes register the crisis. AI promises faster, more precise environmental intelligence than ever before.

But there is a troubling blind spot in this approach. These powerful systems can quantify atmospheric CO2 down to decimal points, yet they cannot capture which communities suffer first when planetary boundaries break. They report that 22.6% of global land faces freshwater disturbance in streamflow, yet satellite dashboards remain silent on who lacks safe drinking water. They classify aerosol loading as within "safe" global limits even as monsoon disruptions devastate millions of farmers. Precise metrics obscure systemic inequities.

When aerosol pollution over South Asia weakens the monsoon—a lifeline for more than a billion people—satellites detect changing moisture indices but ignore caste-based water access, rural poverty, and entrenched social vulnerabilities that determine who drowns and who survives. Scholars warn of "computational asymmetries" and neocolonial dynamics in AI for climate action, perpetuating power imbalances by extracting information without empowering affected communities.

Moreover, who controls these AI systems? Research centers in Europe and North America design and deploy them. Satellites are launched by NASA, the European Space Agency, and private firms. Datasets and code are often proprietary. Access barriers exclude local researchers and grassroots organizations from meaningful participation. As a result, climate solutions driven by AI risk concentrating power in the same institutions that shaped the crisis rather than democratizing environmental protection.

This is not a call to reject AI in environmental science. On the contrary, these tools can transform early warning systems, improve emissions accounting, and optimize conservation strategies. The challenge lies in embedding justice at their core. We must ask urgent questions: Who has access to the data? Who shapes the algorithms? Who defines the metrics of success? AI that monitors planetary health without a justice framework becomes sophisticated surveillance rather than equitable care.

So How Do We Move Forward?

First, codesign monitoring systems with frontline communities. Indigenous Peoples, smallholder farmers, and residents of informal settlements possess critical local knowledge about changing environmental conditions. Participatory data collection initiatives, community-controlled sensor networks, and open-source platforms can bridge global datasets with ground truth.

Second, adopt data sovereignty principles. Data gathered from the Global South must remain accessible to local stakeholders. Intellectual property should not become a barrier to research and advocacy. Partnerships between Western labs and regional institutions must prioritize capacity building and fair data governance, following frameworks like the CARE Principles for Indigenous Data Governance.

Third, expand AI metrics beyond biophysical variables. Incorporate indicators of social vulnerability, such as income inequality, water access, and health outcomes, to contextualize environmental data. For example, freshwater disturbance indices should be mapped alongside demographic data on marginalized groups.

Finally, dedicate funding to interdisciplinary teams blending Earth system scientists, social scientists, and justice advocates. Building equitable AI systems requires collaboration across domains. Grant programs should support projects that integrate algorithm development with community engagement and policy analysis.

The machines watching our planet's vital signs can tell us when thresholds are crossed. They cannot tell us who pays the price. If AI-driven planetary monitoring is to fulfill its promise, it must be designed to protect everyone, especially the most vulnerable, rather than just refine our awareness of a crisis we're already failing to solve.

Here, justice must guide the next revolution in environmental intelligence.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.