

NASA is testing AI that would avoid cloud cover in satellite imaging, like this 2015 image of a volcanic eruption.
AI that monitors planetary health without a justice framework becomes sophisticated surveillance rather than equitable care.
Seven of nine planetary boundaries have been breached: climate change, biosphere collapse, freshwater depletion and, for the first time, ocean acidification. These boundaries are the vital signs of a planet teetering beyond the range that sustained human civilization for 12,000 years. Alarm bells ring in every chart and graph of the Planetary Health Check 2025, yet our collective response remains inadequate.
Meanwhile, a technological revolution is underway. Artificial intelligence now processes vast satellite datasets to deliver near-real-time indicators of Earth's health. Initiatives from the Potsdam Institute and Stockholm Resilience Centre envision leveraging the latest satellite data and AI to create enhanced Earth monitoring systems, where machine-learning algorithms track carbon dioxide emissions, detect deforestation as it happens, and flag ecosystem stress long before human eyes register the crisis. AI promises faster, more precise environmental intelligence than ever before.
But there is a troubling blind spot in this approach. These powerful systems can quantify atmospheric CO2 down to decimal points, yet they cannot capture which communities suffer first when planetary boundaries break. They report that 22.6% of global land area experiences disturbed streamflow, yet satellite dashboards remain silent on who lacks safe drinking water. They classify aerosol loading as within "safe" global limits even as monsoon disruptions devastate millions of farmers. Precise metrics obscure systemic inequities.
When aerosol pollution over South Asia weakens the monsoon—a lifeline for more than a billion people—satellites detect changing moisture indices but ignore caste-based water access, rural poverty, and entrenched social vulnerabilities that determine who drowns and who survives. Scholars warn of "computational asymmetries" and neocolonial dynamics in AI for climate action, perpetuating power imbalances by extracting information without empowering affected communities.
Moreover, who controls these AI systems? Research centers in Europe and North America design and deploy them. Satellites are launched by NASA, the European Space Agency, and private firms. Datasets and codes are often proprietary. Access barriers exclude local researchers and grassroots organizations from meaningful participation. As a result, climate solutions driven by AI risk concentrating power in the same institutions that shaped the crisis rather than democratizing environmental protection.
This is not a call to reject AI in environmental science. On the contrary, these tools can transform early warning systems, improve emissions accounting, and optimize conservation strategies. The challenge lies in embedding justice at their core. We must ask urgent questions: Who has access to the data? Who shapes the algorithms? Who defines the metrics of success? AI that monitors planetary health without a justice framework becomes sophisticated surveillance rather than equitable care.
First, codesign monitoring systems with frontline communities. Indigenous Peoples, smallholder farmers, and residents of informal settlements possess critical local knowledge about changing environmental conditions. Participatory data collection initiatives, community-controlled sensor networks, and open-source platforms can bridge global datasets with ground truth.
Second, adopt data sovereignty principles. Data gathered from the Global South must remain accessible to local stakeholders. Intellectual property should not become a barrier to research and advocacy. Partnerships between Western labs and regional institutions must prioritize capacity building and fair data governance, following frameworks like the CARE Principles for Indigenous Data Governance.
Third, expand AI metrics beyond biophysical variables. Incorporate indicators of social vulnerability—income inequality, water access, health outcomes—to contextualize environmental data. For example, freshwater disturbance indices should be mapped alongside demographic data on marginalized groups.
Finally, dedicate funding to interdisciplinary teams blending Earth system scientists, social scientists, and justice advocates. Building equitable AI systems requires collaboration across domains. Grant programs should support projects that integrate algorithm development with community engagement and policy analysis.
The machines watching our planet's vital signs can tell us when thresholds are crossed. They cannot tell us who pays the price. If AI-driven planetary monitoring is to fulfill its promise, it must be designed to protect everyone, especially the most vulnerable, rather than just refine our awareness of a crisis we're already failing to solve.
Here, justice must guide the next revolution in environmental intelligence.