If the Global South acts now, it can help build a future where algorithms bridge divides instead of deepening them—where they enable peace, not war.
The world stands on the brink of a transformation whose full scope remains elusive. Just as steam engines, electricity, and the internet each sparked previous industrial revolutions, artificial intelligence is now shaping what has been dubbed the Fourth Industrial Revolution. What sets this new era apart is the unprecedented speed and scale with which AI is being deployed—particularly in the realms of security and warfare, where technological advancement rarely keeps pace with ethics or regulation.
As the United States and its Western allies pour billions into autonomous drones, AI-driven command systems, and surveillance platforms, a critical question arises: Is this arms race making the world safer—or opening the door to geopolitical instability and even humanitarian catastrophe?
The reality is that the West’s focus on achieving military superiority—especially in the digital domain—has sidelined global conversations about the shared future of AI. The United Nations has warned in recent years that the absence of binding legal frameworks for lethal autonomous weapons systems (LAWS) could lead to irreversible consequences. Yet the major powers have largely ignored these warnings, favoring strategic autonomy in developing digital deterrence over any multilateral constraints. The nuclear experience of the 20th century showed how a deterrence-first logic brought humanity to the edge of catastrophe; now, imagine algorithms that can decide to kill in milliseconds, unleashed without transparent global commitments.
So far, it is the nations of the Global South that have borne the heaviest cost of this regulatory vacuum. From Yemen to the Sahel, AI-powered drones have enabled attacks where the line between military and civilian targets has all but disappeared. Human rights organizations report a troubling rise in civilian casualties from drone strikes over the past decade, with no clear mechanisms for compensation or legal accountability. In other words, the Global South is not only absent from decision-making but has become the unintended testing ground for emerging military technologies—technologies often shielded from public scrutiny under the guise of national security.
But this status quo is not inevitable. The Global South—from Latin America and Africa to West and South Asia—is not merely a collection of potential victims. It holds critical assets that can reshape the rules of the game. First, these countries have youthful, educated populations capable of steering AI innovation toward civilian and development-oriented goals, such as smart agriculture, early disease detection, climate crisis management, and universal education. Consider, for instance, the multilateral projects in which Indian specialists are using artificial intelligence in the fight against malaria.
Second, the South possesses a collective historical memory of colonialism and technological subjugation, making it more attuned to the geopolitical dangers of AI monopolies and thus a natural advocate for a more just global order. Third, emerging coalitions—like BRICS+ and the African Union’s digital initiatives—demonstrate that South-South cooperation can facilitate investment and knowledge exchange independently of Western actors.
Still, international political history reminds us that missed opportunities can easily turn into looming threats. If the Global South remains passive during this critical moment, the risk grows that Western dominance over AI standards will solidify into a new form of technological hegemony. This would not merely deepen technical inequality—it would redraw the geopolitical map and exacerbate the global North-South divide. In a world where a handful of governments and corporations control data, write algorithms, and set regulatory norms, non-Western states may find themselves forced to spend their limited development budgets on software licenses and smart weapon imports just to preserve their sovereignty. This siphoning of resources away from health, education, and infrastructure—the cornerstones of sustainable development—would create a vicious cycle of insecurity and underdevelopment.
Breaking out of this trajectory requires proactive leadership by the Global South on three fronts. First, leading nations—such as India, Brazil, Indonesia, and South Africa—should establish a “Friends of AI Regulation” group at the U.N. General Assembly and propose a draft convention banning fully autonomous weapons. The international success of the landmine treaty and the Chemical Weapons Convention shows that even in the face of resistance from great powers, the formation of “soft norms” can pave the way toward binding treaties and increase the political cost of defection.
Second, these countries should create a joint innovation fund to support AI projects in healthcare, agriculture, and renewable energy—fields where benefits are tangible for citizens and where visible success can generate the social capital needed for broader international goals. Third, aligning with Western academics and civil society is vital. The combined pressure of researchers, human rights advocates, and Southern policymakers on Western legislatures and public opinion can help curb the influence of military-industrial lobbies and create political space for international cooperation.
In addition, the Global South must invest in developing its own ethical standards for data use and algorithmic governance to prevent the uncritical adoption of Western models that may worsen cultural risks and privacy violations. Brazil’s 2021 AI ethics framework illustrates that local values can be harmonized with global principles like transparency and algorithmic fairness. Adapting such initiatives at the regional level—through bodies like the African Union or the Shanghai Cooperation Organization—would be a major step toward establishing a multipolar regime in global digital governance.
Of course, this path is not without obstacles. Western powers possess vast economic, political, and media tools to slow such efforts. But history shows that transformative breakthroughs often emerge from resistance to dominant systems. Just as the Non-Aligned Movement in the 1960s expanded the Global South’s agency during the Cold War, the Global South today can spearhead AI regulation to reshape the power-technology equation in favor of a fairer world order.
Ultimately, the central question facing humanity is this: Do we want AI to replicate the militaristic logic of the 20th century—or do we want it to help us confront shared global challenges, from climate change to future pandemics? The answer depends on the political will and bold leadership of countries that hold the world’s majority population and the greatest potential for growth. If the Global South acts now, it can help build a future where algorithms bridge divides instead of deepening them—where they enable peace, not war.
The time for action is now. Silence means ceding the future to entrenched powers. Coordinated engagement, on the other hand, could move AI from a minefield of geopolitical interests to a shared highway of cooperation and human development. This is the mission the Global South must undertake—not just for itself, but for all of humanity.
"If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances."
While many experts agree that artificial intelligence holds tremendous potential for advancing medical science and human health, a group of international doctors and other specialists warned this week that AI "could pose an existential threat to humanity" and called for a moratorium on the development of such technology pending suitable regulation.
Responding to an open letter signed by thousands of experts calling for a pause on the development and deployment of advanced AI technology, pioneering inventor, futurist, and Singularity Group co-founder Ray Kurzweil—who did not sign the letter—said on Wednesday that "there are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields."
However, an analysis by an international group of physicians and related experts published in the latest edition of the peer-reviewed journal BMJ Global Health warns that "while artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic, and security-related determinants of health."
“Health experts call for a halt to self-improving general AI development until regulation catches up. @GlobalHealthBMJ warn of harms to patients, data privacy issues, and a worsening of social and health inequalities, among other potential dangers. https://t.co/7bO970xL4b” —Future of Life Institute, via Twitter
According to the study:
The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm, issues with data privacy and security, and the use of AI in ways that will worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare. One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.
Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned. It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.
The publication's authors highlighted three distinct sets of threats associated with the misuse of AI. The first of these is "the ability of AI to rapidly clean, organize, and analyze massive data sets consisting of personal data, including images."
“Arda of @Identity2_0 on automation within healthcare. Visit https://t.co/JzCTyuarxi to hear more from Arda and Savena of Identity 2.0. #digitaldehumanisation #autonomy #automation #ai #techforgood #teamhuman #healthcare” —Stop Killer Robots, via Twitter
This can be utilized "to manipulate behavior and subvert democracy," the authors explained, citing the role of AI in attempts to subvert the 2013 and 2017 Kenyan elections, the 2016 U.S. presidential race, and the 2017 French presidential contest.
"When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts," the analysis contends.
The second set of threats concerns the development and deployment of lethal autonomous weapons systems—often referred to as "killer robots"—that can select, engage, and destroy human targets without meaningful human control.
The third threat set involves the many millions of jobs that experts predict will be lost due to the widespread deployment of AI technology.
“Tom and Jerry creators predicted job loss due to AI 60 years back. This is likely the outcome when to add Boston Dynamics + GPT powered Context + visual AI. 👉 80% of current jobs we are training our graduates for will not be there in next 10 years. 👉 Massive upskillings and…” —Ashish Dogra, via Twitter
"While there would be many benefits from ending work that is repetitive, dangerous, and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior, including harmful consumption of alcohol and illicit drugs, being overweight, and having lower self-rated quality of life and health and higher levels of depression and risk of suicide," the analysis states.
Furthermore, the paper warns that the threat of self-improving, general-purpose AI—or AGI—is "potentially all-encompassing":
We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power—whether deliberately or not—in ways that could harm or subjugate humans—is real and has to be considered. If realized, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons, and all the digital systems that increasingly run our societies, could well represent the "biggest event in human history."
"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing," the authors stressed. "The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit."
"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI 'arms race,'" the analysis stresses. "It will also require decision-making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest."
"Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation."
"If AI is to ever fulfill its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances," the authors concluded.
The new analysis comes a week after the White House unveiled a plan meant to promote "responsible American innovation in artificial intelligence."
On Wednesday, Data for Progress published a survey showing that more than half of U.S. voters—including 52% of Democrats, 57% of Independents, and 58% of Republicans—believe the United States "should slow down AI progress."
“NEW POLL: Voters are concerned about ChatGPT, and 62% of voters — including majorities of Democrats, Independents, and Republicans — support creating a federal agency to regulate standards for the development and use of AI systems. https://t.co/AkmL5givjZ” —Data for Progress, via Twitter
According to the survey, 62% of voters also support the creation of a federal agency to regulate the development and deployment of AI technology.