If the Global South acts now, it can help build a future where algorithms bridge divides instead of deepening them—where they enable peace, not war.
The world stands on the brink of a transformation whose full scope remains elusive. Just as steam engines, electricity, and the internet each sparked previous industrial revolutions, artificial intelligence is now shaping what has been dubbed the Fourth Industrial Revolution. What sets this new era apart is the unprecedented speed and scale with which AI is being deployed—particularly in the realms of security and warfare, where technological advancement rarely keeps pace with ethics or regulation.
As the United States and its Western allies pour billions into autonomous drones, AI-driven command systems, and surveillance platforms, a critical question arises: Is this arms race making the world safer—or opening the door to geopolitical instability and even humanitarian catastrophe?
The reality is that the West’s focus on achieving military superiority—especially in the digital domain—has sidelined global conversations about the shared future of AI. The United Nations has warned in recent years that the absence of binding legal frameworks for lethal autonomous weapons systems (LAWS) could lead to irreversible consequences. Yet the major powers have largely ignored these warnings, favoring strategic autonomy in developing digital deterrence over any multilateral constraints. The nuclear experience of the 20th century showed how a deterrence-first logic brought humanity to the edge of catastrophe; now, imagine algorithms that can decide to kill in milliseconds, unleashed without transparent global commitments.
So far, it is the nations of the Global South that have borne the heaviest cost of this regulatory vacuum. From Yemen to the Sahel, AI-powered drones have enabled attacks where the line between military and civilian targets has all but disappeared. Human rights organizations report a troubling rise in civilian casualties from drone strikes over the past decade, with no clear mechanisms for compensation or legal accountability. In other words, the Global South is not only absent from decision-making but has become the unintended testing ground for emerging military technologies—technologies often shielded from public scrutiny under the guise of national security.
But this status quo is not inevitable. The Global South—from Latin America and Africa to West and South Asia—is not merely a collection of potential victims. It holds critical assets that can reshape the rules of the game. First, these countries have youthful, educated populations capable of steering AI innovation toward civilian and development-oriented goals, such as smart agriculture, early disease detection, climate crisis management, and universal education. Multilateral projects that have enlisted Indian specialists to fight malaria using artificial intelligence offer one example.
Second, the South possesses a collective historical memory of colonialism and technological subjugation, making it more attuned to the geopolitical dangers of AI monopolies and thus a natural advocate for a more just global order. Third, emerging coalitions—like BRICS+ and the African Union’s digital initiatives—demonstrate that South-South cooperation can facilitate investment and knowledge exchange independently of Western actors.
Still, international political history reminds us that missed opportunities can easily turn into looming threats. If the Global South remains passive during this critical moment, the risk grows that Western dominance over AI standards will solidify into a new form of technological hegemony. This would not merely deepen technical inequality—it would redraw the geopolitical map and exacerbate the global North-South divide. In a world where a handful of governments and corporations control data, write algorithms, and set regulatory norms, non-Western states may find themselves forced to spend their limited development budgets on software licenses and smart weapon imports just to preserve their sovereignty. This siphoning of resources away from health, education, and infrastructure—the cornerstones of sustainable development—would create a vicious cycle of insecurity and underdevelopment.
Breaking out of this trajectory requires proactive leadership by the Global South on three fronts. First, leading nations—such as India, Brazil, Indonesia, and South Africa—should establish a “Friends of AI Regulation” group at the U.N. General Assembly and propose a draft convention banning fully autonomous weapons. The international success of the landmine treaty and the Chemical Weapons Convention shows that even in the face of resistance from great powers, the formation of “soft norms” can pave the way toward binding treaties and increase the political cost of defection.
Second, these countries should create a joint innovation fund to support AI projects in healthcare, agriculture, and renewable energy—fields where benefits are tangible for citizens and where visible success can generate the social capital needed for broader international goals. Third, aligning with Western academics and civil society is vital. The combined pressure of researchers, human rights advocates, and Southern policymakers on Western legislatures and public opinion can help curb the influence of military-industrial lobbies and create political space for international cooperation.
In addition, the Global South must invest in developing its own ethical standards for data use and algorithmic governance to prevent the uncritical adoption of Western models that may worsen cultural risks and privacy violations. Brazil’s 2021 AI ethics framework illustrates that local values can be harmonized with global principles like transparency and algorithmic fairness. Adapting such initiatives at the regional level—through bodies like the African Union or the Shanghai Cooperation Organization—would be a major step toward establishing a multipolar regime in global digital governance.
Of course, this path is not without obstacles. Western powers possess vast economic, political, and media tools to slow such efforts. But history shows that transformative breakthroughs often emerge from resistance to dominant systems. Just as the Non-Aligned Movement expanded the Global South's agency during the Cold War of the 1960s, today the Global South can spearhead AI regulation and reshape the power-technology equation in favor of a fairer world order.
Ultimately, the central question facing humanity is this: Do we want AI to replicate the militaristic logic of the 20th century—or do we want it to help us confront shared global challenges, from climate change to future pandemics? The answer depends on the political will and bold leadership of countries that hold the world’s majority population and the greatest potential for growth. If the Global South acts now, it can help build a future where algorithms bridge divides instead of deepening them—where they enable peace, not war.
The time for action is now. Silence means ceding the future to entrenched powers. Coordinated engagement, on the other hand, could move AI from a minefield of geopolitical interests to a shared highway of cooperation and human development. This is the mission the Global South must undertake—not just for itself, but for all of humanity.
"We are concerned that Palantir's software could be used to enable domestic operations that violate Americans' rights."
A group of Democratic lawmakers on Monday pressed the CEO of Palantir Technologies about the company's hundreds of millions of dollars in recent federal contracts, citing reporting that the big data analytics specialist is helping the government build a "mega-database" of Americans' private information in likely violation of multiple laws.
Citing New York Times reporting from late last month examining the Colorado-based tech giant's hundreds of millions of dollars in new government contracts during the second term of U.S. President Donald Trump, Sen. Ron Wyden (D-Ore.) and Rep. Alexandria Ocasio-Cortez (D-N.Y.) led a letter to Palantir CEO Alex Karp demanding answers regarding reports that the company "is amassing troves of data on Americans to create a government-wide, searchable 'mega-database' containing the sensitive taxpayer data of American citizens."
NEW: It looks like Palantir is helping Trump build a mega-database of Americans' private information so he can target and spy on his enemies, or anyone. @aoc.bsky.social and I are demanding answers directly from Palantir.
— Senator Ron Wyden (@wyden.senate.gov) June 17, 2025 at 7:10 AM
The letter continues:
According to press reports, Palantir employees have reportedly been installed at the Internal Revenue Service (IRS), where they are helping the agency use Palantir's software to create a "single, searchable database" of taxpayer records. The sensitive taxpayer data compiled into this Palantir database will likely be shared throughout the government regardless of whether access to this information will be related to tax administration or enforcement, which is generally a violation of federal law. Palantir's products and services were reportedly selected for this brazenly illegal project by Elon Musk's Department of Government Efficiency (DOGE).
Several DOGE members are former Palantir employees.
The lawmakers called the prospect of Americans' data being shared across federal agencies "a surveillance nightmare that raises a host of legal concerns, not least that it will make it significantly easier for Donald Trump's administration to spy on and target his growing list of enemies and other Americans."
"We are concerned that Palantir's software could be used to enable domestic operations that violate Americans' rights," the letter states. "Donald Trump has personally threatened to arrest the governor of California, federalized National Guard troops without the consent of the governor for immigration raids, deployed active-duty Marines to Los Angeles against the wishes of local and state officials, condoned violence against peaceful protestors, called the independent press 'the enemy of the people,' and abused the power of the federal government in unprecedented ways to punish people and institutions he dislikes."
"Palantir's troubling assistance to the Trump administration is not limited to its work for the IRS," the letter notes, highlighting the company's role in Immigration and Customs Enforcement's mass deportation efforts and deadly U.S. and allied military operations.
The letter does not mention Palantir's involvement in Project Nimbus, a cloud computing collaboration between Israel's military and tech titans Amazon and Google targeted by the No Tech for Apartheid movement over alleged human rights violations. But the lawmakers did note that companies including IBM, Cisco, Honeywell, and others have been complicit in human rights crimes in countries including Nazi Germany, apartheid South Africa, China, Saudi Arabia, and Egypt.
The lawmakers asked Karp to provide a list of all contracts awarded to Palantir, their dollar amount, the federal agencies involved, whether the company has any "red line" regarding human rights violations, and other information.
In addition to Wyden and Ocasio-Cortez, the letter is signed by Sens. Elizabeth Warren (D-Mass.), Jeff Merkley (D-Ore.), and Ed Markey (D-Mass.), and Reps. Summer Lee (D-Pa.), Jim McGovern (D-Mass.), Sara Jacobs (D-Calif.), Rashida Tlaib (D-Mich.), and Paul Tonko (D-N.Y.).
"This should be obvious but apparently we have to say it: Keep AI out of children's toys," said one advocacy group.
The watchdog group Public Citizen on Tuesday denounced a recently unveiled "strategic collaboration" between the toy company Mattel and the artificial intelligence firm OpenAI, maker of ChatGPT, alleging that the partnership is "reckless and dangerous."
Last week, the two companies said that they have entered into an agreement to "support AI-powered products and experiences based on Mattel's brands."
"By using OpenAI's technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety," according to the statement. They expect to announce their first shared product later this year.
The statement added that "Mattel will incorporate OpenAI's advanced AI tools like ChatGPT Enterprise into its business operations to enhance product development and creative ideation, drive innovation, and deepen engagement with its audience."
Mattel's brands include several household names, such as Barbie, Hot Wheels, and Polly Pocket.
"This should be obvious but apparently we have to say it: Keep AI out of children's toys. Our kids should not be used as a social experiment. This partnership is reckless and dangerous. Mattel should announce immediately that it will NOT sell toys that use AI," wrote Public Citizen on X on Tuesday.
In a related but separate statement, Robert Weissman, co-president of Public Citizen, wrote on Tuesday that "endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children."
"It may undermine social development, interfere with children's ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm," he added.
Public Citizen's statement is not the only recent pushback against AI products aimed at children.
Last month, The New York Times reported that Google is rolling out its Gemini artificial intelligence chatbot for kids who have parent-managed Google accounts and are under 13. In response, a coalition led by Fairplay, a children's media and marketing industry watchdog, and the Electronic Privacy Information Center (EPIC) launched a campaign to stop the rollout.
"This decision poses serious privacy and online safety risks to young children and likely violates the Children's Online Privacy Protection Act (COPPA)," according to a statement from Fairplay and EPIC.
Citing the "substantial harm that AI chatbots like Gemini pose to children, and the absence of evidence that these products are safe for kids," the coalition sent a letter to Google CEO Sundar Pichai requesting that the company suspend the rollout, and a second letter to the Federal Trade Commission requesting an investigation into whether Google has violated COPPA by rolling out Gemini to children under the age of 13.