"This should be obvious but apparently we have to say it: Keep AI out of children's toys," said one advocacy group.
The watchdog group Public Citizen on Tuesday denounced a recently unveiled "strategic collaboration" between the toy company Mattel and the artificial intelligence firm OpenAI, maker of ChatGPT, alleging that the partnership is "reckless and dangerous."
Last week, the two companies said that they have entered into an agreement to "support AI-powered products and experiences based on Mattel's brands."
"By using OpenAI's technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety," according to the statement. They expect to announce their first shared product later this year.
Also, "Mattel will incorporate OpenAI's advanced AI tools like ChatGPT Enterprise into its business operations to enhance product development and creative ideation, drive innovation, and deepen engagement with its audience," according to the statement.
Mattel's brands include several household names, such as Barbie, Hot Wheels, and Polly Pocket.
"This should be obvious but apparently we have to say it: Keep AI out of children's toys. Our kids should not be used as a social experiment. This partnership is reckless and dangerous. Mattel should announce immediately that it will NOT sell toys that use AI," wrote Public Citizen on X on Tuesday.
In a related but separate statement, Robert Weissman, co-president of Public Citizen, wrote on Tuesday that "endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children."
"It may undermine social development, interfere with children's ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm," he added.
The statement from Public Citizen is not the only instance where AI products for children have received pushback recently.
Last month, The New York Times reported that Google is rolling out its Gemini artificial intelligence chatbot for kids who have parent-managed Google accounts and are under 13. In response, a coalition led by Fairplay, a children's media and marketing industry watchdog, and the Electronic Privacy Information Center (EPIC) launched a campaign to stop the rollout.
"This decision poses serious privacy and online safety risks to young children and likely violates the Children's Online Privacy Protection Act (COPPA)," according to a statement from Fairplay and EPIC.
Citing the "substantial harm that AI chatbots like Gemini pose to children, and the absence of evidence that these products are safe for kids," the coalition sent a letter to Google CEO Sundar Pichai requesting the company suspend the rollout, and a second letter to the Federal Trade Commission requesting the FTC investigate whether Google has violated COPPA in rolling out Gemini to children under the age of 13.
"Many nations are looking to Israel and its use of AI in Gaza with admiration and jealousy," said one expert. "Expect to see a form of Google, Microsoft, and Amazon-backed AI in other war zones soon."
Several recent journalistic investigations—including one published Tuesday by The Associated Press—have deepened the understanding of how Israeli forces are using artificial intelligence and cloud computing systems sold by U.S. tech titans for the mass surveillance and killing of Palestinians in Gaza.
The AP's Michael Biesecker, Sam Mednick, and Garance Burke found that Israel's use of Microsoft and OpenAI technology "skyrocketed" following Hamas' October 7, 2023, attack on Israel.
"This is the first confirmation we have gotten that commercial AI models are directly being used in warfare," Heidy Khlaaf, chief artificial intelligence scientist at the AI Now Institute and a former senior safety engineer at OpenAI, which makes ChatGPT, told the AP. "The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward."
As Biesecker, Mednick, and Burke noted:
Israel's goal after the attack that killed about 1,200 people and took over 250 hostages was to eradicate Hamas, and its military has called AI a "game changer" in yielding targets more swiftly. Since the war started, more than 50,000 people have died in Gaza and Lebanon and nearly 70% of the buildings in Gaza have been devastated, according to health ministries in Gaza and Lebanon.
According to the AP report, Israel buys advanced AI models from OpenAI and Microsoft's Azure cloud platform. While OpenAI said it has no partnership with the Israel Defense Forces (IDF), in early 2024 the company quietly removed language from its usage policy that prohibited military use of its technology.
The AP reporters also found that Google and Amazon provide cloud computing and AI services to the IDF via Project Nimbus, a $1.2 billion contract signed in 2021. Furthermore, the IDF uses Cisco and Dell server farms or data centers. Red Hat, an independent IBM subsidiary, sells cloud computing services to the IDF. Microsoft partner Palantir Technologies also has a "strategic partnership" with Israel's military.
Google told the AP that the company is committed to creating AI "that protects people, promotes global growth, and supports national security."
However, Google recently removed from its Responsible AI principles a commitment to not use AI for the development of technology that could cause "overall harm," including weapons and surveillance.
The AP investigation follows a Washington Post probe published last month detailing how Google has been "directly assisting" the IDF and Israel's Ministry of Defense "despite the company's efforts to publicly distance itself from the country's national security apparatus after employee protests against a cloud computing contract with Israel's government."
Google fired dozens of workers following their participation in "No Tech for Apartheid" protests against the use of the company's products and services by forces accused of genocide in Gaza.
"A Google employee warned in one document that if the company didn't quickly provide more access, the military would turn instead to Google's cloud rival Amazon, which also works with Israel's government under the Nimbus contract," wrote Gerrit De Vynck, author of the Post report.
"As recently as November 2024, by which time a year of Israeli airstrikes had turned much of Gaza to rubble, documents show Israel's military was still tapping Google for its latest AI technology," De Vynck added. "Late that month, an employee requested access to the company's Gemini AI technology for the IDF, which wanted to develop its own AI assistant to process documents and audio, according to the documents."
Previous investigations have detailed how the IDF also uses Habsora, an Israeli AI system that can automatically select airstrike targets at an exponentially faster rate than ever before.
"In the past, there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day," former IDF Chief of Staff Aviv Kochavi told Yuval Abraham of +972 Magazine, a joint Israeli-Palestinian publication, in 2023. Another intelligence source said that Habsora has transformed the IDF into a "mass assassination factory" in which the "emphasis is on quantity and not quality" of kills.
Compounding the crisis, in the heated hours following the October 7 attack, mid-ranking IDF officers were empowered to order attacks not only on senior Hamas commanders but on any fighter in the resistance group, no matter how junior. What's more, the officers were allowed to risk up to 20 civilian lives in each strike, and up to 500 noncombatant lives per day. Days later, that limit was lifted: officers could order as many strikes as they believed were legal, with no limits on civilian harm.
Senior IDF commanders sometimes approved strikes they knew could kill more than 100 civilians if the target was deemed important enough. In one AI-aided airstrike targeting one senior Hamas commander, the IDF dropped multiple U.S.-supplied 2,000-pound bombs, which can level an entire city block, on the Jabalia refugee camp in October 2023. According to the U.K.-based airstrike monitor Airwars, the bombing killed at least 126 people, 68 of them children, and wounded 280 others. Hamas' Qassam Brigades said four Israeli and three international hostages were also killed in the attack.
Then there's the mass surveillance element. Independent journalist Antony Loewenstein recently wrote for Middle East Eye that "corporate behemoths are storing massive amounts of information about every aspect of Palestinian life in Gaza, the occupied West Bank, and elsewhere."
"How this data will be used, in a time of war and mass surveillance, is obvious," Loewenstein continued. "Israel is building a huge database, Chinese-state style, on every Palestinian under occupation: what they do, where they go, who they see, what they like, what they want, what they fear, and what they post online."
"Palestinians are guinea pigs—but this ideology and work doesn't stay in Palestine," he said. "Silicon Valley has taken note, and the new Trump era is heralding an ever-tighter alliance among Big Tech, Israel, and the defense sector. There's money to be made, as AI currently operates in a regulation-free zone globally."
"Think about how many other states, both democratic and dictatorial, would love to have such extensive information about every citizen, making it far easier to target critics, dissidents, and opponents," Loewenstein added. "With the far right on the march globally—from Austria to Sweden, France to Germany, and the U.S. to Britain—Israel's ethno-nationalist model is seen as attractive and worth mimicking."
"Even if it turns out to be structured to avoid antitrust law enforcement, it plainly will concentrate power in a small number of corporate hands," said Public Citizen co-president Robert Weissman.
U.S. President Donald Trump on Tuesday highlighted a new private-sector initiative to invest as much as $500 billion over four years into developing infrastructure to support artificial intelligence, starting with a raft of power-intensive data centers in Texas. The move drew swift criticism from one watchdog group on antitrust and environmental grounds.
The initiative, Stargate, is a joint venture of the tech firms OpenAI, Oracle, and SoftBank. Trump hosted the leaders of those companies—OpenAI CEO Sam Altman, Oracle Chairman Larry Ellison, and SoftBank CEO Masayoshi Son—at the White House to announce the initiative just one day after he signed an executive order rolling back a Biden-era executive order implemented in 2023 that sought to put safeguards on AI.
"I think this will be the most important project of this era," said Altman, according to the Washington Post. "We wouldn't be able to do this without you, Mr. President," he added, though both the Post and the Associated Press noted that the creation of the partnership predated Trump's return to the White House.
Biden's 2023 executive order on AI placed safety obligations on AI developers and called on federal agencies to examine the technology's risks. But Biden, too, was interested in boosting AI infrastructure development. Shortly before leaving office in mid-January, he signed an executive order directing federal agencies to identify government sites that could be leased to private companies for the construction of AI data centers.
Environmental groups and tech advocacy groups have long advocated for greater safeguards on AI, pointing to the technology's potential impact on the climate emergency.
The average query in the AI-powered chatbot ChatGPT requires 10 times the amount of energy a Google search needs, and "in that difference lies a coming sea change in how the U.S., Europe, and the world at large will consume power—and how much that will cost," according to a 2024 analysis published by the investment firm Goldman Sachs. Goldman Sachs analysts believe that AI will represent about 19% of data center power demand by 2028.
AI infrastructure is also water intensive. By 2027, global AI demand is projected to require more water extraction each year than the entire country of Denmark, according to one study.
"The alarming surge in these centers' energy demand is on track to extend the fossil fuel era... [and] it is already increasing costs for some consumers and threatens to bring about a larger affordability crisis, while lining the pockets of Big Tech billionaires," said Karen Orenstein, a director at the environmental group Friends of the Earth, following Biden's January executive order. "For the sake of our planet and its people, we need to rein in Big Tech and regulate AI," she said.
Meanwhile, the joint venture to build out AI infrastructure has also drawn scrutiny from one watchdog group over concerns of corporate concentration.
Public Citizen co-president Robert Weissman said Wednesday that "the new Stargate plan—at minimum—raises massive antitrust concerns. Even if it turns out to be structured to avoid antitrust law enforcement, it plainly will concentrate power in a small number of corporate hands."
"Absent a commitment to bring on new, renewable energy to power an even greater spike in AI power demand, the Stargate build out threatens to worsen the rush to climate catastrophe and to drive up consumer electric bills," he added.
Another observer, Jeffrey Westling of the American Action Forum, remarked on the timing of the announcement.
"Interesting to wait to announce this until the Trump Admin. Assuming its all private investment, maybe they were worried about FTC/DOJ antitrust scrutiny?" he wrote on X Tuesday.