Human rights advocates, tech experts, and critics of the United States' vast drone warfare program are outraged over Google's secret agreement with the Pentagon—revealed in a pair of reports by Gizmodo and The Intercept—to develop artificial intelligence, or AI, that quickly analyzes drone footage.
Google has quietly been using its machine learning chops to make US military drones better killing machines. Heartwarming to know that using Google products also means helping the DoD to better killer drones. https://t.co/LmLMWyIXC4
— DHH (@dhh) March 6, 2018
Maybe I won’t buy that Google Assistant after all. https://t.co/Q0PUzlAfW4
— Matt Taibbi (@mtaibbi) March 6, 2018
Some critics pointed to Google's old motto, "Don't Be Evil," and the replacement, "Do the Right Thing," introduced in 2015 by Google's parent company, Alphabet.
Google building AI for drone assassination programs... Don't be evil? https://t.co/jEBeMe6Cig
— Trevor Paglen (@trevorpaglen) March 6, 2018
I hadn't noticed the "...but it's okay to help with drone strikes" footnote to the "don't be evil" line in Google's code of conduct. https://t.co/9jQu3F1JMa
— Peter Maass (@maassp) March 6, 2018
The reports, published Tuesday, outline details of the partnership between Google and the U.S. Department of Defense's Project Maven that were recently disclosed on a company mailing list. The internal discussion reportedly angered some Google employees, who Gizmodo reports "were outraged that the company would offer resources to the military for surveillance technology involved in drone operations" and pointed out that "the project raised important ethical questions about the development and use of machine learning."
The DOD's Project Maven—also known as the Algorithmic Warfare Cross-Functional Team (AWCFT)—launched last April and "was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts" who have been unable to keep up with the volume of footage collected by U.S. drones.
A spokesperson for Google said the company provides the Pentagon with "open source TensorFlow APIs that can assist in object recognition on unclassified data," and insisted "the technology flags images for human review, and is for non-offensive uses only."
However, The Intercept noted—pointing to earlier reports about the project—that the purpose of the AI tech is "to help drone analysts interpret the vast image data vacuumed up from the military's fleet of 1,100 drones to better target bombing strikes against the Islamic State."
While Google's spokesperson added that the company is "actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies," The Intercept also noted that "the military contract with Google is routed through a Northern Virginia technology staffing company called ECS Federal, obscuring the relationship from the public"—at least until it was revealed in Tuesday's reports.
Both reports also pointed out that Eric Schmidt, who recently stepped down as chairman of Alphabet, heads the Defense Innovation Board, a federal advisory committee established in 2016 "to encourage the military adoption of breakthrough technology," and which has developed recommendations for how the Department of Defense can better utilize tools from Silicon Valley to wage war abroad.
Gizmodo, citing meeting minutes, noted that "some members of the Board's teams are part of the executive steering group that is able to provide rapid input" on Project Maven, whose Pentagon director has expressed hope that the project will be "that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department."