Why Project Maven Is the Litmus Test For Google's New Principles

"The prospect of the world's largest technology companies training machines in the service of the world's most powerful military is unnerving to say the least."(Photo: IDF via Getty Images)


As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent.

Last week Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company's announcement that it will not renew its existing contract for Project Maven, the US Department of Defense's AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government's drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google's new principles mean in practice.

"Google's cooperation in the US drone programme, in any capacity, is extremely troubling."

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

If Google concludes that Project Maven contravenes its principles because of human rights concerns, then the company has no excuse not to cancel the contract immediately. If Google believes the project is in line with its principles then there must be fundamental loopholes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations - for us and for future generations. If they don't, some nightmare scenarios could unfold.

"The prospect of the world's largest technology companies training machines in the service of the world's most powerful military is unnerving to say the least."

Google's involvement in Project Maven sparked fears that fully autonomous weapon systems, able to select, engage and fire at targets without any human intervention, could be a step closer. The prospect of the world's largest technology companies training machines in the service of the world's most powerful military is unnerving to say the least. Do we really want to hand more power over to machines whose workings are opaque and which cannot be held accountable for mistakes or human rights violations?

Warfare has already changed dramatically in recent years - a couple of decades ago the idea of remote-controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots that are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

"Machine learning is already being used by governments in a wide range of contexts that directly impact people's lives, including policing, welfare systems, criminal justice and healthcare."

The trend towards more autonomy without adequate human oversight is alarming for many reasons. Compliance with the laws of war requires human judgement - the ability to analyse the intentions behind actions and make complex decisions about the proportionality or necessity of an attack. Machines and algorithms cannot recreate these human skills, nor can they negotiate, show empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment and use of fully autonomous weapon systems. Public support from Google for this call would go a long way towards rebuilding staff and public trust.

Of course, it's not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people's lives, including policing, welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

Responding to these risks, last month Amnesty International and Access Now launched the ground-breaking Toronto Declaration, a set of principles setting out how both states and companies can avoid discrimination and respect equality in the use of machine learning systems. We're now calling on all tech companies to endorse the Toronto Declaration, and to affirm their commitment to respecting human rights when developing AI.

"When the stakes are this high, 2019 is too long to wait."

Google's new principles go some way towards addressing these issues, but many of the details remain murky. For example, Google says it will "seek to avoid unjust impacts" regarding bias, but also asserts that the nature of this "differs across cultures and societies." What does this mean? And when Google commits to avoid developing technologies "whose purpose contravenes widely accepted principles of international law and human rights", who decides what is "widely accepted"? What does all this mean for Project Maven?

Whether Google will live up to its human rights responsibilities in practice remains to be seen. In the meantime, the company should listen to its employees' concerns - and meet its own new commitments - by cancelling the existing Project Maven contract immediately. When the stakes are this high, 2019 is too long to wait.
