Controlling killer robots: how do we do it?


All the arguments for the development of lethal autonomous weapons are also arguments against them.

Jess Whyte

Autonomous weapons are cheap and fast, but there is rising concern about whether they can make decisions that value human life.

Recently, soldiers in Sudan were ordered to fire on thousands of protestors outside military headquarters in central Khartoum as riot police and secret service personnel unleashed tear gas. Instead of shooting at the crowd, the soldiers fired their weapons into the air while demonstrators chanted: “The army is protecting us” and “One people, one army”.

But what if, instead of encountering regular Sudanese soldiers, these protestors faced killer robots? 

This question was posed by Associate Professor Jessica Whyte in the Faculty of Arts & Social Sciences at a UNSW Grand Challenge on Living with 21st Century Technology event at UNSW Sydney.

Associate Professor Whyte, Scientia Fellow in the School of Humanities & Languages (Philosophy) and UNSW Law, joined Scientia Professor Toby Walsh in Computer Science and Engineering and international security and disarmament specialist Matilda Byne on the event’s panel. Professor Lyria Bennett Moses, Director of the Allens Hub for Technology, Law and Innovation at UNSW Law, facilitated the discussion about the social implications around the widespread adoption of lethal autonomous weapons.

Lethal autonomous weapons – or killer robots – are intelligent machines that can detect, select and kill targets without human control.

Many countries are racing to find ways to fight faster, more efficiently and to develop an edge on their adversaries. But can these weapons be regulated, are there moral justifications for their use, and who would be held accountable for a death at the hands of a killer robot?

The members of the panel were unanimous on the point that lethal autonomous weapons are incapable of fulfilling the requirements of international humanitarian law. Autonomous weaponry violates the Martens Clause – a provision of international humanitarian law that requires emerging technologies to be judged by the “principles of humanity and from the dictates of public conscience”. 

The panellists agreed that the function of selecting and engaging a human target must retain an element of human control; otherwise it dishonours human life and dignity.

Many pro-development experts argue that lethal autonomous weapons would obey humanitarian law far more consistently than humans, since they would not be clouded by emotional responses or prone to error.

Associate Professor Whyte is a political theorist who uses philosophy, history and political economy to analyse sovereignty, human rights, humanitarianism and militarism. She suggested that such pro-development arguments rest on the assumption that laws are fixed and not open to change. She also put forward the idea that we actually need human emotion to help us make moral decisions. 

The associate professor said that the military’s strategic decisions are founded on a series of situational judgements such as: does a strike need to be carried out to achieve the overall aim of the battle? And, if so, how many lives are at risk?

International humanitarian law requires that “the harm to civilians that results from an attack must not be excessive in relation to the anticipated military advantage of the attack”, she said.

She argued that ethical principles such as these can’t simply be slotted into a machine’s algorithm.

“This ‘proportionality’ standard requires human judgement and an understanding of the value of human lives.

“It isn’t an objective rule that can simply be programmed into an autonomous weapon system,” Associate Professor Whyte said.

Robots offer numerous potential operational benefits to the military. They can reduce long-term medical expenditure, such as the cost that war-related injuries impose on the healthcare system. They can also stand in for humans in extremely hazardous scenarios, such as exposure to nuclear material and the clearing of minefields.

Supporters say much of the human suffering of war, both psychological and physical, would be alleviated by deploying these machines on the battlefield.

However, Associate Professor Whyte said these arguments should also make us very worried.

“By making war cheaper, by reducing the number of soldiers, it makes it far easier to wage a war. 

“States that don’t risk their own soldiers in warfare have fewer barriers to launching wars,” she said.

Professor Walsh expanded on this, saying that with the rise of 3D printing it is becoming easier to build these types of weapons without a full evaluation of their consequences.

"Killer robots will lower the barrier to war. If one side can launch an attack without fear of bodies coming home, then it is much easier to slip into battle," he said. 

The panel expressed concern about accountability for harm caused by these robots.

Associate Professor Whyte said there is no evidence that there could be accountability once lethal weapons become fully autonomous.

“If a machine is programmed to select its own targets, there are real questions about who will be responsible if it kills civilians or violates international humanitarian law,” she said. 

A robot cannot stand in for a human in legal proceedings, and a variety of legal obstacles mean that operators, military commanders, programmers, coders and manufacturers could all escape liability.

“Autonomous weapons systems will make targeting decisions more quickly than humans are able to follow. Accountability in such circumstances will be particularly difficult.

“All the arguments for the development of lethal autonomous weapons are also arguments against them,” said Associate Professor Whyte.

“It is argued they will be faster, more efficient and will not have any [human] barriers to killing. Yet, all of this will also make war even more deadly, and potentially create further risks for civilians.”

And much like what we witnessed in Sudan, Associate Professor Whyte noted, “there is also a real risk that authoritarian regimes will use autonomous lethal weapons to repress their own populations – which is something human soldiers are often unwilling to do”.

Story originally published on UNSW Newsroom.