Killer robots: The question of how to control lethal autonomous weapons

by Peter Griffin / 20 July, 2018

A sentry robot freezes a hypothetical intruder by pointing its machine gun during a test in South Korea. The robots are placed along the border with North Korea. Photo / Getty Images


The computer scientist who has become a leading voice on the threat posed by killer robots describes himself as an “accidental activist”.

But Professor Toby Walsh, a leading artificial intelligence researcher who first sounded the alarm in 2015 over the use of AI to develop lethal autonomous weapons, has helped create a global movement among his peers.

This week the University of New South Wales academic joined 2,400 AI scientists and engineers from around the world and 150 companies in signing a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”.

Among those taking the pledge were numerous engineers from Google and its AI division DeepMind, AI pioneer Stuart Russell and the mercurial chief executive of Tesla and SpaceX, Elon Musk, who has warned against the perils of runaway artificial intelligence.

Sitting in the cafeteria at UNSW’s Department of Computer Science and Engineering, Walsh tells NOTED that he doesn’t buy into Musk’s apocalyptic vision of artificial intelligence dominating humanity.

“Musk is a fantastic engineer but he is not an AI researcher,” says Walsh, a Briton who worked in research positions all over the world before settling in Sydney.

“There are significant challenges. We still are only building idiot savants that do one narrow task well. We don't have any idea how to build general intelligence.”

The robot arms race 

But when it comes to AI’s use in lethal warfare, he is on the same page as the inventor: the world is on the brink of an algorithm- and robot-driven arms race unless our leaders unite to prevent it.

In April, Walsh was in Geneva, trying to convince United Nations diplomats from 125 countries to consider a moratorium on the use of lethal autonomous weapons systems (LAWS), when news broke of the most recent chemical attacks in Syria.

“In a macabre way, it was very gratifying what was happening around chemical weapons in Syria, the way the world came together. Sanctions were imposed, diplomats were expelled,” he says.

Alongside the Campaign to Stop Killer Robots, which is run from Washington D.C. by former Wellingtonian and human rights campaigner Mary Wareham, Walsh put it to the UN that use of lethal autonomous weapons systems should receive similar treatment.

“Autonomous weapons will be terribly destabilising. Previously if you wanted to have a large army you had to be a superpower. Now you will just need some cheap robots,” says Walsh.

“The current balance of power could change dramatically.”

Professor Toby Walsh.

The nations gathered in Geneva are party to the Convention on Certain Conventional Weapons, to which New Zealand was an original signatory in 1981. It prohibits or restricts the use of a range of weapons, such as booby traps, landmines, incendiary weapons and “blinding laser weapons”, that can wreak havoc on the battlefield as well as among the civilian population.

Walsh wants the Convention updated to ban the use of lethal autonomous weapons systems, but the world’s main military powers, among them the U.S., Russia, Israel, France and the United Kingdom, believe a ban could stifle the development of weapons capable of saving lives and reducing collateral damage. Walsh disagrees.

“We didn't stop chemistry,” he says, returning to the chemical weapons parallel.

“But we did decide it was morally unacceptable to use chemical weapons.”

China breaks from the pack

China surprised delegates by announcing its desire to add a new protocol to the Convention, banning the use of fully autonomous lethal weapon systems. But the Chinese delegation stressed that any ban should apply to use only, leaving nations free to research and develop autonomous weapons.

Autonomous weapons are being developed by the armies of the world’s superpowers in partnership with industry. While truly autonomous weapons are some way off, automation is already removing much of the human element from warfare.

Drones have already been automated to the extent that they can take off and land, conduct surveillance missions and, at the behest of a human ‘pilot’ holding a controller often thousands of kilometres away, fire missiles. 

“The UK’s Ministry of Defence said they could remove the human and replace it with a computer today,” says Walsh of the Taranis, a drone the ministry has spent hundreds of millions of dollars developing.

“That's a technically small step.”

South Korea’s sentry robots, developed by a subsidiary of Samsung, can be armed with a machine gun and grenade launcher and use motion and heat sensors to detect targets up to three kilometres away. They have been installed along the border with North Korea.

“There's still relatively little autonomy in the battlefield. But it is less than ten years away,” Walsh estimates.

New Zealand researchers and engineers were largely absent from the list of signatories to the pledge, which Walsh co-organised and which was presented this week at the 2018 International Joint Conference on Artificial Intelligence in Stockholm.

But that likely reflects the fact that the ethical issues thrown up by AI’s rapid advances, such as the risk of algorithmic bias in AI systems making decisions across government, have yet to garner much discussion locally.

New Zealand’s representative on disarmament at the United Nations, First Secretary Katy Donnelly, told the Geneva gathering that New Zealand had an “open mind” on the options for addressing the challenges posed by lethal autonomous weapons systems.

The human element

Settling on a definition of what exactly constitutes autonomous weapons was challenging, but New Zealand expected ‘human control’ to feature centrally in that definition.

“The ability to exercise human control is critical to whether a weapon would be able to comply with international humanitarian law as well as other requirements, such as rules of engagement,” said Donnelly.

“The absence of an agreed definition does not mean we are not able to move forward,” she added.

Part of the task for AI and robotics researchers is assisting in setting that definition.

Sending one of Boston Dynamics’ sinister-looking robot dogs onto the battlefield, armed to the teeth with lethal weapons and using computer vision to hunt down and kill targets, would change the face of conventional warfare. Lethal it may be, but it is not autonomous unless the mechanical dog itself decides when to pull the trigger, based on pre-programmed parameters.

The New Zealand AI Forum has urged our own Government to consider taking a more prominent international leadership role in the push for a moratorium on lethal autonomous weapons systems. 

Some countries, such as Belgium, have acted unilaterally, declaring their own bans on the weapons, while 26 UN countries have voiced their support for some kind of UN-sanctioned ban.

But the issue is nuanced: some argue that autonomous weapons, if properly designed, could save lives on the battlefield and make warfare less dangerous for civilians. AI is expected to save thousands of lives when driverless cars become common on our roads, eliminating human error such as distracted or drunk driving. Why wouldn’t the same apply to autonomous weapons?

“Those arguments don't stand up against the weight of arguments on the other side,” argues Walsh.

The US military, often co-opting industry, has been one of the biggest funders of AI research, for weapons as well as for training and logistics. Google last month said it would not renew a contract with the Pentagon to support Project Maven, a military R&D effort using computer vision to make sense of imagery taken in combat zones, including from drones.

Google was offering its expertise in artificial intelligence, big data and deep learning, but thousands of its employees signed a petition protesting against the company’s involvement, and a handful of engineers quit.

Soon after, Google released its AI Principles, which include an undertaking not to develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.

But Google chief executive Sundar Pichai made it clear that collaboration with the military on other projects was very much still on the cards.

“We will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” he wrote in a blog explaining the AI Principles.

Stupid AI

“It's stupid AI on the battlefield I'm really worried about,” says Walsh, as he takes NOTED through his lab, which features a mini football pitch with a team of pint-sized robotic players lined up along its side. His main research focus is building computer programs that automate tasks we consider intelligent.

He points to leaked documents about the US drone programme that surfaced in 2015 and showed that in many cases, the US military did not know who it had killed in assassination strikes in Yemen and Somalia, but listed the victims as “enemies killed in action”.

When it comes to drones, taking the human out of the equation could actually make it harder to defend yourself against the enemy’s drones.

“The US military defends itself against drones with radio jamming,” explains Walsh.

“The weak link is the radio link, that's the way to bring down a drone and is how they've been brought down in Afghanistan. It's a real military advantage to remove the radio link. Now you can't be stopped. But you've got to find another way to defend yourself.”

He worries that AI-powered weapons could find their way onto the black market and into the hands of terrorist groups such as Isis, which has employed drones to drop grenade-sized munitions on U.S. forces.

The UN will hold its next meeting on lethal autonomous weapons systems next month, and Walsh and his colleagues hope the solidarity shown in the research and tech sectors will spur diplomats to commit to an international agreement.

“We cannot hand over the decision as to who lives and who dies to machines,” he said this week as the pledge was released.

“They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”


THE FULL TEXT OF THE PLEDGE

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

 
