Why lethal autonomous weapons are causing international alarm


The threat of autonomous weapons that could launch attacks without human input – including swarms of killer drones – is raising alarm among experts around the world.

We are yet to learn – either from flesh-and-blood or artificial intelligence sources – how far AI will take humanity. Driverless cars and the replacement of humans in certain tasks are already part of our existence, but it is in the design and making of weapons programmed with artificial intelligence that some see a threat to the future of mankind.

The devices are known as lethal autonomous weapons systems, sometimes shortened to Laws. There is debate about how to define autonomous weapons but, to all intents and purposes, they are devices that can identify, track and attack a target without human intervention. Such a weapon may take the form of a drone, a gun, or a robot that may or may not have a humanoid form. The key element of an autonomous weapon is that once activated – switched on, if you like – it decides for itself whether to attack a target, which may or may not be a human.


A United Nations meeting in Geneva in August considered whether there should be a ban on the development of such weapons. The UN Convention on Certain Conventional Weapons had considered the issue several times already, most recently last November and in April. If a ban is ever agreed, it will take the form of a protocol. Existing protocols cover mines, booby traps, incendiary devices and a number of other weapons. Nations are free to choose whether they sign or observe the protocols, but their existence encourages countries to be cautious about the weapons they use. The purpose of the convention, in its own words, is “to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately”.

Distinguishing between an automatic weapon and an autonomous one involves judgments on a sliding scale. If you step on a landmine, it will explode and harm you without, of course, having been programmed with any artificial intelligence. But a landmine can only react, not choose to act, and is thus not considered an autonomous weapon. A drone that identifies, say, a truck, tracks it and sends footage back to a human, who may or may not send a missile to destroy it, is not an autonomous weapon. If the drone has the capacity to identify, track and attack the truck without reference to a human, it would be fully autonomous. No human would be in the loop.

Those who deal with weapons in various stages of automation distinguish between a human being “in the loop”, “on the loop” or “off the loop”. In the loop means a human decides whether an attack will occur; on the loop means that if a weapon selects something or someone as a target, a human still has the chance to switch it off; off the loop means the weapon decides to conduct the attack and no human oversees it.

A 2006-vintage sentry robot built by Samsung. Photo/Getty Images

Do fully autonomous weapons exist? Some weapons come very close and there is little doubt that others are being developed, though they may still require refinement. Among those deployed is the SGR-A1 built by Samsung, a sentry robot used along the Korean Demilitarised Zone that detects intruders, gives a verbal warning and alerts a soldier, who can use the robot’s machine gun. In fully autonomous mode, the robot can fire the gun itself. Israel has a robot that hunts for radar signals and, when it detects them, crashes into whatever is sending them, destroying itself as well as the signal source. The Lockheed Martin AGM-158C, a long-range anti-ship missile, was tested in May. According to the US company, it “flew towards a moving maritime target using inputs from on-board sensors. The missiles then positively identified the intended target and impacted successfully.”

Some weapons require instant responses, leaving too little time for a human to make an assessment. These include anti-missile devices and some defences against attacking aircraft. The Terminal High Altitude Area Defence (Thaad) systems and their variants are deployed by a number of countries, including the US, Russia, China, India, France and Israel. Thaad was recently installed in South Korea, much to China’s annoyance, because in Beijing’s view it upset the regional balance of power. Thaad is designed to detect and intercept incoming missiles that carry nuclear weapons.

Other weapons under development may be refined through miniaturisation. One feared development is a tiny drone equipped with facial recognition software and a small gun. There are varying predictions about how far away autonomous weapons are, but authoritative estimates suggest they will be developed within years, not decades. This month, the Pentagon announced that it plans to put US$2 billion into research on adding artificial intelligence to weaponry. Field commanders, who are reluctant to surrender human control over weaponry, want computers to be able to explain to them why a particular target has been chosen.

Elon Musk. Photo/Getty Images

Gunpowder, nukes, now this

Some observers of autonomous weapons development believe that, if deployed, the devices would mark the third revolution in warfare, after gunpowder and nuclear arms.

The August meeting of the UN Convention on Certain Conventional Weapons was attended by states and also by a number of non-government organisations with long histories of humanitarian work, including the International Committee of the Red Cross and Human Rights Watch. Other groups have been formed specifically to oppose the development of autonomous weapons. The Campaign to Stop Killer Robots, co-ordinated by New Zealander Mary Wareham, is a coalition of non-government organisations. Some formidable thinkers and world leaders in technology – including Elon Musk, Stuart Russell, professor of computer science at the University of California, Berkeley, and the late Stephen Hawking – have warned against the development of autonomous weapons. Musk was an early supporter of the Future of Life Institute, an organisation mostly of scientists that works to ensure that the most powerful technologies benefit mankind.

Many artificial-intelligence specialists are wary of, or have protested against, the development of autonomous weapons, arguing that they do not want to see their work turned into weaponry. Google employees objected to the company’s links with arms manufacturers. An international boycott of a South Korean university was lifted only after the university said it had dropped its defence-industry links.

Autonomous weapons opponent Mary Wareham.

Among the reasons for opposing the development of autonomous weapons, the most profound ethical objection is that a machine would be deciding whether someone should live or die.

A legal objection is that it would be hard to ensure that the actions of a robot or other machine conformed to international humanitarian law, the fundamental principles of which say that civilians should not be targeted in a war and that any offensive action should be proportionate. Another is that, at present, a soldier who fires a weapon bears some legal responsibility; whether responsibility could be sheeted home if a robot was acting independently is an open question.

A further objection is that although a robot might be a more acute observer, store far more information and react more quickly than a human, it would lack human common sense. One example used by those opposed to Laws is to imagine a child rushing at a soldier, whether robotic or human, pointing a toy gun, with the child’s mother rushing after the child. The argument advanced is that a human soldier would be likely to grasp what was happening but a robot might not. Whether a robot could be programmed or taught to observe the laws of armed conflict is debatable. Against this it can be argued that tired and stressed human soldiers will sometimes make mistakes in combat.

Stephen Hawking. Photo/Getty Images

The bots are revolting

Robots might also go on fighting once humans have declared a truce, unnecessarily prolonging a conflict. The easy-seeming solution of switching off a robot may not be simple in practice because the switching-off process might be exploited by an enemy.

Another reason for concern about the development of autonomous weapons is that their commercial production would mean that they would eventually reach the black market and become available to terrorists, non-state actors and authoritarian regimes. Imagine what could be done with an adaptation of facial recognition technology and a tiny device that fired a bullet.

Other reasons for worrying about their development are that autonomous weapons might be able to be hacked and that technology can fail badly. Nuclear-strike false alarms have demonstrated this on several occasions.

Yet there are serious arguments advanced by those advocating the development and deployment of autonomous weapons, including:

  • They can add strength to a defence force. The term used is force multiplier.
  • They may be valuable for some of the worst situations soldiers face; for instance, clearing mines, entering a house that is likely to be booby-trapped or dismantling explosives.
  • Ethically, there is something to be said for destroying a machine rather than killing a person.
  • Strategically, unless a country studies autonomous weapons, it would not know how to combat them.
  • Using robots may be cheaper than employing human soldiers.

Autonomous weapons backer Vladimir Putin. Photo/Getty Images

I have so far used the terms machine and robot interchangeably. Much commercial effort has gone into making robots seem human and lifelike. However cute or even endearing a robot might seem – far from the menacing machines imagined by some film-makers – it is still a machine or a computer programmed to respond to certain questions or circumstances. It might have the capacity to learn; it might be able to respond far faster than a human; and it will almost certainly be able to sift through information more rapidly than a human. But it is, in the end, a machine, without human consciousness and values. Some thought has been given to building ethics into armed robots, but the design challenges are formidable.

Many countries see the development of artificial intelligence as the new frontier and central to their own economic development. Russian President Vladimir Putin has voiced that view in opposing any ban on autonomous weapons.

New Zealand has yet to formulate its full response to the development of autonomous weapons, though it attended the August UN meeting. The guidelines New Zealand observes are that any weapons in its armoury must conform to the requirements of international humanitarian law, so it would not deploy any weapon with a human off the loop. The Pentagon adheres to the same code.

One of the reasons little progress has been made in considering a ban on autonomous weapons is that it is not known exactly which weapons any ban would cover; some have not yet been developed. Nevertheless, such weapons are on the way and, whatever else artificial intelligence has in store for us, the issues they raise go to the heart of being human and will not go away if we ignore them. The race between control and deployment has already started.

Stuart McMillan is a senior fellow in the Centre for Strategic Studies at Victoria University of Wellington.

This article was first published in the September 29, 2018 issue of the New Zealand Listener.