AI and Robotics Companies Ask UN to Ban 'Killer Robots'

Representatives of prominent AI and robotics companies, including Google, Tesla, and Universal Robots, have called on the UN to ban killer robots. But the issue is turning out to be as complex as the robots themselves.

Chris Wiltz

September 11, 2017


Tesla CEO Elon Musk, Google DeepMind co-founder Mustafa Suleyman, and Universal Robots founder Esben Østergaard are among the 116 robotics and artificial intelligence founders and experts who signed a recent open letter asking the UN to ban lethal autonomous weapons systems (LAWS), more colloquially known as “killer robots.”

The letter reads in part:

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

The letter, which marks the first time representatives of the robotics and AI industry have voiced a collective stance on LAWS, was released at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, Australia. It came in the wake of the cancellation of a UN Group of Governmental Experts (GGE) meeting on LAWS that was to take place August 21-25; the meeting was canceled because some states had failed to pay their financial contributions to the UN. The letter calls on the High Contracting Parties to redouble their efforts at the meeting, which has been rescheduled for November 13-17.

The Modular Advanced Armed Robotic System (MAARS) by QinetiQ is just one of the many autonomous weapons being actively developed or deployed on the battlefield. (Image source: QinetiQ)

The letter was organized by Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia, who also headed up a 2015 open letter calling for a ban on autonomous weapons. To date, that letter has been signed by more than 20,000 people, including AI and robotics researchers as well as Stephen Hawking, Steve Wozniak, and Noam Chomsky.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” Walsh said in a statement regarding the 2017 letter. “It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialize war. We need to make decisions today choosing which of these futures we want.”

The UN first convened a meeting on LAWS in 2014, but since then there has been no progress toward any actual ban or regulation targeted specifically at autonomous weapons. The issue was first brought to international attention in 2012 with the release of a report from Human Rights Watch and Harvard Law School’s International Human Rights Clinic titled Losing Humanity: The Case Against Killer Robots. The report recommends three actions for the international community to take against LAWS:

  • "Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument."

  • "Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons."

  • "Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases."

However, experts fall on both sides of the debate, and many believe the issues around autonomous weapons are not so cut and dried. In a 2015 letter published in Communications of the ACM, Ronald Arkin, a robotics researcher and roboethicist at the Georgia Institute of Technology, pointed to the humanitarian benefits of deploying autonomous robots on the battlefield. “I am not Pro Lethal Autonomous Weapon Systems (LAWS), nor for lethal weapons of any sort...” Arkin wrote. “But if humanity persists in entering into warfare, which is an unfortunate underlying assumption, we must protect the innocent noncombatants in the battlespace far better than we currently do... I have the utmost respect for our young men and women in the battlespace, but they are placed into situations where no human has ever been designed to function.”

In a recent article in Wired, other experts agreed that an outright ban on LAWS is impractical at this point, particularly at the international level. Roger Cabiness, a Pentagon spokesperson, told Wired that autonomous weapons offer benefits such as increased precision that can actually help soldiers meet legal and ethical obligations. In the same article, Rebecca Crootof, a researcher at Yale Law School, encouraged regulation over an outright ban. “International laws such as the Geneva Convention that restrict the activities of human soldiers could be adapted to govern what robot soldiers can do on the battlefield, for example. Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, for example by killing civilians,” she told Wired.

A video from The Campaign to Stop Killer Robots explains the background of the UN's involvement with LAWS.

While autonomous weapons have already entered modern warfare, with weapons such as the MQ-9 Reaper drone already deployed and others, like the Modular Advanced Armed Robotic System (MAARS) from QinetiQ, in active development, the discussion around autonomous weapons has gone on since before such things were even feasible. The 1920 play Rossum's Universal Robots (which first gave us the term “robot”) posited the idea of a rebellion by artificial workers. And in 1942, noted science fiction author Isaac Asimov introduced the Three Laws of Robotics in his short story “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In the ensuing decades, killer robots and malicious AI have been the inspiration for a wide range of movies and TV shows, from Terminator to Battlestar Galactica to Westworld. The prominence of killer robots in popular culture, alongside the rapid pace of innovation in AI and robotics, will likely only continue to spark concern (if not outright fear) both within the industry and among the general public.

“The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” Ryan Gariepy, founder & CTO of Clearpath Robotics, and the first person to sign the 2017 letter, said in a statement released by his company. “We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

Where do you fall on the killer robot debate? Share your thoughts with us in the comments!


Chris Wiltz is a senior editor at Design News covering emerging technologies including VR/AR, AI, and robotics.
