AI and Robotics Companies Ask UN to Ban 'Killer Robots'

Representatives of prominent AI and robotics companies, including Google, Tesla, and Universal Robotics, have called for the UN to ban killer robots. But the issue is turning out to be as complex as the robots themselves.
  • "...and policies to prohibit the development, production, and use of fully autonomous weapons."

  • "Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases."

However, experts fall on both sides of the debate, and many believe the issue around autonomous weapons is not so cut and dried. In a 2015 letter published in Communications of the ACM, Ronald Arkin, a robotics researcher and roboethicist at the Georgia Institute of Technology, pointed to the humanitarian benefits of deploying autonomous robots on the battlefield. “I am not Pro Lethal Autonomous Weapon Systems (LAWS), nor for lethal weapons of any sort...” Arkin wrote. “But if humanity persists in entering into warfare, which is an unfortunate underlying assumption, we must protect the innocent noncombatants in the battlespace far better than we currently do... I have the utmost respect for our young men and women in the battlespace, but they are placed into situations where no human has ever been designed to function.”

In a recent article in Wired, other experts agreed that an outright ban on LAWS is impractical at this point, particularly at the international level. Roger Cabiness, a Pentagon spokesperson, told Wired that autonomous weapons offer benefits, such as increased precision, that can actually help soldiers meet legal and ethical obligations. In the same article, Rebecca Crootof, a researcher at Yale Law School, encouraged regulation over an outright ban. “International laws such as the Geneva Convention that restrict the activities of human soldiers could be adapted to govern what robot soldiers can do on the battlefield, for example. Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, for example by killing civilians,” she told Wired.


A video from The Campaign to Stop Killer Robots explains the background of the UN's involvement with LAWS.


While autonomous weapons have already entered modern warfare, with weapons such as the MQ-9 Reaper Drone already being deployed and others, like the Modular Advanced Armed Robotic System (MAARS) from QinetiQ, in active development, the discussion around autonomous weapons began long before such things were even feasible. The 1920 play Rossum's Universal Robots (which first gave us the term “robot”) posited the idea of a rebellion of manufactured clones. And in 1942, noted science fiction author Isaac Asimov introduced the three laws of robotics in his short story, Runaround:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In the ensuing decades, killer robots and malicious AI have been the inspiration for a wide range of movies and TV shows, from Terminator to Battlestar Galactica.
