Three Laws of Robotics

Asimov's Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov's stories then explore the many ways such laws can go wrong.

Problems arise in defining words such as injure, harm, obey, human, and robot; in the many possible undesirable states about which the three laws say nothing; and in how to enforce the laws, or ensure they are adhered to.
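
One way to read the laws' "does not conflict with" clauses is as a lexicographic priority ordering over constraints. The sketch below (in Python; all names and stub predicates are hypothetical, invented purely for illustration) makes that structure explicit. It also makes the definitional problem concrete: everything hinges on predicates like harms_human that the sketch, like the laws themselves, leaves essentially undefined.

  # A minimal sketch of the Three Laws as a lexicographically ordered
  # constraint hierarchy. Every name here is hypothetical; the stub
  # predicates stand in for exactly the terms (injure, harm, obey,
  # human, robot) that are hard to define.

  from typing import Callable, Iterable, List, Optional

  Law = Callable[[str], bool]  # True when the action violates the law

  def harms_human(action: str) -> bool:      # First Law (stub)
      return action in {"strike_human", "ignore_drowning_human"}

  def disobeys_order(action: str) -> bool:   # Second Law (stub)
      return action in {"ignore_order"}

  def endangers_self(action: str) -> bool:   # Third Law (stub)
      return action in {"walk_into_fire"}

  # Lower index = higher priority.
  LAWS: List[Law] = [harms_human, disobeys_order, endangers_self]

  def worst_violation(action: str) -> int:
      """Index of the highest-priority law the action violates,
      or len(LAWS) if it violates none (higher is better)."""
      for i, law in enumerate(LAWS):
          if law(action):
              return i
      return len(LAWS)

  def choose(actions: Iterable[str]) -> Optional[str]:
      """Pick the action whose worst violation is least serious.
      This mirrors the 'except where such orders would conflict'
      clauses: disobeying an order (Second Law) beats harming a
      human (First Law)."""
      return max(actions, default=None, key=worst_violation)

  print(choose(["strike_human", "ignore_order"]))  # -> "ignore_order"

Note that this choose picks the least-bad action even when every option violates some law; the laws as stated say nothing about what to do then, which is one instance of the undesirable states the laws leave unaddressed.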
