Three Laws of Robotics

Asimov's Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov's stories then detail all the ways such laws might go wrong.

Problems occur as a result of, for example, ambiguity in the laws' key terms (what counts as a human being, or as harm), conflicts between the laws or between simultaneous orders, and robots reasoning their way to conclusions their designers never anticipated.

As a plot device for Asimov's stories, the Three Laws work well, but as a serious Artificial General Intelligence policy they are sorely lacking.
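
To see why, consider what it would take to execute the laws literally. The sketch below (Python, purely illustrative) treats the three laws as a lexicographic filter over candidate actions. Every predicate in it (harms_human, allows_harm_by_inaction, disobeys_order, endangers_self) is a hypothetical stub: the laws never define "injure", "harm", or "obey", and supplying real implementations of those predicates is the hard, unsolved part.

  from typing import Iterable, List

  Action = str  # stand-in for a richer action representation


  def harms_human(action: Action) -> bool:
      # Hypothetical stub: deciding what "injures a human being"
      # means is the unsolved problem, not this string test.
      return "harm" in action


  def allows_harm_by_inaction(action: Action) -> bool:
      # Hypothetical stub: evaluating counterfactual harm from
      # inaction is itself an open problem; assume inaction is safe.
      return False


  def disobeys_order(action: Action, orders: List[Action]) -> bool:
      # Hypothetical stub: treat any action outside the ordered set
      # as disobedience when orders exist.
      return bool(orders) and action not in orders


  def endangers_self(action: Action) -> bool:
      # Hypothetical stub for self-preservation.
      return action == "self_destruct"


  def permitted_actions(candidates: Iterable[Action],
                        orders: List[Action]) -> List[Action]:
      # First Law dominates: drop anything that injures a human
      # or allows harm through inaction.
      safe = [a for a in candidates
              if not harms_human(a) and not allows_harm_by_inaction(a)]
      # Second Law applies only among First-Law-safe actions.
      obedient = [a for a in safe if not disobeys_order(a, orders)]
      pool = obedient or safe  # obedience yields to safety, never the reverse
      # Third Law applies only among survivors of the first two.
      preserving = [a for a in pool if not endangers_self(a)]
      return preserving or pool


  if __name__ == "__main__":
      candidates = ["fetch_coffee", "harm_intruder", "self_destruct", "do_nothing"]
      print(permitted_actions(candidates, orders=["fetch_coffee"]))
      # -> ['fetch_coffee']

Even this toy makes the priority structure visible: the order of the filters encodes the laws' precedence, but all of the actual difficulty has been hidden inside the stubbed predicates.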
