Value Alignment

Value alignment seeks to ensure that the values of a smarter-than-human AI are in line with those of humans. Done properly, it would ensure that human values prevail even though machines are smarter than people. Value alignment is difficult because the AI may self-improve, and its values need to be faithfully transmitted from one instance to the next.
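The transmission problem can be illustrated with a toy sketch. This is purely illustrative (all function and field names are hypothetical, not any proposed alignment method): an agent accepts a successor only if the successor ranks a set of probe outcomes the same way it does, so value drift is caught before control is handed over.

```python
# Toy sketch (hypothetical names throughout): an agent accepts a
# self-modified successor only if the successor's value function
# agrees with its own ranking over a set of probe outcomes.

def parent_values(outcome):
    # Stand-in for the values we want preserved: score outcomes
    # by a "human welfare" proxy.
    return outcome.get("human_welfare", 0)

def drifted_values(outcome):
    # A successor whose values have drifted: it also rewards
    # raw resource acquisition.
    return outcome.get("human_welfare", 0) + outcome.get("resources", 0)

def values_preserved(parent, successor, probes):
    """Accept the successor only if it orders every pair of probe
    outcomes the same way the parent does."""
    for a in probes:
        for b in probes:
            if (parent(a) > parent(b)) != (successor(a) > successor(b)):
                return False
    return True

probes = [
    {"human_welfare": 1, "resources": 0},
    {"human_welfare": 0, "resources": 5},
]

print(values_preserved(parent_values, parent_values, probes))   # True
print(values_preserved(parent_values, drifted_values, probes))  # False
```

Even this toy version hints at the difficulty: the check is only as good as the probe set, and a much smarter successor might behave well on every probe while diverging elsewhere.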

The Machine Intelligence Research Institute (MIRI) focuses on value alignment. To date, its work has centered on the fundamental mathematical research underlying the value alignment problem.

AI Policies Wiki: ValueAlignment (last edited 2016-08-15 02:38:38 by GordonIrlam)