Why advanced AI is a potential problem
Humans dominate the planet not because we are stronger, but because we are smarter. Left unchecked, the development of smarter-than-human AI is likely to result in the establishment of a new world order. It seems unlikely that human values, such as love and compassion, would carry over to this new order, since these appear to be evolutionarily encoded forms of genetic self-interest, not a consequence of intelligence.
It isn't clear that the computer science "problem solving" approach to AI (logic, support vector machines, Bayesian inference, genetic programming) will ever create a general-purpose intelligence. This doesn't mean it won't, but these techniques might only ever produce domain-specific intelligence.
One approach that does appear likely to succeed in producing a general intelligence is NeuralAI, and in particular neuromorphic computing: understanding how the brain works, and building systems that model it. The open question is how long this approach will take. Here are our contingent estimates:
2040: superintelligence costing $7/hr or less, human-level AI costing $0.70/hr or less (see Prediction for basis of estimate)
The hardest part of making such a prediction is estimating how long it will be before we understand how the brain is wired. These predictions should therefore be read as conditional: if superintelligence or human-level AI is developed by then, it will cost the specified amount. The Human Brain Project and the BRAIN Initiative are focusing heavily on understanding how the brain is wired.
There are three classes of problem to be concerned with:
- side-effects - e.g. most jobs being replaced by cheaper AI technology. Side-effects result from advanced AI changing individuals' available choices, costs, and outcomes.
- malevolence - e.g. smarter-than-human AI being used to design biological weapons
- unexpected consequences - the many ways smarter-than-human AI might misconstrue our goals
Side-effects and malevolence can be understood by analyzing the impact of advanced AI on social power.
Even if there were only a 10% chance that advanced AI would cause a seriously deleterious outcome, the potential magnitude of the problem (7.4 billion lives on planet Earth) would make the expected risk larger than anything humanity has so far faced, and would justify devoting serious resources to making sure we get advanced AI right.
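The expected-risk argument above can be sketched as a back-of-the-envelope calculation. This is a minimal illustration, not a risk model: the 10% probability and the 7.4 billion population figure are the numbers used in the text, and the function name is our own.

```python
def expected_lives_at_risk(p_bad_outcome: float, lives_affected: float) -> float:
    """Expected magnitude of a risk: probability of the bad outcome
    multiplied by the number of lives it would affect."""
    return p_bad_outcome * lives_affected

# Figures from the text: 10% chance, 7.4 billion lives.
risk = expected_lives_at_risk(0.10, 7.4e9)
print(f"{risk:,.0f}")  # prints 740,000,000
```

Even under a modest probability, the sheer scale of the downside dominates the expected value, which is the point of the argument.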
We have a history of responding to problems after they occur, but failing to take proactive steps to prevent them beforehand. Unfortunately, with smarter-than-human AI it seems unlikely we will be able to respond successfully after the problem occurs.