AI Policies Wiki
We seek to propose and evaluate policies on near-human-level and smarter-than-human artificial intelligence.
Anyone is welcome to participate!
The password required before editing is intended only to stop the link spammers who were vandalizing the site.
Background
An introduction to advanced AI
The Problem with advanced AI
NeuralNetworks-based AI is evolving extremely rapidly
- However, there are reasonable doubts about whether there is a solid path from neural networks to AGI
- Lessons from neuroscience - neuroscience provides a second path to AGI
TheImpactOfNeuroscienceResearchOnAGIRisk - neuroscience research could lead to AGI
WhatNeuroscientistsDontYetKnow - much is still unknown, but not all of it may need to be understood to achieve AGI
LevelsOfAbstractionOfTheBrainForAGI - spiking neuron models, if implemented correctly, would achieve AGI
- Expectations for the future
Prediction of when advanced AI might be common
Assessing HardwareOverhang - AGI will be relatively cheap once developed
Economic and financial effects of advanced AI
Evaluating policies
Although this is supposed to be a policy site, the background section is much more fleshed out than this one, which remains relatively crude. This reflects the difficulty of formulating AGI policy.
- Prevent
Asimov's ThreeLawsOfRobotics appear unlikely to work
Ban KillerRobots
Develop a safety Box to control smarter-than-human AI
Regulate the development of human-level AI
Nationalize smarter-than-human AI
- Delay
ControlAccess to specific AI related technologies or information
Reduce ResearchFunding for specific risky technologies
Eliminate R&D tax credits or Tax AI research
Patent key technologies and only license them under certain terms
- Accelerate
AI Safety research
Promote socially beneficial AI ResearchNorms
Establish a ThreatLevelFramework to report on how serious and imminent the AI risk might be
ValueAlignment to ensure smarter-than-human AI has the same goals as humans
OpenResearch to put all AI researchers on a more even footing
Accelerate the development of smarter-than-human AI
Smarter-than-human AI as a NationalGoal
Deploy a global BasicIncome
Lobby for appropriate governmental policies
Run/vote/support candidates for PoliticalOffice
- Study
Create a government AdvisoryCommittee
Reporting requirements for organizations seeking to develop human-level AI
MetaResearch exploring the risks of human-level AI
Establish a ClearingHouse to collate the results of such research
At this stage, it is important not to advocate any one policy, but to explore the range of policy options available. No single policy appears to offer a panacea, but together multiple policies might form a framework that helps appropriately shape or control advanced AI.
Cost-effectiveness
A preliminary cost-effectiveness ranking of the different policies:
|                  | cheap/easy        |                | expensive/hard |
| highly effective |                   |                |                |
|                  | ResearchFunding   |                |                |
| weakly effective | AdvisoryCommittee |                |                |
These rankings are likely to change as more information comes to hand. For example, it is not yet certain whether ControlAccess or OpenResearch helps or harms; it is partly a question of framing. Is it worth risking that the world goes boom sooner in order to reduce the risk of the world going boom overall? The answer depends on how likely you think the world is to ultimately go boom, and on how you value future lives.
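A minimal sketch of that trade-off, written in Python. All of the numbers (the probabilities of catastrophe under each stance and the value placed on the long-run future) are hypothetical placeholders chosen only to show the structure of the argument, not estimates endorsed by this wiki.

# Illustrative expected-value comparison of two stances toward a risky
# policy such as OpenResearch or ControlAccess. Every number below is a
# hypothetical placeholder.

def expected_value(p_doom, value_if_ok, value_if_doom=0.0):
    # Expected value of the future given a probability of catastrophe.
    return (1 - p_doom) * value_if_ok + p_doom * value_if_doom

# Stance A: accept a higher chance of an *early* catastrophe in exchange
# for a lower chance of catastrophe overall (hypothetical: 20%).
p_doom_accelerate = 0.20
# Stance B: lower near-term risk, but a higher chance of catastrophe
# overall (hypothetical: 30%).
p_doom_delay = 0.30

# How much weight you give the long-run future (arbitrary units) dominates
# the comparison; discounting future lives shrinks the difference.
value_of_future = 1000.0

for name, p in [("accelerate", p_doom_accelerate), ("delay", p_doom_delay)]:
    print(name, expected_value(p, value_of_future))

Varying the placeholder probabilities, or the weight given to future lives, flips which stance comes out ahead, which is exactly why the framing question above matters.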
Documents
2016 White House Office of Science and Technology Policy Request for Information on Artificial Intelligence submission