The Impact of Neuroscience Research on AGI Risk
Neuroscience research has played an important role in the development of AI and will likely play an important role in the development of AGI. Historically, neuroscience research has proceeded at a slow pace, but there are a number of big initiatives that could speed things up.
Historically AI was AGI
Historically, Artificial Intelligence (AI) research was about creating a machine that achieved or exceeded human-level intelligence. Over the past decade, as the field met with increasing success at solving real-world problems that couldn't be solved using conventional programming techniques, many AI practitioners have focused on applying AI techniques to these problems. There is overlap, and the distinction isn't precise, but there is a much smaller community of Artificial General Intelligence (AGI) researchers that continues to focus on the original goal. Most AGI researchers are attempting to extend techniques that are currently meeting with success. A few are trying an entirely different approach.
Historically AI was driven by neuroscience
Intelligence today is best exemplified by the human brain. It should come as no surprise then that many past breakthroughs in AI came by taking insights into how the brain works and copying them with various levels of fidelity.
Looking at AI as it is practiced today many of the key ideas are related to neuroscience:
- the perceptron / unit - a highly simplified abstraction of a neuron
- deep learning - similar to the stages of the visual processing pathway
- reinforcement learning - in the brain dopamine plays the role of a temporal difference error signal
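The dopamine analogy in the last bullet can be made concrete. Below is a minimal, purely illustrative sketch of a TD(0) update, where the temporal difference error plays the role attributed to dopamine; the states, reward, and learning rate are made-up toy values, not anything from the neuroscience literature:

```python
# Toy TD(0) value learning. The td_error term is the quantity that
# dopamine is hypothesized to signal in the brain.
def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) update: move V(state) toward reward + gamma * V(next_state)."""
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error
    return td_error

# Two states; a reward of 1.0 is received on the transition 0 -> 1.
V = [0.0, 0.0]
delta = td_update(V, state=0, next_state=1, reward=1.0)  # delta = 1.0, V[0] becomes 0.1
```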
Not all key ideas in AI are however related to neuroscience:
- backpropagation and stochastic gradient descent - the way artificial neural networks learn today seems biologically implausible; the concepts here are derived from calculus
- Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) - the way RNNs and LSTMs are used in AI today seems quite different from the brain's recurrent networks
- support vector machines - a significant amount of machine learning success has come from the application of mathematical regression techniques to large data sets; to date this hasn't been integrated with the artificial neural network AI paradigm
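To illustrate the calculus-derived side of the first bullet, here is a minimal sketch of gradient descent on a one-dimensional toy objective (the function and step size are my own illustrative choices):

```python
# Gradient descent on f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# In a neural network, backpropagation is what supplies this gradient.
def gradient_descent(grad, w, lr=0.1, steps=100):
    for _ in range(steps):
        w -= lr * grad(w)  # step downhill along the gradient
    return w

w_final = gradient_descent(lambda w: 2 * (w - 3), w=0.0)  # converges toward 3.0
```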
Historically AI has been largely driven by the combination of neuroscience and math.
Neuroscience provides a pathway to AGI
Mimicking the brain at an appropriate level of fidelity clearly provides one pathway to AGI.
The brain is very complex, and we don't really understand how it works. We know it is composed of 10^11 neurons wired together. But the number of possible ways in which 10^11 neurons might be wired together is enormous. This makes it impossible to perform a brute force search of neurally inspired AGI architectures. One way to make progress is to take new insights from neuroscience, in effect using neuroscience as a guide as to which architectures to search over.
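Some back-of-envelope arithmetic shows just how enormous that search space is. Even under the crude simplifying assumption that wiring is just a yes/no choice for each ordered pair of neurons, the count is astronomical:

```python
import math

# Illustrative arithmetic only: treat a "wiring diagram" as a yes/no
# choice for each ordered pair of neurons.
neurons = 10**11
ordered_pairs = neurons**2                     # 10^22 potential directed connections
log10_diagrams = ordered_pairs * math.log10(2) # log10 of 2^(10^22) wiring diagrams
# log10_diagrams is about 3e21, i.e. a number with ~3 * 10^21 digits.
```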
Not many individuals span both neuroscience and computer science, but there are enough that there don't appear to be any ideas from neuroscience that someone hasn't attempted to bring over to computer science. Google's DeepMind is probably the world's leading AGI research company. DeepMind was successful in combining the concepts of neural networks and reinforcement learning, and it is headed by a neuroscientist. OpenAI, another leading AGI company, doesn't appear to employ any neuroscientists.
Hardware is not the bottleneck
Computer science neural network units lose some fidelity when compared to spiking neural networks, which more closely reflect how the brain works. IBM's TrueNorth spiking neural network hardware packs 1 million neurons on a chip. IBM's roadmap envisions 16 chips on a board, 256 boards in a rack, and 96 racks in a cluster of 4×10^11 neurons. That is 4 times the number of neurons in the human brain. Pricing is hard to come by, but a reasonable estimate for human-brain-level performance is somewhere around $80 per hour. The problem, of course, is that we don't know how to use software to wire the neurons of TrueNorth together. If spiking neurons are simply needed by the brain to implement a population code, then human-brain-level performance could be had far more cheaply using computer science neural network units.
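The cluster-scale neuron count follows directly from the roadmap figures quoted above:

```python
# Checking the neuron count implied by the TrueNorth roadmap figures:
# 1M neurons/chip, 16 chips/board, 256 boards/rack, 96 racks/cluster.
neurons_per_chip = 1_000_000
chips_per_cluster = 16 * 256 * 96                       # 393,216 chips
cluster_neurons = chips_per_cluster * neurons_per_chip  # ~3.9e11 neurons
human_brain_neurons = 1e11
ratio = cluster_neurons / human_brain_neurons           # ~3.9x the human brain
```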
It seems unlikely that a fidelity higher than spiking neural networks is needed to achieve AGI (see LevelsOfAbstractionOfTheBrainForAGI). Synaptic plasticity, such as is required for Hebbian learning, exists at a lower level than spiking models, but it can be implemented at low cost in either a spiking or unit-based model.
What neuroscientists don't know stands between us and AGI
If hardware isn't the bottleneck, then the only remaining obstacle is that we haven't figured out the correct architecture for AGI. We could derive a functional architecture if we understood how the brain works.
Today there is much that neuroscientists don't know (see WhatNeuroscientistsDontYetKnow), and the pace of neuroscience has been slow. The brain is the only organ whose workings we don't really understand. However, it isn't necessary to figure out everything about how the brain works to derive a functional AGI architecture. We might be 1 to 3 key neuroscience-derived insights away from AGI. The pace of basic neuroscience might also be about to pick up.
Applied AI may accelerate or impede AGI
All of the resources spent on applied AI may accelerate or impede AGI. Acceleration comes from the focus on speeding up the AI hardware AGI might one day run on, as well as possible research spillover between applied AI and AGI.
Impediments are a result of applied AI sweeping up many of the problems that might otherwise have required additional steps towards AGI. TrueNorth is a good example of this. Its spiking hardware was able to be hand-programmed to detect pedestrians in a self-driving car application. A few years ago this would have been a major selling point, but today its ability to do this appears inferior to that of unit-based applied AI. If spiking hardware is needed for AGI, a proposition I tend to doubt, then the maturity of unit-based applied AI will have drawn resources away from a path towards AGI.
The atomic bomb and the moon weren't reached through a series of $100k-1m grants. And the brain is too complex to be figured out as a grad school project, or through a single-PI grant. Yet this is traditionally how neuroscience has been funded. This is changing. There are now a number of large-scale efforts to understand the brain. Understanding the brain at the intermediate (meso) to large (macro) scale will require large amounts of data, analysis, coordination, and money.
In the US the main organizations funding neuroscience research are the NIH and NSF. The NIH's $6.7b in neuroscience-related funding for 2017 appears to be very largely aimed at understanding disease: mental illness, stroke, and Alzheimer's. Of the first 50 FY2016 grants I scanned, only 2 appeared related to furthering understanding of how the brain works independent of disease. In 2017 the NSF reported $150m in awarded grants containing the words "neuroscience", "neuron", "neural", or "synapse" (after excluding grants made by the computer science directorate, which frequently contain the words "neural networks"). U.S. non-disease neuroscience research thus might be somewhere around $400m/year. In 2015 U.S. research and development funding across all areas amounted to 41% of the OECD global total. Thus the global non-disease neuroscience research budget is probably around $1,000m/year.
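The arithmetic behind these estimates is simple enough to lay out explicitly (the 2-out-of-50 fraction comes from the informal grant scan described above, so treat the result as a rough order of magnitude, not a measurement):

```python
# Back-of-envelope reconstruction of the funding estimate above.
nih_neuro_funding = 6.7e9
non_disease_fraction = 2 / 50      # from the informal scan of 50 grants
nsf_neuro_funding = 150e6

# NIH non-disease share plus NSF grants: ~$418m, call it ~$400m/year.
us_non_disease = nih_neuro_funding * non_disease_fraction + nsf_neuro_funding

# Scale up by the U.S. share (41%) of OECD global R&D: ~$1,020m/year.
us_share_of_global_rd = 0.41
global_non_disease = us_non_disease / us_share_of_global_rd
```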
Cross-checking the estimate of $1b/year for basic neuroscience research: PubMed reports 25,000 medical articles published in 2017 using the same search terms as before in the title/abstract. 11 out of the first 50 articles I scanned with these terms appeared to be basic neuroscience, that is, unrelated to disease or to computer science neural networks. That suggests roughly 5,500 basic neuroscience papers per year. In medicine, productivity per researcher is roughly 0.5 papers per year, implying around 11,000 researcher-years behind those papers. If we guesstimate the all-up cost per researcher at $150k/year, this amounts to roughly $1.7b/year, the same order of magnitude as the earlier number.
Considering the world's population is pushing 8b, this is very small: on the order of $0.10 per person per year.
There are a number of new, currently national, initiatives aimed at understanding the brain:
Human Brain Project - a 10-year $1,200m European Commission project started in 2013. Focus tends to be towards modeling the brain. It doesn't explicitly mention AI as a goal, but includes a sub-project working on spiking neuromorphic hardware.
BRAIN Initiative - a 10-year $110m/year U.S. project launched in 2013 but starting in 2016. Focus tends to be towards imaging and mapping the neurons in the brain. It is not focused on AI, but MICrONS, a $100m sub-project of the BRAIN Initiative, "seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain".
Brain/MINDS - a duration-unknown size-unknown Japanese project started in 2014. Focus tends to be on mapping the neurons in the brain. No direct AI focus.
China Brain Project - a 15 year size-unknown Chinese project started in 2016. Three focus areas: understanding how the brain works, treating brain disease, and developing machines with human intelligence.
Only the China Brain Project has a heavy AGI focus, but all of them carry the same risk. Once the way the brain works is understood, it will be relatively easy to translate it into AGI.
These new projects are significant, and they create new sources of funding for basic neuroscience, but equally importantly they serve to focus existing sources of funding on the question of how the brain performs cognition. None of them, however, appears to be pursued with the same resources or determination as the Manhattan or Apollo projects.
Will big neuroscience lead to AGI?
Whether these projects will deliver is difficult to determine. It seems very likely they will be successful in mapping the brain. But whether that can be turned into insight into how the brain works is less clear. There are several possibilities:
- We fail to uncover any organizational principles. All we can do is mimic the brain without understanding how it works. This mimicry would require the use of NEURON, or some other Hodgkin-Huxley simulator, the computational demands of which are too great to be of much use.
- We successfully translate the organizational principles of the brain directly into spiking hardware. This would produce AGI at a cost of perhaps $80 per human-brain-hour today, but with computing costs falling by roughly a factor of 10 every 10 years, it would quickly become cheaper than human labor.
- The neocortex uses a non-population code and we successfully translate the organizational principles of the brain into computer science neural network unit hardware. This likely provides cost-performance broadly similar to the previous outcome. It is likely that this level of understanding would also allow us to develop AGIs that are more powerful than a human brain (at higher cost).
- The neocortex uses a population code and we successfully translate the organizational principles of the brain into computer science neural network unit hardware. This likely provides performance 10 to 100 times faster, cheaper, or more intelligent than that of the previous outcome.
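The cost trajectory assumed in the second scenario can be sketched numerically. The $80 per human-brain-hour starting point and the factor-of-10-per-decade decline come from the text above; the $15/hour human wage used as a comparison point is my own illustrative assumption:

```python
import math

# Cost of one human-brain-hour of AGI compute, t years from now,
# assuming $80/hour today falling by 10x every 10 years.
def agi_cost_per_hour(years_from_now, today=80.0, decade_factor=10.0):
    return today / decade_factor ** (years_from_now / 10)

# Years until the cost falls below a hypothetical $15/hour human wage:
wage = 15.0
years_to_parity = 10 * math.log10(80.0 / wage)  # roughly 7 years
```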
Will we understand AGI?
Neuroscience derived AGI is likely to be difficult to debug and we are unlikely to fully understand it. Other approaches to AGI, such as those explored by MIRI, seem more robust, but are in their infancy.
A lack of deep understanding of how the brain works means we might be able to mimic the brain, but not vastly outperform it. There are dangers here. It will be difficult to predict how the AGI will behave.
A deeper understanding of the brain will probably allow us to better understand how AGI behaves, and to develop AGI that is orders of magnitude more powerful than the human brain. There are dangers here too: the concentration of power that might result could become too great. History is replete with examples of harm caused by a concentration of power, and very few examples where a concentration of power has benefited everyone in the world. That doesn't mean a negative outcome will occur this time around, but it does mean we should tread very carefully.
The funding choice
It seems likely that someday we will figure out how the brain works, and with it AGI, assuming AGI hasn't been reached by then already. Spending more on neuroscience, or making understanding the brain a national goal, will accelerate the process, while spending less, and not having a national goal to understand the brain, will delay it. But it isn't simply about spending; it is also about what money gets spent on. Spending on brain disease is benign. Spending on mimicry might be more dangerous. And some areas of neuroscience research, such as most work on understanding how the brain works, are double-edged: the more we understand, the better we can predict how AGI will behave, but also, likely, the sooner great concentrations of power will result.