What Neuroscientists Don't Yet Know

Neuroscience is the study of how the brain works. If we knew how the brain worked we could replicate it. IBM's TrueNorth project at one point had plans on its roadmap to build a system with four times as many neurons as the human brain. The problem is not one of hardware but one of software; we don't yet understand how the brain is wired and thus how it works. If we understood the algorithms and data structures used by the brain we could mimic them in silicon and software. Such mimicry need not be an exact replica; it need only capture the essential details, as is the case with NeuralNetworks. Neuroscience is thus important because it helps determine how much hardware might be required to replicate human-brain-level performance, and it provides one path towards the construction of Artificial General Intelligence (AGI). With this in mind, here is a rundown of what neuroscientists don't yet know:

Clearly it will be a long time before we fully understand how the brain works. There are a lot of questions, and not all of them may need to be answered to achieve AGI. One thing is clear: the search space of possible neural-network-derived AGI architectures is very large (a rough illustration follows below). Most neurally inspired AGI projects can only attempt to pick answers to one or two of the above questions. This suggests that if something rivaling the complexity of the brain is necessary for AGI, it will be a long time before AGI is achieved. The question, then, is how much of this complexity is really necessary for AGI?
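To get a feel for how quickly that search space grows, here is a rough back-of-the-envelope sketch in Python. The number of open design questions and the number of plausible answers per question are assumptions chosen purely for illustration, not figures drawn from the neuroscience literature; the point is only that the number of candidate architectures grows exponentially with the number of unresolved questions.

    # Back-of-the-envelope illustration of how fast the architecture search
    # space grows. Both counts below are illustrative assumptions.
    open_questions = 20        # hypothetical number of unresolved design questions
    answers_per_question = 3   # hypothetical plausible answers to each question

    candidate_architectures = answers_per_question ** open_questions
    print(f"{candidate_architectures:,} candidate architectures")  # 3,486,784,401

Even under these modest assumptions the space cannot be explored exhaustively, which is one way to understand why individual projects end up committing to answers for only a question or two.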

How much of this complexity is necessary for AGI?

The argument against complexity being necessary for AGI is that evolution works through a process of local optimization that typically leads to globally inefficient solutions. The entire reptilian brain might be capable of being replaced by a tiny piece of neocortex, but isn't, both because there is no evolutionary advantage to doing so and because a lot of ancillary processing baggage has come to rest on the reptilian brain and would then need to be handled some other way. In short, evolution produces ugly-hack type solutions.

The argument for complexity being necessary for AGI is that if we seek to build a machine that displays human-level abilities across the full range of human intellectual domains, then AGI is likely going to need to possess the full range of human functionality, including functionality that might not be optimal from a pure intelligence perspective but is necessary for the AI to fit in and excel in the present, human-defined world.

A middle ground might be that it will be relatively easy to develop simple AI architectures that greatly outperform humans in many domains but fail in others, while creating true AGI (which by definition is competitive with humans across all domains) would require much more complex, and much more difficult to develop, architectures that more closely mimic the human brain.
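To make concrete just how much today's simple architectures abstract away, here is a minimal Python sketch of what a standard artificial neural network keeps of a biological neuron: a weighted sum of inputs followed by a nonlinearity. The input and weight values are arbitrary and purely illustrative.

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # The standard artificial-neuron abstraction: a weighted sum of
        # inputs plus a bias, passed through a sigmoid nonlinearity.
        return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

    # Three "synaptic" inputs with hand-picked, purely illustrative weights.
    inputs = np.array([0.5, -1.2, 0.8])
    weights = np.array([0.9, 0.3, -0.5])
    print(artificial_neuron(inputs, weights, bias=0.1))

Everything else about a real neuron, such as its dendritic structure, spike timing, and neuromodulation, is discarded; whether that level of abstraction can suffice for AGI is exactly the open question above.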

Further reading
