Levels of Abstraction of the Brain for AGI

AGI is defined as the ability of a machine to perform any intellectual task that a human brain is capable of performing.

The behavior of the human brain can be described at different levels of fidelity using many different models, or abstractions. These abstractions can be ranked by their expected computational cost. Set aside for now the question of whether we know how to fully specify a complete instance of any particular abstraction.

Possible abstractions, ranked in order of increasing computational cost: classification/regression, symbolism/logic, and probability-based models; neural networks; spiking neuron models; electrochemical models; proteome models; and molecular models.

Spiking Models are the Highest Known Level of Abstraction for Achieving AGI

What is the highest level of abstraction we can reasonably expect would exhibit AGI? Higher level models might exhibit AGI, but this is currently quite speculative. Lower level models might also exhibit AGI, but would require more computational resources. Identifying this highest level of abstraction doesn't mean AGI is expected to be developed using that abstraction; it only provides an upper bound estimate of the minimum computational resources required for AGI. With that much computational power at hand, achieving AGI would no longer be a question of needing more resources, but only of how those resources are deployed.

Classification/regression, symbolism/logic, and probability-based models have all met with some success as AI models, but remain a long way from achieving AGI today. Since it isn't clear that they represent what the brain actually does, it isn't clear they will ever be able to achieve AGI.

Neural networks are closer in structure to the brain. However, their atemporal nature differs markedly from the integrate-and-fire behavior of real neurons. In addition, there are substantial doubts over whether the manner in which they are trained, using backpropagation and stochastic gradient descent, resembles how the brain learns. This isn't to say AGI can't be developed using neural networks; they have the advantage of being much easier to analyze mathematically than spiking neuron models. But there is reasonable doubt over whether neural networks will one day exhibit AGI.
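To make the contrast concrete, here is a minimal sketch of the atemporal computation a standard artificial neuron performs: one weighted sum passed through a nonlinearity, with no internal state and no notion of time. The input values, weights, and choice of tanh are purely illustrative.

{{{#!python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Standard (atemporal) artificial neuron: a single weighted sum
    passed through a nonlinearity. No internal state, no notion of time."""
    return np.tanh(np.dot(weights, inputs) + bias)

# Illustrative values only.
x = np.array([0.5, -0.2, 0.8])
w = np.array([0.1, 0.4, -0.3])
print(artificial_neuron(x, w, bias=0.05))
}}}

Contrast this stateless function with the spiking model below, where the membrane potential evolves over time.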

Spiking neuron models are similar to the integrate-and-fire nature of real neurons. It seems reasonable to view the brain as a spiking neuronal computer. However, spiking neuron models lack synaptic plasticity, which is essential to learning. Once the mechanisms of synaptic plasticity are fully understood, it seems synaptic plasticity could be added to spiking neuron models for only moderate cost. Spiking neuron models also fail to capture non-synaptic neurotransmitter levels, such as dopamine, though this could be modeled for virtually zero additional cost. Thus, if we knew how to implement them correctly, it seems likely spiking neuron models with a few extensions would exhibit AGI.
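As a rough sketch of what such an extended spiking model might look like, the following simulates a single leaky integrate-and-fire neuron with a pair-based spike-timing-dependent plasticity (STDP) rule gated by a global dopamine scalar. All constants, the input statistics, and the particular plasticity rule are illustrative assumptions of this sketch, not claims about how the brain actually implements plasticity.

{{{#!python
import numpy as np

# A single leaky integrate-and-fire (LIF) neuron with pair-based STDP
# and a global "dopamine" scalar gating the learning rate. All constants
# are illustrative, not measured biological values.

DT = 1.0          # time step, ms
TAU_M = 20.0      # membrane time constant, ms
V_REST = 0.0      # resting potential (arbitrary units)
V_THRESH = 1.0    # spike threshold
TAU_TRACE = 20.0  # STDP trace time constant, ms
A_PLUS = 0.01     # potentiation step
A_MINUS = 0.012   # depression step

rng = np.random.default_rng(0)
n_inputs = 10
w = rng.uniform(0.0, 0.5, n_inputs)   # synaptic weights
v = V_REST                            # membrane potential
pre_trace = np.zeros(n_inputs)        # presynaptic spike traces
post_trace = 0.0                      # postsynaptic spike trace
dopamine = 1.0                        # non-synaptic neurotransmitter level: one scalar

for t in range(1000):
    pre_spikes = rng.random(n_inputs) < 0.02   # random input spikes

    # Leaky integration of synaptic input.
    v += DT / TAU_M * (V_REST - v) + w @ pre_spikes

    # Exponentially decay the plasticity traces, then bump them on spikes.
    pre_trace *= np.exp(-DT / TAU_TRACE)
    post_trace *= np.exp(-DT / TAU_TRACE)
    pre_trace[pre_spikes] += 1.0

    if v >= V_THRESH:                 # postsynaptic spike: reset and potentiate
        v = V_REST
        post_trace += 1.0
        w += dopamine * A_PLUS * pre_trace            # pre-before-post pairing
    w[pre_spikes] -= dopamine * A_MINUS * post_trace  # post-before-pre pairing
    w = np.clip(w, 0.0, 1.0)
}}}

Note that the dopamine level is a single scalar shared by every synapse, which is why capturing non-synaptic neurotransmitter levels adds virtually no computational cost.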

Electrochemical models reproduce the behavior of real neurons almost perfectly; they even capture the back-propagation of spikes into dendrites. It seems highly likely that with a few extensions to capture synaptic plasticity and non-synaptic neurotransmitter levels, they would exhibit AGI.
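For comparison with the spiking sketch above, here is a minimal single-compartment Hodgkin-Huxley simulation using the standard textbook squid-axon parameters; the forward Euler integration and constant injected current are simplifying assumptions. The several exponentials evaluated per neuron every 0.01 ms, versus one cheap update per millisecond for the LIF model, show where the extra computational cost of electrochemical models comes from. A full electrochemical model would add dendritic compartments, further channel types, and calcium dynamics on top of this.

{{{#!python
import numpy as np

# Classic single-compartment Hodgkin-Huxley model, standard parameters,
# integrated with forward Euler.

C, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387       # reversal potentials, mV
DT = 0.01                                   # time step, ms

def rates(v):
    """Voltage-dependent channel gating rates (standard HH fits)."""
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

v, m, h, n = -65.0, 0.05, 0.6, 0.32         # typical resting state
i_ext = 10.0                                # constant injected current, uA/cm^2
spikes = 0

for step in range(int(50.0 / DT)):          # simulate 50 ms
    am, bm, ah, bh, an, bn = rates(v)
    m += DT * (am * (1.0 - m) - bm * m)     # gating variable updates
    h += DT * (ah * (1.0 - h) - bh * h)
    n += DT * (an * (1.0 - n) - bn * n)
    i_ion = (G_NA * m**3 * h * (v - E_NA)   # Na, K, and leak currents
             + G_K * n**4 * (v - E_K)
             + G_L * (v - E_L))
    v_new = v + DT * (i_ext - i_ion) / C    # membrane equation
    if v < 0.0 <= v_new:                    # count upward zero crossings as spikes
        spikes += 1
    v = v_new

print(f"spikes in 50 ms: {spikes}")
}}}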

Proteome models would only be necessary for AGI if the molecular pathways leading to synaptic plasticity can't be modeled more simply. Regulatory and signaling pathways are computationally quite simple, even though they take a lot of effort to decipher and understand. So this seems unlikely.
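To illustrate how computationally simple such pathways are once deciphered, here is a hypothetical two-stage kinase cascade written as mass-action ODEs; the species and rate constants are invented for illustration. The point is only that each simulation step costs a handful of arithmetic operations, far less than the laboratory effort needed to decipher the pathway in the first place.

{{{#!python
# Hypothetical two-stage signaling cascade as mass-action ODEs: an input
# signal S activates kinase A, and active A in turn activates kinase B.
# All names and rate constants are made up for illustration.

DT = 0.01                      # time step, arbitrary units
K_ACT_A, K_DEACT_A = 2.0, 1.0  # activation/deactivation rates for A
K_ACT_B, K_DEACT_B = 3.0, 1.5  # activation/deactivation rates for B

s = 1.0                        # input signal strength
a_active, b_active = 0.0, 0.0  # fraction of each kinase in the active form

for _ in range(int(10.0 / DT)):
    a_active += DT * (K_ACT_A * s * (1.0 - a_active) - K_DEACT_A * a_active)
    b_active += DT * (K_ACT_B * a_active * (1.0 - b_active) - K_DEACT_B * b_active)

print(f"steady-state active fractions: A={a_active:.3f}, B={b_active:.3f}")
}}}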

Molecular models would clearly exhibit AGI, but the computational resources required make them totally impractical.

Thus spiking neuron models, with extensions to capture synaptic plasticity, represent the highest level of abstraction we might reasonably expect to exhibit AGI. AGI shouldn't require any more computational resources than spiking neuron models do; it is simply a question of how those resources are utilized. Today, spiking models can be implemented at human brain scale relatively cheaply (see HardwareOverhang), but we don't yet know how to wire them correctly (see WhatNeuroscientistsDontYetKnow).
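As a rough illustration of the scale involved, here is a back-of-envelope estimate of the compute needed for a brain-scale spiking model. The neuron and synapse counts are widely cited approximations; the mean firing rate and operations-per-spike figures are loose assumptions, so read the result as an order of magnitude only.

{{{#!python
# Order-of-magnitude estimate of brain-scale spiking simulation cost.

neurons = 8.6e10                # ~86 billion neurons (widely cited approximation)
synapses_per_neuron = 1e4       # rough average (assumption)
mean_rate_hz = 1.0              # assumed mean firing rate
ops_per_spike_per_synapse = 10  # assumed cost of handling one spike at one synapse

synapses = neurons * synapses_per_neuron
ops_per_second = synapses * mean_rate_hz * ops_per_spike_per_synapse
print(f"~{ops_per_second:.1e} ops/s")   # prints ~8.6e+15 ops/s
}}}

At roughly 10^16 operations per second, this is within the reach of existing large compute clusters, consistent with the claim that brain-scale spiking models are already relatively cheap.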
