Levels of Abstraction of the Brain for AGI
AGI is defined as the ability of a machine to perform any of the intellectual tasks a human brain is capable of performing.
The behavior of the human brain can be described with different levels of fidelity using many different models or abstractions. These abstractions can be ranked according to their expected computational cost. Ignore for now the issue of whether we know how to fully specify a complete instance of a particular abstraction.
Possible abstractions ranked in order of increasing computational cost:
- Classification/regression models: model the behavior of the brain as classifying data based on its properties.
- Symbolic/logic models: model the brain as manipulating a few million symbols and predicates.
- Probability based models
- Models the brain as performing Bayesian inference to update an internal model of the world based on observations.
- Neural network models: model the brain with units which crudely represent neurons.
- The human brain has 100 billion neurons, so a brain-scale model may need a similar number of units.
- If the neural code is a population code, it may be possible to get by with 10 to 100 times fewer units.
- Spiking neuron models: model neurons performing discrete firing events.
- Similar computational cost to neural networks if the neural code is not population based.
- 10 to 100 times more expensive than neural networks if the neural code is population based.
- Electrochemical models: model neurons using Hodgkin-Huxley electrophysiology and chemical diffusion equations.
- Estimate a 25us timestep is needed instead of the 1ms timestep of spiking models. Creates a 40 fold slowdown.
- Estimate 10 ions to track. 10 fold slowdown.
- Estimate 200 chemical compartments. 200 fold slowdown.
- Synaptic communication no longer a bottleneck. Perhaps 100 fold speed up.
- Around 1,000 times more expensive than spiking models.
- Proteome models: model concentrations of not just ions but all the different molecules in each neuron, mostly proteins.
- Estimate 10,000 to 100,000 molecules to track including phosphorylated variants, not 10 ions.
- Around 1,000 to 10,000 times more expensive than electrochemical models.
- Molecular models: model individual protein molecules, possibly including their position and conformation.
- Estimate the weight of the brain at 1.4kg; say 300g after excluding the ~75% that is water.
- Estimate average length of a protein at 500 amino acids.
- Estimate molecular weight of an amino acid at 100 Daltons.
- Estimate the mass of a hydrogen atom at 2x10^-24g.
- Around 3x10^21 protein molecules in the brain.
- The proteome model was only simulating 10,000 to 100,000 molecules in each of 200 compartments. 10^14 to 10^15 fold slowdown.
- Estimate need 1ns timestep instead of 25us. 25,000 fold slowdown.
- Around 10^19 times more expensive than proteome models.
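The arithmetic behind these estimates can be checked with a short script. Every input is a rough figure assumed above, not a measured value:

```python
# Back-of-envelope reproduction of the estimates above.

# Protein molecules in the brain (molecular-model estimate):
dry_mass_g = 300                      # 1.4 kg brain, less the ~75% that is water
protein_mass_g = 500 * 100 * 2e-24    # 500 amino acids x 100 Da x ~2x10^-24 g
n_proteins = dry_mass_g / protein_mass_g
print(f"protein molecules: {n_proteins:.1e}")        # ~3x10^21

# Electrochemical vs spiking models:
# timestep factor x ions tracked x compartments / synaptic speedup
electro_slowdown = (1e-3 / 25e-6) * 10 * 200 / 100
print(f"electrochemical: ~{electro_slowdown:.0f}x")  # ~800, i.e. around 1,000x

# Molecular vs proteome models, following the text's arithmetic:
tracked = 1e5 * 200                   # molecules x compartments in a proteome model
print(f"molecule-count factor: ~{n_proteins / tracked:.1e}x")
print(f"timestep factor: {25e-6 / 1e-9:.0f}x")       # 25 us -> 1 ns
```

Multiplying the molecule-count and timestep factors recovers the roughly 10^19 figure for molecular relative to proteome models.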
Spiking Models Are the Highest Known Level of Abstraction for Achieving AGI
What is the highest level of abstraction we can reasonably expect to exhibit AGI? Higher level models might exhibit AGI, but that is currently quite speculative. Lower level models might also exhibit AGI, but would require more computational resources. The highest such level of abstraction isn't necessarily the one AGI is expected to be developed with. It only provides an upper bound estimate of the minimum computational resources required for AGI. With that many computational resources at hand, achieving AGI would no longer be a question of needing more resources, but only of how those resources are deployed.
Classification/regression, symbolism/logic, and probability based models have all met with some success as AI models, but are a long way from achieving AGI today. Since it isn't clear that they represent what the brain actually does, it isn't clear they will ever be able to achieve AGI.
Neural networks are closer in structure to the brain. However, their atemporal nature differs markedly from the integrate-and-fire nature of real neurons. In addition, there are substantial doubts over whether the manner in which they are trained, using back propagation and stochastic gradient descent, is similar to how the brain learns. This isn't to say AGI can't be developed using neural networks. They have the advantage of being much easier to analyze mathematically than spiking neuron models. But there are reasonable doubts over whether neural networks will one day exhibit AGI.
Spiking neuron models are similar to the integrate-and-fire nature of real neurons. It seems reasonable to view the brain as a spiking neuronal computer. Spiking neuron models lack synaptic plasticity, which is essential to learning. Once the mechanisms of synaptic plasticity are fully understood, it seems synaptic plasticity could be added to spiking neuron models for only moderate cost. Spiking neuron models also fail to capture non-synaptic neurotransmitter levels, such as dopamine. These could be modeled, though, for virtually zero additional cost. Thus, if we knew how to implement them correctly, it seems likely spiking neuron models with a few extensions would exhibit AGI.
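The integrate-and-fire dynamic can be sketched with a minimal leaky integrate-and-fire neuron, stepped at the 1ms timestep assumed for spiking models above. All parameters here are illustrative, not fitted to real neurons:

```python
# Minimal leaky integrate-and-fire neuron, forward-Euler at a 1 ms timestep.
# Parameters are illustrative placeholders, not biologically fitted values.

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07, r=1e8):
    """Return spike times (s) for a per-step input current sequence (A)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward rest plus input drive, integrated with forward Euler.
        v += dt / tau * (v_rest - v + r * i_in)
        if v >= v_thresh:           # threshold crossing: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

spike_times = simulate_lif([3e-10] * 1000)  # 1 s of constant 0.3 nA drive
print(len(spike_times), "spikes in 1 s")
```

The per-step cost is a handful of arithmetic operations per neuron, which is what makes the 1ms-timestep spiking abstraction comparatively cheap.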
Electrochemical models reproduce the behavior of real neurons almost perfectly. They even capture the back-propagation of spikes into dendrites. It seems highly likely that, with a few extensions to capture synaptic plasticity and non-synaptic neurotransmitter levels, they would exhibit AGI.
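To illustrate the kind of equations an electrochemical model integrates, here is a single-compartment Hodgkin-Huxley neuron using the standard textbook rate constants, stepped at the 25us timestep estimated above. This is a sketch only: no chemical diffusion and no multi-compartment dendritic structure:

```python
import math

# Single-compartment Hodgkin-Huxley neuron, forward-Euler at a 25 us timestep.
# Standard textbook parameters; units are mV, ms, uA/cm^2, mS/cm^2.

def simulate_hh(i_ext, t_ms=50.0, dt=0.025):
    """Count spikes produced by a constant injected current i_ext (uA/cm^2)."""
    c_m = 1.0
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    v, m, h, n = -65.0, 0.053, 0.596, 0.317   # approximate resting state
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        # Voltage-dependent gate rate constants (standard HH fits).
        a_m = 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        b_m = 4.0 * math.exp(-(v + 65) / 18)
        a_h = 0.07 * math.exp(-(v + 65) / 20)
        b_h = 1.0 / (1 + math.exp(-(v + 35) / 10))
        a_n = 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        b_n = 0.125 * math.exp(-(v + 65) / 80)
        # Ionic currents through sodium, potassium, and leak channels.
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        # Forward-Euler update of membrane voltage and gating variables.
        v += dt / c_m * (i_ext - i_na - i_k - i_l)
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        # Count upward crossings of 0 mV as spikes.
        if v > 0 and not above:
            spikes += 1
        above = v > 0
    return spikes

print(simulate_hh(10.0), "spikes in 50 ms")   # tonic firing above rheobase
```

Even this bare membrane model needs several exponentials per neuron per 25us step, which is where the roughly thousandfold cost increase over spiking models comes from once ions and compartments are added.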
Proteome models would only be necessary for AGI if the molecular pathways leading to synaptic plasticity can't be modeled more simply. Regulatory and signaling pathways are computationally quite simple, even though they take a lot of effort to decipher and understand. So this seems unlikely.
Molecular models would clearly exhibit AGI, but the computational resources required make them totally impractical.
Thus spiking neuron models with extensions to capture synaptic plasticity represent the highest level of abstraction we might reasonably expect to exhibit AGI. AGI shouldn't require any more computational resources than spiking neuron models. It is simply a question of how those resources are utilized. Today, spiking models can be implemented at human brain scale relatively cheaply (see HardwareOverhang), but we don't yet know how to wire them correctly (see WhatNeuroscientistsDontYetKnow).