Spiking Neuron Models

Spiking neuron models simulate a neuron as an entity that receives inputs via its dendrites, integrates them, and then makes a binary decision whether to fire. Firing results in messages being sent to postsynaptic neurons. Spiking neuron simulations typically use a time step of around 1ms.
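The integrate-then-fire behavior described above can be sketched as a minimal leaky integrate-and-fire (LIF) neuron. This is an illustrative sketch only; the parameter values below are arbitrary and not taken from any particular model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron with a 1 ms time step.
# All parameter values are illustrative assumptions.

def simulate_lif(input_current, dt=0.001, tau=0.02,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the list of time-step indices at which the neuron fired.

    input_current: sequence of input drive values, one per time step.
    tau: membrane time constant in seconds (20 ms here).
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward the resting potential, plus input.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_threshold:     # the binary firing decision
            spikes.append(t)     # here a real simulator would message
            v = v_reset          # postsynaptic neurons; then reset
    return spikes

# A constant drive eventually pushes the membrane over threshold.
spikes = simulate_lif([0.06] * 100)
```

With this constant input the membrane potential climbs toward threshold and the neuron fires periodically; the firing decision itself is only a few arithmetic operations per time step.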

Neural architecture

Hardware does not appear to be the bottleneck to human brain level performance. See also HardwareOverhang.

What is missing?

The Human Brain Project and the BRAIN Initiative aim to make progress in these areas, especially the first.

Communication not FLOPS

Communication of firing events from a neuron to its postsynaptic neurons is the bottleneck for spiking neuron models, not floating point performance (FLOPS). With around 1,000 synapses per neuron, communicating each firing event to the postsynaptic neurons swamps the time taken deciding whether to fire.
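A back-of-the-envelope comparison makes the point. The numbers below are illustrative assumptions (roughly 1,000 synapses per neuron, one main-memory access per synaptic message, and on the order of a thousand arithmetic operations per firing decision), chosen only to show how the ratio arises:

```python
# Rough communication-vs-computation cost comparison per neuron firing.
# All numbers are illustrative assumptions, not measurements.

synapses_per_neuron = 1_000    # fan-out assumed in the text above
ns_per_synaptic_message = 100  # assumed main-memory latency per message
ops_for_decision = 1_000       # assumed arithmetic ops per firing decision
ns_per_op = 1                  # assumed ~1 ns per scalar operation

communication_ns = synapses_per_neuron * ns_per_synaptic_message  # 100,000 ns
computation_ns = ops_for_decision * ns_per_op                     # 1,000 ns

ratio = communication_ns / computation_ns  # 100x: communication dominates
```

Under these assumptions communication costs exceed computation costs by a factor of 100, consistent with the estimate below.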

Thus the communication costs exceed the computation costs by a factor of at least 100 today. This points to the importance of non-von Neumann architectures, such as IBM's TrueNorth, for spiking neuron models.

Note that the concerns raised here do not apply to deep learning NeuralNetworks, because most layers in deep learning networks have only a few inputs per unit.

A word of warning on FLOPS

FLOPS are commonly measured using the Linpack benchmark, which doesn't stress the memory/communication hierarchy. Making matters worse, Intel's version of Linpack uses vector instruction extensions, greatly boosting performance over regular Linpack, and vector extensions are useless for synaptic communication. The impact of vector extensions can be seen by observing that an Amazon EC2 c4.xlarge instance obtains 46 GFLOPS on the Intel Linpack benchmark despite having a single core and a clock rate of only 2.9GHz: it is achieving 16 FLOPs per cycle. Focusing on Intel's Linpack-measured FLOPS thus understates the computing needs of a spiking neuron simulation by perhaps a factor of 1,600 today. This number is likely to increase in the future, as main memory latencies aren't keeping up with system clock rates and Linpack will be further vector optimized.
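The arithmetic behind these figures can be checked directly; the factor of 100 below is the communication-to-computation ratio estimated earlier:

```python
# Checking the arithmetic in the Linpack discussion above.

gflops = 46       # Intel Linpack result quoted for an EC2 c4.xlarge
clock_ghz = 2.9   # quoted clock rate

flops_per_cycle = gflops / clock_ghz  # ~15.9, i.e. about 16

comm_vs_compute = 100  # communication/computation factor from earlier
understatement = round(flops_per_cycle) * comm_vs_compute  # 1,600
```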

AI Policies Wiki: SpikingModels (last edited 2018-01-14 07:12:11 by GordonIrlam)