I am responding to item (11) of the Federal Register notice and item (9) of the White House website RFI: additional information.
I wish to address the current state of neuromorphic computing, its trajectory, and its implications with respect to human-level or smarter-than-human AI. I do this not out of a belief that the neuromorphic approach will beat the machine learning approach to AI, but because the neuromorphic approach defines an upper bound within which we should reasonably expect human-level or smarter-than-human AI to emerge. The neuromorphic approach involves learning how the human brain works, and then replicating it, in full or in part, in silicon. I restrict consideration to spiking neuromorphic models. Such models represent neurons as entities that perform simple computations and then either fire or not. It is highly plausible that the human brain can be described reasonably well by such models, although this is by no means certain.
By way of a preamble, I will note that humans dominate the planet not because we are stronger, but because we are smarter. Left unchecked, the development of smarter-than-human AI, neuromorphic or not, is likely to result in a serious threat to public well-being. It seems unlikely that human values, such as love and compassion, would carry over to a world with human-level or smarter-than-human AI, since these values appear to be evolutionarily encoded forms of genetic self-interest, not a consequence of intelligence.
There are three issues that determine whether the neuromorphic approach will prove successful:
1. Is it feasible to implement the human brain or something smarter in hardware?
2. Do we understand how the human brain works well enough to be able to implement it?
3. If it is feasible, how economical is it to do so?
Addressing the first of these issues: the feasibility of implementing a human brain in hardware. In 2014 a reported 2.5x10^20 transistors were manufactured worldwide, and this number was growing 10-fold every 5 years. A typical human brain contains around 86x10^9 neurons. IBM's neuromorphic chip, TrueNorth, uses approximately 5,400 transistors per real-time spiking neuron. Thus there were enough transistors manufactured in 2014 alone to produce the equivalent of roughly 540,000 human brains. This could be one powerful superintelligence, or many smaller ones. In other words, to the extent that the neuromorphic approach is limited by the performance of silicon, these constraints are rapidly disappearing.
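These figures imply a simple back-of-envelope calculation, sketched below. All numbers are those quoted above; the projection function merely applies the quoted 10-fold-every-5-years growth rate.

```python
# Back-of-envelope check of the figures quoted above.
transistors_2014 = 2.5e20        # transistors manufactured worldwide in 2014
neurons_per_brain = 86e9         # neurons in a typical human brain
transistors_per_neuron = 5_400   # TrueNorth, per real-time spiking neuron

transistors_per_brain = neurons_per_brain * transistors_per_neuron
brain_equivalents = transistors_2014 / transistors_per_brain
print(f"{brain_equivalents:,.0f} human-brain equivalents")  # roughly 540,000

def brain_equivalents_in(years_after_2014):
    """Project brain equivalents forward, assuming transistor output
    keeps growing 10-fold every 5 years (the rate quoted above)."""
    return brain_equivalents * 10 ** (years_after_2014 / 5)
```

The point of the projection is that whatever the exact constants, annual transistor output measured in human-brain equivalents grows by an order of magnitude every five years.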
Addressing the second issue: do we understand the details of how the human brain works? We understand the big picture (the thalamus, the amygdala, the cerebral cortex, and so on) and roughly what each component does. We also understand how individual neurons work at a very detailed level. However, at the intermediate level, how neurons are wired together to form assemblies of tens of thousands to billions of neurons, we know very little. This is likely to change.
The U.S. BRAIN Initiative is a signature big-science research initiative whose primary goal is to better understand the brain in order to better treat diseases. A notable exception to this goal is the 5-year, $100M IARPA MICrONS sub-project, which "seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain". The technical goal of MICrONS is to produce a wiring diagram, or connectome, for 1mm^3 of cortical tissue (roughly one cortical column). It is widely believed that the neocortex is composed of a simple circuit, the cortical column, which is repeatedly replicated, and the neocortex is believed to be responsible for most cognitive functions. It is thus only a relatively small step from understanding 1mm^3 of brain tissue to understanding almost the complete brain. We are not talking about needing to scan a complete human brain, the cost of which appears prohibitive for the foreseeable future, but a single 1mm^3, which is technically quite feasible.
A plausible research trajectory might be:
1. Determine the wiring pattern for one particular cortical column
2. Map the wiring pattern of one particular cortical column to a set of general wiring principles
3. Develop wiring principles for other brain regions
4. Understand the extent to which synapses are dynamic connections that vary over time
5. Gain a better understanding of learning, memory, and how the brain encodes information
6. Develop human-level or smarter-than-human AI
If present policies of funding such work continue, a reasonable time frame over which such work might occur is perhaps 15-25 years, with the first three steps occurring relatively quickly, steps 4 and 5 being harder to predict, and step 6, the development of a computer as smart as or vastly smarter than a human, being straightforward given the preceding steps.
The final issue to address is economics. Today, if we knew how the brain was wired, we could achieve neuromorphic human-level AI for an estimated cost of around $700/hr. This estimate is based on an order-of-magnitude cost estimate of $50 for IBM's neuromorphic TrueNorth chip if produced in volume; since the chip is proprietary, a precise figure is difficult to determine. Computing costs decline by around a factor of 10 every 8 years. Thus, assuming the current research trajectory, by 2040 we should be prepared for human-level AI costing around $0.70/hr, and superintelligence costing $7/hr. If true, this would have broad societal impact.
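The cost projection follows from simple compounding, sketched below. The baseline year of 2016 and the treatment of superintelligence as ten human-brain equivalents are assumptions made for illustration; the figures taken from above are the $700/hr baseline and the factor-of-10-per-8-years cost decline.

```python
# Cost-decline extrapolation for neuromorphic human-level AI.
BASELINE_YEAR = 2016          # assumption: "today" in the text is taken as 2016
BASELINE_COST_PER_HR = 700.0  # $/hr for human-level AI at the baseline
DECLINE_FACTOR = 10           # computing costs fall 10x...
DECLINE_PERIOD_YEARS = 8      # ...every 8 years

def cost_per_hr(year):
    """Projected $/hr for neuromorphic human-level AI in a given year."""
    elapsed = (year - BASELINE_YEAR) / DECLINE_PERIOD_YEARS
    return BASELINE_COST_PER_HR / DECLINE_FACTOR ** elapsed

# Assumption for illustration: a superintelligence costs ~10 human-brain
# equivalents, giving the $7/hr figure alongside $0.70/hr by 2040.
SUPERINTELLIGENCE_MULTIPLIER = 10

print(f"2040 human-level AI:   ${cost_per_hr(2040):.2f}/hr")
print(f"2040 superintelligence: ${SUPERINTELLIGENCE_MULTIPLIER * cost_per_hr(2040):.2f}/hr")
```

With a 2016 baseline, 2040 is exactly three 8-year halving-by-ten periods out, so the cost falls by a factor of 1,000, from $700/hr to $0.70/hr.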
In conclusion, the risks of different neuroscience projects vary widely. I suggest that the U.S. adopt a policy of funding non-health-focused neuroscience projects like MICrONS only if we can be sure the benefits outweigh the risks associated with the research trajectory they place us on.