Research Funding

The human brain provides a working example of human-level intelligence. Attempts to reverse-engineer its detailed workings therefore carry some risk of enabling the development of human-level or smarter-than-human AI. Many brain-related research areas could have undesirable consequences, but the risks vary widely.

In Europe, the Human Brain Project is a large-scale effort to better understand the human brain, with the explicit aims of improving the treatment of disease and developing new computing technologies.

In the United States, the BRAIN Initiative is a similarly large-scale project focused on better understanding the human brain, with the primary goal of improving the treatment of disease. Most of its sub-projects develop and apply tools for studying the operation of the brain. An exception is IARPA's Machine Intelligence from Cortical Networks (MICrONS) program, which aims to reverse-engineer the computations of cortical circuits in order to advance machine learning.

Another IARPA project, Knowledge Representation in Neural Systems (KRNS), seeks to understand how the human brain encodes knowledge.
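To make the "encoding" question concrete, the sketch below shows one common style of analysis in this area: decoding which concept a subject is thinking about from a noisy neural-activity pattern. Everything here is invented for illustration (the concepts, the 50-dimensional "recordings", the nearest-centroid decoder); it is not KRNS's actual methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each concept evokes a characteristic activity pattern
# across 50 recorded channels, observed with noise.
concepts = ["hammer", "apple"]
prototypes = {c: rng.normal(size=50) for c in concepts}

def observe(concept, noise=0.5):
    """Simulate one noisy recording of the activity pattern for a concept."""
    return prototypes[concept] + rng.normal(scale=noise, size=50)

# "Train": average several noisy observations per concept into a centroid.
centroids = {c: np.mean([observe(c) for _ in range(20)], axis=0) for c in concepts}

def decode(activity):
    """Nearest-centroid decoding: pick the concept whose centroid is closest."""
    return min(centroids, key=lambda c: np.linalg.norm(activity - centroids[c]))

print(decode(observe("hammer")))
```

If this kind of decoding works for a brain, the same machinery applies even more directly to an artificial system, which is what makes KRNS-style research relevant to safety below.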

Separate from the BRAIN Initiative is the $76m (through 2011) DARPA SyNAPSE project, which has funded IBM's TrueNorth project and HRL's Brain-Machine Intelligence lab to develop hardware inspired by spiking neurons.
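To illustrate what "spiking neuron inspired" means, here is a minimal leaky integrate-and-fire (LIF) neuron, the textbook model family this hardware loosely draws on. This is an illustrative sketch, not TrueNorth's actual neuron model; the parameters are arbitrary.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Step the membrane potential over time and record spike times."""
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * (v - v_rest) + v_rest + i   # leaky integration of input
        if v >= v_thresh:                      # threshold crossing fires a spike
            spikes.append(t)
            v = v_rest                         # reset after the spike
    return spikes

# A constant drive of 0.2 charges the membrane until it crosses threshold,
# producing a regular spike train.
print(simulate_lif([0.2] * 30))  # → [6, 13, 20, 27]
```

Unlike conventional hardware, chips built around this abstraction communicate with sparse spike events rather than dense numeric activations, which is the source of their power efficiency.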

Research trajectory

The above research could lay the groundwork for brain-inspired AI, although a number of further steps would be required before human-level AI might be achieved.

It seems unlikely that human-level AI will be achieved by scanning a complete brain; the cost appears prohibitive (see the cost estimate in Uploading). Instead, it might be achieved by scanning small sections of brain tissue to deduce general wiring principles, and building a computer model on those principles.
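The "wiring principles" approach can be sketched as follows: measure connection statistics (e.g. connection probability between cell types) in a small tissue sample, then instantiate a much larger synthetic network with the same statistics rather than copying a full scan. The cell types and probabilities below are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical measured statistics: probability that a neuron of one type
# (E = excitatory, I = inhibitory) connects to a neuron of another type.
measured = {("E", "E"): 0.1, ("E", "I"): 0.3, ("I", "E"): 0.4, ("I", "I"): 0.2}

def build_network(n_excitatory, n_inhibitory):
    """Instantiate a synthetic network whose per-type connection
    probabilities match the measured statistics."""
    neurons = ([("E", i) for i in range(n_excitatory)]
               + [("I", i) for i in range(n_inhibitory)])
    edges = []
    for pre in neurons:
        for post in neurons:
            if pre != post and random.random() < measured[(pre[0], post[0])]:
                edges.append((pre, post))
    return neurons, edges

neurons, edges = build_network(80, 20)
print(len(neurons), "neurons,", len(edges), "connections")
```

The point of the sketch is the scaling argument: the statistics come from a sample far smaller than the network being built, which is what makes this route cheaper than whole-brain scanning.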

Complementary research

It may or may not be necessary to understand how the brain encodes information in order to create brain-inspired AI. However, projects like KRNS that attempt to work this out are extremely valuable regardless: understanding how the brain encodes information may make it possible to "probe" or "audit" brain-inspired AI systems for safety. For biological reasons it is virtually impossible to tell what a human knows or is thinking, but with a neuromorphic system in which every synaptic weight is known, and in which scenarios can be replayed over and over, it might be much easier.
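The auditing advantage can be made concrete with a toy network: every weight is inspectable, and replaying the same scenario produces exactly the same internal state, so an auditor can examine precisely how the system responds to a probe stimulus. The network and probe below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 8))   # every "synaptic weight" is known and inspectable
W2 = rng.normal(size=(8, 2))

def forward(x):
    """Run the network, returning both the output and all internal activity."""
    hidden = np.tanh(x @ W1)
    output = hidden @ W2
    return output, hidden

probe = np.array([1.0, 0.0, 0.0, 0.0])

# Replay the same scenario twice: unlike a biological brain, the internal
# state is bit-for-bit identical each time, so it can be audited directly.
out_a, hidden_a = forward(probe)
out_b, hidden_b = forward(probe)
assert np.array_equal(hidden_a, hidden_b) and np.array_equal(out_a, out_b)
print("replay is deterministic; hidden units:", np.round(hidden_a, 2))
```

Neither determinism nor full weight visibility is available for a biological brain, which is the asymmetry the paragraph above relies on.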

If programs like MICrONS and TrueNorth succeed in advancing AI toward human-brain-like performance, it is imperative that they be matched dollar for dollar by funding for AI safety. AI safety is not something that can be added on later.

Alternative research

It might be argued that, until we are certain advanced AI can be developed safely, we should focus on areas of brain research that carry less risk of undesirable consequences than some of the projects above. Some possible examples:

AI Policies Wiki: ResearchFunding (last edited 2017-01-28 01:21:13 by c-73-222-28-52)