IBM Unveils Cognitive Computing Chips

According to an article at Technology Review, IBM has developed a new type of processor chip intended for use in “cognitive computing” applications.  (IBM’s press release on the technology, called SyNAPSE, for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, is here.)  What makes the new chip special is that it attempts to replicate, in hardware, the structures that carry out processing in a human brain.

… a new microchip made by researchers at IBM represents a landmark. Unlike an ordinary chip, it mimics the functioning of a biological brain—a feat that could open new possibilities in computation.

The brain’s inner workings are quite different from those of a typical computer.  It is a massively parallel processor, containing ~10^11 neurons and ~10^14 synapses connecting them.  Each neuron is, in a sense, a combination of processor and memory; as the brain learns, the strengths of the synapse interconnections change.
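To make that “processor plus memory” idea a bit more concrete, here is a toy sketch of my own (in Python, and nothing like IBM’s actual design): a single artificial neuron whose synapses are just adjustable weights, strengthened by a crude Hebbian-style rule whenever they carry a signal while the neuron is active.

    class Neuron:
        """Toy neuron: its synaptic weights serve as both its memory and
        the parameters it computes with."""

        def __init__(self, n_inputs, threshold=1.0, learning_rate=0.01):
            self.weights = [0.05] * n_inputs   # start with weak synapses
            self.threshold = threshold
            self.learning_rate = learning_rate

        def step(self, inputs):
            # "Processing": a weighted sum of the incoming signals.
            activation = sum(w * x for w, x in zip(self.weights, inputs))

            # "Learning": a crude Hebbian update; synapses whose inputs were
            # active while the neuron was active get a little stronger.
            self.weights = [w + self.learning_rate * x * activation
                            for w, x in zip(self.weights, inputs)]

            return activation >= self.threshold

    neuron = Neuron(n_inputs=4)
    pattern = [1, 0, 1, 0]
    print(neuron.step(pattern))   # False: the synapses start out weak
    for _ in range(200):
        neuron.step(pattern)
    print(neuron.step(pattern))   # True: repeated exposure has strengthened them

The point is only that the “program” and the “data” are the same thing here: the weights that do the computing are also what changes when the neuron learns.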

I’ve talked before about the brain’s ability to solve some very difficult problems, such as facial recognition or understanding speech, seemingly with little effort, while a digital computer can do some things — find the roots of a polynomial, for example — very much faster than a person can.  IBM has had some success in using conventional computer technology to tackle more “human” problems, like playing chess and winning on Jeopardy!.  But those successes required a great deal of hardware horsepower.  The Watson system that won the Jeopardy! test match used 10 racks of servers, containing 2,880 processor cores and 16 terabytes of memory.  I have not come across figures on its electricity consumption, but I’m sure it was substantial.  There are also basic physical constraints that limit how much faster and more densely packed circuit elements can become.  We now have processor chips that run at 5 GHz clock speeds; but the “clock speed” of our neurons is about 10 Hz, and the brain runs on about 10 watts.  Speed isn’t everything.
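None of those figures are precise, but a quick back-of-envelope calculation with the round numbers above (assuming each of the ~10^14 synapses sees a signal at roughly the 10 Hz rate, on a 10-watt budget) shows why raw clock speed is beside the point:

    # Rough arithmetic only, using the round numbers quoted above.
    synapses    = 1e14     # ~10^14 synapses
    firing_rate = 10       # ~10 Hz per neuron
    brain_watts = 10       # ~10 W power budget

    events_per_second = synapses * firing_rate            # ~1e15 synaptic events/s
    events_per_joule  = events_per_second / brain_watts   # ~1e14 events per joule

    cpu_clock_hz = 5e9     # one 5 GHz processor core
    print(f"Brain: ~{events_per_second:.0e} synaptic events/s on ~{brain_watts} W")
    print(f"That is ~{events_per_joule:.0e} events per joule.")
    print(f"A single 5 GHz core manages ~{cpu_clock_hz:.0e} cycles/s.")

The brain gets its throughput from massive parallelism at a very low energy cost per operation, not from running anything fast.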

The idea of replicating the processing used by the brain is not new.  It underlies the neural network approach used in artificial intelligence research.  Most of this work has traditionally been done by constructing a model of neurons and synapses in software.  The IBM research group, led by Dharmendra Modha, started out with simulations run on very large-scale computers.
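For a sense of how that software approach is usually structured, here is a deliberately naive sketch of my own (not the group’s simulator): a time-stepped model in which all the synaptic weights sit in one big matrix in memory.

    import numpy as np

    # Naive software model: the whole synaptic weight matrix lives in one big
    # array in memory, and every simulated time step the update loop has to
    # stream all of it past the processor.

    n_neurons = 1000
    rng = np.random.default_rng(0)

    weights   = rng.uniform(0.0, 0.02, size=(n_neurons, n_neurons))  # synapses
    potential = np.zeros(n_neurons)                                  # neuron state
    threshold = 1.0

    def step(spikes):
        """Advance the network by one time step, given a spike vector."""
        global potential
        potential += weights @ spikes        # touches every weight, every step
        fired = potential >= threshold
        potential[fired] = 0.0               # reset neurons that fired
        potential *= 0.95                    # leak
        return fired.astype(float)

    spikes = (rng.random(n_neurons) < 0.05).astype(float)  # 5% firing initially
    for t in range(100):
        spikes = step(spikes)

At brain-like scales that weight matrix is far too large to keep anywhere near the processor, so the machine spends most of its time shuttling synapse data back and forth; that is the bottleneck described in the next excerpt.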

Modha’s group started by modeling a system of mouse-like complexity, then worked up to a rat, a cat, and finally a monkey. Each time they had to switch to a more powerful supercomputer. And they were unable to run the simulations in real time, because of the separation between memory and processor that the new chip designs are intended to overcome.

By placing memory and processing elements in close physical proximity, and by making the “synapse” connections adjustable, the team hopes to be able to run much more complex software faster and more efficiently.  Preliminary estimates are that the new chips might reduce the energy cost of running such software by as much as a factor of 1,000.
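To illustrate the architectural idea (again, purely my own sketch, not IBM’s chip): imagine splitting the network above into small “cores,” each of which stores its own neurons’ state and incoming synaptic weights locally, so that only the compact spike messages ever travel between cores.

    import numpy as np

    # Conceptual illustration only, not IBM's architecture: each "core" keeps
    # its neurons' state and their incoming synaptic weights together, so a
    # time step only touches data that is local to the core.

    class NeuroCore:
        def __init__(self, n_local, n_total, rng):
            self.weights   = rng.uniform(0.0, 0.02, size=(n_local, n_total))
            self.potential = np.zeros(n_local)
            self.threshold = 1.0

        def step(self, incoming_spikes):
            """Integrate spikes from the whole network; the weights never
            leave the core, and they could be adjusted in place to learn."""
            self.potential += self.weights @ incoming_spikes
            fired = self.potential >= self.threshold
            self.potential[fired] = 0.0
            self.potential *= 0.95
            return fired.astype(float)

    rng = np.random.default_rng(0)
    n_total, n_cores = 1000, 4
    cores = [NeuroCore(n_total // n_cores, n_total, rng) for _ in range(n_cores)]

    spikes = (rng.random(n_total) < 0.05).astype(float)
    for t in range(100):
        # Only the spike vector crosses core boundaries; the weights stay put.
        spikes = np.concatenate([core.step(spikes) for core in cores])

The computation is the same as before; what changes is where the data lives, and that is where the hoped-for energy savings would come from.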

The research, which is being funded in part by the Defense Advanced Research Projects Agency [DARPA], is still at an early stage, but it has the potential for some ground-breaking work.  If we had a working model of a brain, we might even manage to understand a bit more about the ones inside our skulls.

The “Wired Science” blog at Wired has an interview with Dr. Modha on the team’s work.
