According to IBM, “BlueMatter, a new algorithm created by IBM researchers in collaboration with Stanford University, exploits the Blue Gene supercomputing architecture in order to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.”
Computers capable of mimicking the human brain’s power and efficiency could be just 10 years off, according to a leading researcher at IBM.
According to the researcher, Dharmendra Modha, manager of IBM’s cognitive computing initiative, scientists from his company and some of the world’s most prestigious universities have already managed to simulate the computing complexity of the feline cortex. That feat suggests a day, not too far off, when it will be possible to scale up to what the human brain can accomplish.
Last year, IBM and five universities were awarded a DARPA contract to work on a cognitive computing project aimed at eventually achieving that goal. Just a year later, Modha said, his team, working in conjunction with the universities’ scientists, has achieved two major milestones.
The first was a real-time cortical simulation comprising more than 1 billion spiking neurons and 10 trillion individual learning synapses, a scale that, according to Modha, exceeds that of a cat’s cortex.
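The simulation itself runs on a Blue Gene supercomputer and its details are not given here, but the basic unit it models, a spiking neuron connected to others by weighted synapses, can be sketched at toy scale. The network size, parameters, and dynamics below are illustrative assumptions, not IBM’s model; this is a minimal leaky integrate-and-fire network, one common textbook abstraction of a spiking neuron.

```python
import numpy as np

def simulate_lif(n_steps=200, n_neurons=5, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, i_ext=0.06, seed=0):
    """Toy leaky integrate-and-fire network: each neuron integrates
    external input plus spikes arriving over random synaptic weights,
    fires when its membrane potential crosses threshold, then resets."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(0.0, 0.2, size=(n_neurons, n_neurons))
    np.fill_diagonal(weights, 0.0)          # no self-connections
    v = np.full(n_neurons, v_rest)          # membrane potentials
    spikes = np.zeros((n_steps, n_neurons), dtype=bool)
    for t in range(n_steps):
        # synaptic current from spikes emitted on the previous step
        syn = weights @ spikes[t - 1] if t > 0 else np.zeros(n_neurons)
        # leaky integration toward rest, driven by external + synaptic input
        v += dt / tau * (v_rest - v) + i_ext + syn
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = v_rest                   # reset after a spike
    return spikes

spikes = simulate_lif()
print("total spikes:", int(spikes.sum()))
```

Scaling such a loop from five neurons to a billion, with ten trillion synapses that also adapt over time, is precisely where the supercomputing architecture comes in.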
Second, the scientists developed a new algorithm, called BlueMatter, that maps the connections between all of the human brain’s cortical and sub-cortical locations. That mapping is a critical step, Modha suggested, toward a true understanding of how the brain communicates and processes information.
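BlueMatter’s internals are not described in the article, but the kind of output a wiring map yields can be pictured as a weighted graph of brain regions. The sketch below is a loose illustration under that assumption; the region names and fiber counts are hypothetical, and the structure is not IBM’s.

```python
from collections import defaultdict

class Connectome:
    """Toy wiring map: an undirected weighted graph of brain regions."""
    def __init__(self):
        self.edges = defaultdict(dict)

    def add_connection(self, region_a, region_b, fiber_count):
        # symmetric: diffusion tractography yields undirected fiber bundles
        self.edges[region_a][region_b] = fiber_count
        self.edges[region_b][region_a] = fiber_count

    def strongest_partner(self, region):
        """Return the region most densely connected to `region`."""
        partners = self.edges[region]
        return max(partners, key=partners.get) if partners else None

c = Connectome()
c.add_connection("V1", "V2", 900)    # illustrative fiber counts
c.add_connection("V1", "LGN", 1200)
c.add_connection("V2", "MT", 400)
print(c.strongest_partner("V1"))     # → LGN (on this toy data)
```

A full human wiring diagram would replace these few hand-entered edges with connection estimates between every pair of cortical and sub-cortical locations, which is what makes the mapping computationally demanding.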
The human brain, Modha said, is fundamentally different from today’s computers in power and size, and he and the many scientists he is working with are eager to learn from the brain how to build new kinds of computing architectures. Part of the reason, he added, is that as our world gets more and more complex, a “tsunami” of data is being produced and analyzing those data demands “a new kind of cognitive system, a brain-like system, to make sense of it.”
To achieve the goal, Modha and his fellow scientists are combining supercomputing, neuroscience, and nanotechnology research to demonstrate what’s possible, and their work has progressed from the granting of the DARPA contract to these milestones in just a year.
Modha said that examples of what could be done with computers working at this scale are realistic analysis of the world’s water supply systems, or financial systems. The idea is to detect causality behind phenomena, and to make those connections quickly and effortlessly, the way the human brain works. Writing such a program using today’s computers would be impossible, he said, but these future computers would be able to quickly distill answers to these kinds of enormous problems.
There’s no promise, of course, that Modha and his colleagues will be able to close the gap between the power of the cat and human cortices in the next decade. After all, the two differ by a factor of 20. But he sounded optimistic that a decade is a realistic goal.
But regardless of the timing, the aim is clear: reverse-engineer the human brain and learn its computational algorithms. And then deploy them in a bid to solve some of the world’s most complicated computing problems.