The human brain remains one of the great frontiers of science: how does this organ, upon which we all depend so critically, actually do its job? A great deal is known about the underlying technology - the neuron - and we can observe brain activity in vivo on a number of scales through techniques such as magnetic resonance imaging, neural staining and invasive probing, but this knowledge - a tiny fraction of the information that is actually there - barely begins to tell us how the brain works from a perspective that we can understand and manipulate. Something is happening at the intermediate levels of processing that we have yet to begin to understand, and the essence of the brain's information-processing function probably lies in these intermediate levels. One way to get at these middle layers is to build models of very large systems of spiking neurons, with structures inspired by the increasingly detailed findings of neuroscience, in order to investigate the emergent behaviours, adaptability and fault tolerance of those systems.
What has changed, and why could we not do this ten years ago? Multi-core processors are now established as the way forward on the desktop, and highly-parallel systems have been the norm for high-performance computing for a considerable time. In a surprisingly short space of time, industry has abandoned the exploitation of Moore's Law through ever more complex uniprocessors, and is embracing a 'new' Moore's Law: the number of processor cores on a chip will double roughly every 18 months. If projected over the next 25 years this leads inevitably to the landmark of a million-core processor system. Why wait?
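The projection above is simple compound doubling. A minimal sketch, in which the starting core count and doubling period are illustrative assumptions rather than figures from the text:

```python
def cores_after(years, start_cores=8, doubling_period_years=1.5):
    """Projected cores per chip after `years`, assuming the count
    doubles every 18 months (the 'new' Moore's Law above).
    `start_cores=8` is a hypothetical present-day baseline."""
    return start_cores * 2 ** (years / doubling_period_years)

# 25 years of doubling every 18 months is ~16.7 doublings,
# a factor of roughly 100,000 - hence the million-core landmark.
print(f"{cores_after(25):,.0f}")
```

With these assumed starting conditions the projection lands on the order of a million cores, which is the arithmetic behind the landmark claimed above.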
We are building a system containing a million ARM9 cores - not dissimilar to the processor found in many mobile phones. Whilst this is not, in any sense, a powerful core, it has characteristics that make it ideal for an assembly of the kind we are undertaking. With a million cores, we estimate we can sensibly simulate - in real time - the behaviour of a billion neurons. Whilst this is less than 1% of a human brain, in the taxonomy of brain sizes it is certainly not a primitive system, and it should be capable of displaying interesting behaviour.
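The capacity estimate above implies a fixed modelling budget per core, which can be checked directly (the figures are those stated in the text; the per-core division is the only step added here):

```python
# Implied per-core load for the real-time target above:
# one million cores modelling one billion neurons.
cores = 1_000_000
neurons = 1_000_000_000

neurons_per_core = neurons // cores
print(neurons_per_core)  # each core must model ~1,000 neurons in real time
```

So each ARM9 core must update roughly a thousand spiking neurons fast enough to keep pace with biology, which is the sense in which a modest core is "ideal" here: the work divides into many small, independent real-time budgets.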
A number of design axioms of the architecture are radically different to those of conventional computer systems - some would say they are downright heretical. The architecture turns out to be elegantly suited to a surprising number of application arenas, but the flagship application is neural simulation; neurobiology inspired the design.
This biological inspiration draws us to two parallel, synergistic directions of enquiry; significant progress in either direction will represent a major scientific breakthrough:

• How can massively parallel computing resources accelerate our understanding of brain function?

• How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation?