
Enabling simulations of large parts of the brain

9th March 2018
Enaie Azambuja

Brain activity simulations are a critical part of neuroscience research, but advances in this type of computing have been held back by the same thing that frustrates pretty much anything you use a computer for – namely, memory. The future of supercomputing promises immense resources for technologies such as the neuronal network simulator, NEST. The challenge today is to work out how to make optimal use of these resources.

In order to simulate large parts of the brain, at the resolution of single neurons and their connections, more memory is needed than current supercomputers have available.

KTH neuroinformatics researcher Susanne Kunkel recently co-authored a study that breaks down this major barrier to constructing bigger neuronal simulations, while cutting the time required to simulate large networks by more than half.

The study was published in the journal Frontiers in Neuroinformatics, and the technology will be made available as open source in one of the next releases of NEST, a simulation code in widespread use by the neuroscience community.

Kunkel says the work, which also involved scientists from RIKEN in Wako and Kobe, Japan, and the Jülich Research Centre, in Jülich, Germany, aimed to find a new delivery system for neuronal signals in computer simulations – one in which the memory required of each node no longer increases with the size of the network.

The solution addresses a problem that has taken on greater significance as neuronal network simulations have scaled up to the tens of thousands of compute nodes available in modern supercomputing facilities.

In a typical simulation, multiple virtual neurons are created in each node, and they fire off signals to be received by other neurons, just as in an actual brain.
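
To make that concrete, here is a minimal sketch of what such a simulation looks like when driven through NEST's Python interface, PyNEST. The population size, neuron model and parameter values below are chosen purely for illustration and are not taken from the study:

```python
import nest

nest.ResetKernel()

# Create a population of leaky integrate-and-fire neurons. In a large-scale
# run, NEST distributes these across the available compute nodes (MPI processes).
neurons = nest.Create("iaf_psc_alpha", 10_000)

# Sparse random connectivity: each neuron receives input from 100 others.
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 100},
             syn_spec={"weight": 20.0, "delay": 1.5})

# Poisson background input so the network actually produces spikes.
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
nest.Connect(noise, neurons, syn_spec={"weight": 20.0, "delay": 1.5})

# Simulate one second of biological time.
nest.Simulate(1000.0)
```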

However, because networks are built without enabling the neurons to exchange information at the outset – when they are created and connected – a signal from any neuron in the network has to be sent to all compute nodes and each node has to find out whether there is a target neuron to which it can deliver the signal. This requires a few bits of information for every neuron in the network on every node.

“For a network of 1 billion neurons a large part of the memory is consumed by just these few bits of information per neuron,” Kunkel says. The larger the neuronal network, the greater the amount of computer memory that is required in each node for the extra bits per neuron.
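
A rough back-of-the-envelope calculation illustrates the scale of this overhead. The figure of two bits per neuron used below is an assumption for illustration, not a number taken from the study:

```python
# Illustrative estimate of the per-node memory spent on per-neuron marker bits
# in the old delivery scheme ("a few bits" is taken here to mean 2 bits).
network_size = 1_000_000_000     # one billion neurons in the whole network
bits_per_neuron = 2              # assumed bookkeeping bits per neuron, per node

bytes_per_node = network_size * bits_per_neuron // 8
print(f"~{bytes_per_node / 1e6:.0f} MB per compute node")               # ~250 MB

# The overhead is paid on every node, however few neurons each node hosts.
nodes = 100_000
print(f"~{bytes_per_node * nodes / 1e12:.0f} TB across {nodes} nodes")  # ~25 TB
```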

She says that at this scale the delivery system becomes inefficient. Each neuron has only 10,000 target neurons, which means that on most nodes it has no targets at all. So each node works through all incoming signals only to find that most of them are irrelevant.
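
In highly simplified pseudocode, the old delivery step works roughly like the sketch below. The data structures and names are invented for illustration; the actual NEST kernel is written in C++ and considerably more involved:

```python
from collections import namedtuple

Spike = namedtuple("Spike", ["sender_id"])

class Neuron:
    def receive(self, spike):
        pass  # stand-in for adding the spike to the neuron's input buffer

def deliver_spikes_old(incoming_spikes, local_targets):
    """Old scheme: every node sees every spike in the network and checks
    each one against its own, mostly empty, table of local targets."""
    delivered = 0
    for spike in incoming_spikes:                    # spikes from *all* nodes
        targets = local_targets.get(spike.sender_id)
        if targets is None:                          # most lookups end here
            continue
        for neuron in targets:
            neuron.receive(spike)
            delivered += 1
    return delivered

# Toy run: 1,000 spikes arrive, but this node hosts targets for only one sender.
local_targets = {42: [Neuron(), Neuron()]}
spikes = [Spike(sender_id=i) for i in range(1000)]
print(deliver_spikes_old(spikes, local_targets), "deliveries from",
      len(spikes), "spikes scanned")
```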

“It’s a bit like sending an insurance salesperson to solicit door-to-door in a neighborhood where most households don’t need insurance,” Kunkel says.

The new delivery system remedies this inefficiency, she says. At the build stage of the network, the technology allows the nodes to exchange information about who needs to send neuronal activity data to whom.

Once this knowledge is available, the exchange of neuronal signals between nodes can be organised in such a way that a node only receives the information it requires. Additional bits for each neuron in the network, signaling presence or absence of target neurons on each node, are no longer necessary, she says.
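
The sketch below gives a correspondingly simplified picture of the new approach, again with invented names rather than the actual NEST data structures. During the build phase each node records which other nodes host targets of its neurons, so at run time every spike travels only to the nodes that need it:

```python
from collections import defaultdict

def build_routing_table(local_connections, host_node_of):
    """Build phase: for each local neuron, record the set of nodes that
    actually host one of its targets (information exchanged once, when
    the network is created and connected)."""
    routing = defaultdict(set)
    for source_id, target_ids in local_connections.items():
        for target_id in target_ids:
            routing[source_id].add(host_node_of[target_id])
    return routing

def route_spikes_new(spiking_neurons, routing):
    """Run phase: spikes are sent only to nodes known to contain targets,
    so no node has to scan and discard irrelevant traffic."""
    outgoing = defaultdict(list)            # destination node -> spikes to send
    for source_id in spiking_neurons:
        for node in routing[source_id]:
            outgoing[node].append(source_id)
    return dict(outgoing)

# Toy run: neuron 0 targets neurons 5 and 9, which live on nodes 1 and 3.
local_connections = {0: [5, 9]}
host_node_of = {5: 1, 9: 3}
routing = build_routing_table(local_connections, host_node_of)
print(route_spikes_new([0], routing))       # {1: [0], 3: [0]}
```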

Because simulation technology that works well on a supercomputer might still perform poorly for smaller neuronal network simulations, Kunkel says that one of the challenges of the project was to develop technology that works efficiently on ordinary laptops and moderately sized clusters as well as on future supercomputers.


Discover more here.

Image credit: KTH.
