Neurokernel: Emulating the Drosophila brain on multiple GPUs
In this lecture, the speaker demonstrates Neurokernel's module interfacing feature by using it to integrate independently developed models of olfactory and vision LPUs based upon experimentally obtained connectivity information.
Workshop lecture at Neuroinformatics 2014 in Leiden, The Netherlands
Workshop title: Open collaboration in computational neuroscience
Talk title: Neurokernel: Emulating the Drosophila brain on multiple GPUs
Speaker: Aurel A. Lazar
Talk abstract
The brain of the fruit fly Drosophila melanogaster is an extremely attractive model system for reverse engineering the emergent properties of neural circuits because it implements complex sensory-driven behaviors with a nervous system comprising five orders of magnitude fewer components than the nervous systems of mammals. A powerful toolkit of well-developed genetic techniques and advanced electrophysiological recording tools enables the fly's behavior to be experimentally linked to the function of its neural circuitry.
To enable neuroscientists to build on these strengths of fly brain research, surmount the structural complexity of the fly brain, and create an accurate model of the entire brain, we have developed an open-source platform called Neurokernel for the collaborative development of comprehensive fly brain models and their execution and testing on multiple Graphics Processing Units (GPUs). Neurokernel's model support architecture is motivated by the organization of the fly brain into fewer than 50 functional modules called local processing units (LPUs), each characterized by a unique population of local neurons. By defining communication interfaces that specify how spikes and neuron membrane states are transmitted between LPUs, Neurokernel enables researchers to collaboratively develop and refine whole-brain emulations through the integration of independently developed processing units. Neurokernel will also empower researchers to leverage additional GPU resources and future improvements in GPU technology to accelerate model execution to the time scale of a live fly brain; this will enable in vivo validation of Neurokernel-based models against real-time recordings of live fly brain activity.
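The inter-LPU interface idea described above can be illustrated with a minimal Python sketch. Note that the class and function names below (Interface, LPU, connect_and_step, and the port names) are illustrative assumptions, not Neurokernel's actual API: each LPU declares the ports it exposes, and at every emulation step the data emitted on one module's output ports is routed to the matching input ports of another.

```python
# Hypothetical sketch of inter-LPU communication interfaces.
# This is NOT Neurokernel's real API; names and structure are illustrative only.

class Interface:
    """Declares the ports an LPU exposes and the data type each carries."""
    def __init__(self, ports):
        # ports: dict mapping port name -> 'spike' or 'gpot' (graded potential)
        self.ports = dict(ports)

class LPU:
    """A local processing unit that emits and consumes port data each step."""
    def __init__(self, name, interface):
        self.name = name
        self.interface = interface
        self.inbox = {}  # data received on input ports this step

    def step(self):
        # A real LPU would advance its neuron models here (e.g., on a GPU).
        # This stub simply emits a placeholder value on every port.
        return {port: 1.0 for port in self.interface.ports}

    def receive(self, data):
        self.inbox.update(data)

def connect_and_step(src, dst, mapping):
    """Run one step of src and route its outputs to dst per the port mapping."""
    out = src.step()
    dst.receive({dst_port: out[src_port]
                 for src_port, dst_port in mapping.items()})
    return dst.inbox

# Two independently developed modules integrated through their interfaces,
# loosely mirroring the olfactory/vision LPU demonstration (port names invented):
olf = LPU('olfaction', Interface({'pn_out': 'spike'}))
vis = LPU('vision', Interface({'med_in': 'spike'}))
inbox = connect_and_step(olf, vis, {'pn_out': 'med_in'})
```

Because each module only needs to honor its declared interface, the two LPU models can be developed and refined independently and still be composed into a larger emulation.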
We will demonstrate Neurokernel's module interfacing feature by using it to integrate independently developed models of olfactory and vision LPUs based upon experimentally obtained connectivity information.