Building a 1 mm^3 cerebellar module on a computer

Presentation of simulation software for spatial model neurons and their networks, designed primarily for GPUs.

Topics covered in this lesson

Talk abstract: The cerebellum is thought to form internal models, which simulate the dynamics of physical and/or mental objects and assist the cerebral cortex in efficient information processing. The cerebellum has a very regular anatomical structure and contains numerous neurons of only a limited number of cell types. Neurons in the cerebellum elicit spikes spontaneously at much higher frequencies than those in the cerebral cortex. These properties echo the relationship between a central processing unit (CPU) and a graphics processing unit (GPU): if the cerebral cortex were a CPU, the cerebellum would be a GPU.

I have been developing a spiking network model of the cerebellum based on the known anatomy and physiology. The model is implemented on GPUs for real-time simulation, where real-time means that simulating 1 s of network dynamics completes within 1 s of wall-clock time. The latest version contains 1 million granule cells and runs on 2 NVIDIA GeForce TITAN Z boards, equivalent to 4 TITAN boards, whereas the previous version implemented only 0.1 million cells on a single board. Thus, the latest version contains 10x more neurons, and the simulation still runs in real time owing to the multiple GPUs. I will explain how we use multiple GPUs for efficient numerical calculation, and I will discuss potential applications in which real-time computing is essential.
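The real-time criterion stated above can be captured in a few lines. The following Python sketch (not the simulator's actual code; the step function here is a hypothetical stand-in for one network update) checks whether a simulation of a given duration finishes within the same amount of wall-clock time:

```python
import time

def is_realtime(step_fn, dt=1e-3, duration=1.0):
    """Advance `step_fn` through `duration` simulated seconds in steps
    of `dt` and report whether wall-clock time stayed within the
    simulated time -- the real-time criterion from the abstract."""
    n_steps = int(round(duration / dt))
    t0 = time.perf_counter()
    for _ in range(n_steps):
        step_fn(dt)
    elapsed = time.perf_counter() - t0
    return elapsed <= duration, elapsed

# Trivial stand-in step function; a real network update would go here.
ok, elapsed = is_realtime(lambda dt: None)
print(ok)
```

For a fixed time step, the criterion reduces to the per-step compute time never exceeding dt on average, which is what the multi-GPU implementation has to guarantee as the neuron count grows.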

Neurons in our model are implemented as point models. In contrast, neurons in the biological cerebellum have a variety of spatial structures. To elucidate how detailed spatial structures affect network activity, we need to build a model composed of spatial model neurons. I will introduce our new project on simulation software for spatial model neurons and their networks, designed primarily for GPUs.
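To make the point-model/spatial-model distinction concrete: a point model collapses each neuron to a single compartment with one membrane-potential state variable, so a whole population updates as one vector operation. A minimal leaky integrate-and-fire sketch in NumPy (a generic illustration with placeholder parameter values, not the talk's actual cerebellar model):

```python
import numpy as np

def lif_step(v, i_syn, dt=1e-3, tau=0.02, v_rest=-0.07,
             v_thresh=-0.05, v_reset=-0.07, r_m=1e8):
    """One Euler step for a population of leaky integrate-and-fire
    point neurons: each neuron is just one voltage variable, so the
    whole population updates in a single vectorized operation."""
    v = v + (-(v - v_rest) + r_m * i_syn) / tau * dt
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)  # reset neurons that fired
    return v, spiked

v = np.full(1000, -0.07)       # 1000 point neurons, all at rest
i_syn = np.full(1000, 3e-10)   # constant suprathreshold drive (A)
total_spikes = 0
for _ in range(100):           # simulate 100 ms
    v, spiked = lif_step(v, i_syn)
    total_spikes += int(spiked.sum())
```

A spatial model neuron would instead carry many coupled compartments per cell (dendrites, soma, axon), turning each neuron's update into a small cable-equation solve rather than a single scalar update, which is why dedicated GPU software is needed.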