How to gain the recommended background knowledge for success in computational neuroscience

Most people who enter the field of computational neuroscience have a prior background in mathematics, physics, computer science, or (neuro)biology. Since computational neuroscience draws on all of these fields, requiring some basic knowledge of neurons as well as familiarity with certain types of equations and mathematical concepts, we recommend two different "starting tracks" depending on the student's background. Choose one before beginning the lectures listed below:

**Intro to computational neuroscience for a computer sci/math background**

The student should learn the basic concepts and equations describing how neurons generate signals, either through a more thorough introduction via the Cellular Mechanisms of Brain Function course or through a quick refresher via the Basic mathematics for computational neuroscience tutorials.

**Intro to computational neuroscience for a biology background**

Here the student is assumed to already have basic knowledge of neurons. We recommend some orientation in mathematics (differential equations, linear algebra, dynamical systems) and computer science; a number of suitable courses are openly available online, for instance the MIT OpenCourseWare course on Differential Equations. After that, we recommend a quick orientation in how this mathematics applies to neuroscience by viewing the Basic mathematics for computational neuroscience tutorials.

Introduction to neuron anatomy and signaling, and to different types of models, including the Hodgkin-Huxley model. Speaker: Gaute Einevoll.

Introduction to simple abstract models of neurons. Speaker: Geoffrey Hinton.

Introduction to simple spiking neuron models. Author: Zubin Bhuyan, Tezpur University.
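
To make the simplest spiking model concrete, here is a minimal leaky integrate-and-fire simulation in Python. The parameter values are illustrative choices, not taken from the tutorial.

```python
import math

# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R * I
# All parameter values below are illustrative, not from the tutorial.
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)
i_ext = 2.0       # injected current (nA)
dt = 0.1          # time step (ms)

def simulate_lif(t_max=200.0):
    """Forward-Euler integration; returns the list of spike times."""
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_max:
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dt * dv
        if v >= v_thresh:
            spikes.append(t)   # record spike time
            v = v_reset        # reset after spike
        t += dt
    return spikes

print(len(simulate_lif()))  # number of spikes in 200 ms
```

Because the steady-state voltage with this input (about -45 mV) sits above threshold, the neuron fires regularly; lowering `i_ext` below rheobase silences it.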

The ionic basis of the action potential, including the Hodgkin-Huxley model. Speaker: Carl Petersen.
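
As a companion to this lecture, the Hodgkin-Huxley equations can be integrated in a few dozen lines. This sketch uses the standard squid-axon parameters from the original 1952 fits and simple forward Euler integration; the injected current is an illustrative choice.

```python
import math

# Hodgkin-Huxley model with the classic squid-axon parameters.
C = 1.0                              # membrane capacitance (uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3    # maximal conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4  # reversal potentials (mV)

# Voltage-dependent rate functions for the gating variables m, h, n
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration; returns the voltage trace (mV)."""
    v = -65.0
    # start the gating variables at their steady-state values
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = g_na * m**3 * h * (v - e_na)  # sodium current
        i_k = g_k * n**4 * (v - e_k)         # potassium current
        i_l = g_l * (v - e_l)                # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

print(round(max(simulate_hh()), 1))  # peak of the action potential (mV)
```

With 10 µA/cm² of injected current the model fires repetitively, and the voltage overshoots 0 mV on each spike, as in the lecture's treatment of the action potential.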

Forms of plasticity on many levels - short-term, long-term, metaplasticity, structural plasticity. With examples related to modelling of biochemical networks. Speaker: Upi Bhalla.

Introduction to modelling of chemical computation in the brain. Speaker: Upi Bhalla.

Conference presentation on computationally demanding studies of synaptic plasticity on the molecular level. Speaker: Kim "Avrama" Blackwell.

Introduction to stability analysis of neural models. Speaker: Bard Ermentrout.
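
The core idea of stability analysis can be previewed in a few lines: a fixed point x* of dx/dt = f(x) is stable when f'(x*) < 0. The cubic below is a generic illustration, not a model from the lecture.

```python
# Stability of fixed points in a toy one-dimensional system dx/dt = f(x).
def f(x):
    return x - x**3          # fixed points at x = -1, 0, +1

def f_prime(x, eps=1e-6):
    # central-difference estimate of the derivative f'(x)
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for x_star in (-1.0, 0.0, 1.0):
    label = "stable" if f_prime(x_star) < 0 else "unstable"
    print(x_star, label)
```

Here x = ±1 are stable (f' = -2 there) while x = 0 is unstable (f' = +1); in higher dimensions the same test uses the eigenvalues of the Jacobian, which is where the lecture picks up.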

Oscillations and bursting. Speaker: Bard Ermentrout.

Continued exploration of oscillations and bursting. Speaker: Bard Ermentrout.

Weakly coupled oscillators. Speaker: Bard Ermentrout.
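
A minimal sketch of the weak-coupling idea: two phase oscillators with identical frequencies and Kuramoto-style sinusoidal coupling lock their phases together. The frequencies and coupling strength are illustrative, not values from the lecture.

```python
import math

# Two weakly coupled phase oscillators:
#   dtheta_i/dt = omega + K * sin(theta_j - theta_i)
# Identical frequencies, so any K > 0 pulls the phases together.
omega = 1.0   # intrinsic frequency (rad/ms), illustrative
K = 0.1       # weak coupling strength, illustrative
dt = 0.01     # time step (ms)

th1, th2 = 0.0, 2.0   # start 2 radians apart
for _ in range(100_000):  # 1000 ms of simulated time
    d1 = omega + K * math.sin(th2 - th1)
    d2 = omega + K * math.sin(th1 - th2)
    th1 += dt * d1
    th2 += dt * d2

# The phase difference obeys d(phi)/dt = -2K sin(phi) and decays to zero
print(abs(th2 - th1) < 0.01)
```

The reduction of full conductance-based oscillators to such phase equations is exactly what the weak-coupling theory in this lecture justifies.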

Continuation of coupled oscillators. Speaker: Bard Ermentrout.

Firing rate models. Speaker: Bard Ermentrout.
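
A single-population firing rate model can be simulated in a few lines: the rate relaxes toward a sigmoidal function of its own recurrent input. The weight, input, and time constant below are illustrative choices.

```python
import math

# Single-population rate model:  tau * dr/dt = -r + f(w * r + I)
tau = 10.0   # time constant (ms), illustrative
w = 0.5      # recurrent weight, illustrative
I = 1.0      # external input, illustrative
dt = 0.1     # time step (ms)

def f(x):
    return 1.0 / (1.0 + math.exp(-x))   # sigmoidal gain function

r = 0.0
for _ in range(5000):   # 500 ms, ample time to reach steady state
    r += dt * (-r + f(w * r + I)) / tau

# At the fixed point the rate satisfies r* = f(w * r* + I)
print(abs(r - f(w * r + I)) < 1e-6)
```

With a weak recurrent weight the map is a contraction and the rate settles to a single stable fixed point; stronger recurrence can produce the bistability and oscillations discussed in the dynamics lectures above.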

Pattern generation in visual system hallucinations. Speaker: Bard Ermentrout.

Introduction to the role of models in theoretical neuroscience. Speaker: Jakob Macke.

Different types of models, model complexity, and how to choose an appropriate model. Speaker: Astrid Prinz.

Balanced E-I networks, stability and gain modulation. Speaker: Kenneth Miller.

Methods for dimensionality reduction of data, with focus on factor analysis. Speaker: Byron Yu.
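
The lecture centres on factor analysis; as a taste of dimensionality reduction in code, here is the simpler, closely related technique of PCA, finding the leading principal component of synthetic two-dimensional data by power iteration. The data and all parameters are invented for illustration.

```python
import math
import random

random.seed(0)

# Generate points stretched along the direction (1, 1) / sqrt(2)
data = []
for _ in range(1000):
    s = random.gauss(0.0, 3.0)   # large variance along (1, 1)
    n = random.gauss(0.0, 0.5)   # small variance along (-1, 1)
    data.append(((s - n) / math.sqrt(2), (s + n) / math.sqrt(2)))

# Sample covariance matrix (the means are ~0 by construction)
n_pts = len(data)
cxx = sum(x * x for x, _ in data) / n_pts
cyy = sum(y * y for _, y in data) / n_pts
cxy = sum(x * y for x, y in data) / n_pts

# Power iteration: repeatedly apply the covariance matrix and renormalize
vx, vy = 1.0, 0.0
for _ in range(100):
    ux, uy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
    norm = math.hypot(ux, uy)
    vx, vy = ux / norm, uy / norm

# Alignment with the true axis (1, 1) / sqrt(2); should be close to 1
alignment = abs(vx + vy) / math.sqrt(2)
print(alignment)
```

Factor analysis refines this picture by modelling per-dimension noise separately from the shared latent structure, which is why it is preferred for neural population recordings.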

Rate-based and spiking Bayesian population models. What mathematical techniques can we use to understand networks of neurons? Speaker: Julijana Gjorgjieva.

Spiking neuron networks and linear response models. Speaker: Tatjana Tchumatchenko.

Bayesian neuron models and parameter estimation. Speaker: Jakob Macke.
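
A toy version of Bayesian parameter estimation: infer a neuron's firing rate from Poisson spike counts by evaluating the posterior on a grid. The true rate, trial setup, and flat prior are illustrative choices, not the lecture's actual examples.

```python
import math
import random

random.seed(1)
true_rate = 5.0   # spikes per second (illustrative "ground truth")
T = 1.0           # duration of each trial (s)

def poisson_sample(lam):
    """Knuth's algorithm for drawing a Poisson sample."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = [poisson_sample(true_rate * T) for _ in range(200)]

# Grid of candidate rates with a flat prior, so posterior ∝ likelihood
rates = [0.1 * i for i in range(1, 201)]   # 0.1 .. 20.0 spikes/s

def log_lik(r):
    # Poisson log-likelihood, dropping the constant log(c!) term
    return sum(c * math.log(r * T) - r * T for c in counts)

best = max(rates, key=log_lik)   # MAP estimate (= ML under a flat prior)
print(abs(best - true_rate) < 1.0)
```

With an informative prior the same grid would simply be reweighted before taking the maximum; the lecture develops these ideas for full neuron models rather than a single rate parameter.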

Bayesian memory and learning: how to go from observations to latent variables. Speaker: Máté Lengyel.

Constraints can help us understand how the brain works. Speaker: Simon Laughlin.

Approaching neural systems from an evolutionary perspective. Speaker: Gilles Laurent.