This lecture describes Bayesian memory and learning, and how to go from observations to latent variables.
This lesson introduces the concept of constraints on information processing, and how studying these constraints can reveal valuable knowledge about how the brain and other systems function.
This lecture discusses approaching neural systems from an evolutionary perspective.
This lecture describes non-spiking simple neuron models used in artificial neural networks and machine learning.
This lecture provides an introduction to neuron anatomy and signaling, and different types of models, including the Hodgkin-Huxley model.
This lesson provides an overview of plasticity on many levels, including short-term, long-term, metaplasticity, and structural plasticity. The lesson also provides examples related to the modelling of biochemical networks.
Note: The audio is a bit noisy for the first few minutes, but improves from about 5 minutes in.
This lesson gives an introduction to the modelling of chemical computation in the brain.
This lesson provides an introduction to the role of models in theoretical neuroscience.
This lesson introduces different types of models, model complexity, and how to choose an appropriate model.
This lesson gives an overview of balanced excitatory-inhibitory (E-I) networks, stability, and gain modulation.
In this lesson, you will learn about methods for dimensionality reduction of data, with a focus on factor analysis.
This lesson gives an in-depth look into various types of neuronal networks, as well as the properties, parameters, and phenomena that characterize them.
In this lesson, you will learn about spiking neuron networks and linear response models.
This lesson discusses Bayesian neuron models and parameter estimation.
This lesson gives an overview of Bayesian memory and learning, as well as how to go from observations to latent variables.
In this lesson, you will learn about how constraints can help us understand how the brain works.
This lesson discusses how to approach neural systems from an evolutionary perspective.
This talk introduces Bayes' theorem, which describes the probability of an event based on prior knowledge of conditions that might be related to the event.
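The theorem described above can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers (the diagnostic-test scenario and all rates are assumptions for illustration, not part of the talk):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# illustrated with a hypothetical diagnostic test (all numbers invented).

def posterior(prior, likelihood, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' theorem."""
    # Total probability of the evidence under both hypotheses (law of
    # total probability), used as the normalizing denominator P(E).
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Rare condition (1% prior), sensitive test (95%), 5% false-positive rate:
p = posterior(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(p, 3))  # → 0.161
```

Note how the posterior stays modest despite a positive test, because the low prior dominates: a classic consequence of the theorem.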
This lesson recaps the many ways in which mathematics is extremely useful in data science.