Neural Data Analysis: The Bayesics
This lesson discusses Bayesian neuron models and parameter estimation.
Topics covered in this lesson
- Statistics in computational neuroscience: deriving knowledge from limited and/or noisy data.
- Statistical models can convey both knowledge and uncertainty, being explicit about what we do not know.
- The modelling-experiment-modelling loop.
- Bayesian statistics.
- Posterior and prior probabilities, likelihood.
- Generalized linear model (GLM) - relating stimuli to neural responses.
- Poisson process, the mother of all spike train models.
- Maximizing log-likelihood.
- Time binning and how it affects the maximum likelihood.
- Estimating posteriors.
- Post-spike filters for mimicking different kinds of spiking.
- GLMs model the dependence of neural spike rate on time, the stimulus, and spike history.
- Special case: linear-nonlinear-Poisson (LNP) neurons are GLMs without history terms.
- Coupling terms for modelling dependence on the spiking of other neurons.
- Latent variables.
- Outline of the expectation maximization algorithm.
- Bayesian inference - finding which parameters are consistent with the data and prior knowledge.
- Likelihood-free inference, a.k.a. Approximate Bayesian Computation (ABC) - "should be called intractable likelihood inference".
- Training deep neural networks to approximate the posterior distribution from data.
- Applications: which parameters are well constrained by data?
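The Poisson-process and log-likelihood bullets above can be made concrete with a small sketch. Assuming NumPy (the numbers here are illustrative, not from the lesson), this simulates a homogeneous Poisson spike train in small time bins and recovers the rate by maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process in small time bins: with bin width dt,
# P(spike in a bin) ~= rate * dt (valid when rate * dt << 1).
rate_true = 20.0   # spikes/s
dt = 0.001         # 1 ms bins
T = 100.0          # total duration in seconds
n_bins = int(T / dt)
spikes = rng.random(n_bins) < rate_true * dt

# Poisson log-likelihood of a constant rate r:
#   log L(r) = n_spikes * log(r * dt) - r * T   (up to r-independent terms)
def log_likelihood(r, spikes=spikes):
    n = spikes.sum()
    return n * np.log(r * dt) - r * T

# Setting d(log L)/dr = 0 gives the familiar estimator: spike count / time.
rate_mle = spikes.sum() / T
```

Note how the bin width `dt` enters the likelihood, which is the point of the "time binning" bullet: the absolute value of the log-likelihood depends on the binning, even though the maximizing rate does not.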
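The post-spike filter bullet can be sketched the same way. In this minimal example (assumed NumPy; the filter shape is hypothetical), the GLM's conditional intensity depends on recent spike history, and a negative filter mimics refractoriness:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                      # 1 ms bins
n_bins = 5000
baseline = np.log(30.0)         # log of a 30 Hz baseline rate
# Hypothetical post-spike filter: strong suppression right after each
# spike (mimicking refractoriness), decaying over a few milliseconds.
post_spike = -5.0 * np.exp(-np.arange(10) / 3.0)

spikes = np.zeros(n_bins)
for t in range(n_bins):
    # Filter output: contributions from spikes in the last 10 bins,
    # most recent spike first.
    recent = spikes[max(0, t - len(post_spike)):t][::-1]
    history = float(post_spike[:len(recent)] @ recent)
    rate = np.exp(baseline + history)        # conditional intensity
    spikes[t] = rng.random() < rate * dt

# Suppression shows up as a shortage of very short inter-spike intervals.
isis = np.diff(np.flatnonzero(spikes))
```

A positive filter at longer lags would instead produce bursting, which is what "mimicking different kinds of spiking" refers to; coupling terms from other neurons enter the exponent in exactly the same additive way.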
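The latent-variable and expectation-maximization bullets can be illustrated with a toy problem (assumed NumPy; the two-state setup is invented for illustration): spike counts are generated from two hidden states with different rates, and EM recovers both rates without ever observing the state labels:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

# Latent variable z picks one of two hidden states per trial; each state
# has its own firing rate, and we only observe the spike counts.
z = rng.random(500) < 0.5
counts = np.where(z, rng.poisson(2.0, 500), rng.poisson(12.0, 500))

def poisson_logpmf(k, lam):
    log_fact = np.array([lgamma(ki + 1.0) for ki in k])
    return k * np.log(lam) - lam - log_fact

lam = np.array([1.0, 6.0])        # initial rate guesses
weights = np.array([0.5, 0.5])    # mixing proportions
for _ in range(100):
    # E-step: posterior responsibility of each state for each trial.
    logp = np.stack([np.log(w) + poisson_logpmf(counts, l)
                     for w, l in zip(weights, lam)], axis=1)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and rates from responsibilities.
    weights = resp.mean(axis=0)
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)
```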
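Rejection sampling is the simplest instance of the ABC idea mentioned above. A sketch (assumed NumPy; toy numbers) draws parameters from the prior, simulates data under each, and keeps only the draws whose simulated summary statistic is close to the observed one, sidestepping the likelihood entirely:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data: spike counts on 20 trials from an unknown rate.
rate_true = 8.0
observed = rng.poisson(rate_true, 20)
obs_mean = observed.mean()

# Rejection ABC: draw rates from the prior, simulate data under each,
# and accept rates whose summary statistic is close to the observed one.
n_sim = 100_000
prior_draws = rng.uniform(0.0, 20.0, n_sim)       # flat prior on the rate
sim_counts = rng.poisson(prior_draws[:, None], (n_sim, 20))
keep = np.abs(sim_counts.mean(axis=1) - obs_mean) < 0.5
posterior_samples = prior_draws[keep]   # approximate posterior over the rate
```

The spread of `posterior_samples` directly answers the "which parameters are well constrained by data?" question; the deep-network approach in the bullet above replaces this wasteful accept/reject step with a learned approximation of the posterior.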
Prerequisites
- Calculus (integration and differentiation), basic linear algebra (matrices, determinants)
- Some basic transform theory, such as knowing what Fourier transforms do and what a convolution is