Difficulties in understanding machine learning techniques often stem from a lack of clarity about more basic statistical models and fundamental considerations, including the various regression models that can all be subsumed under the General Linear Model. These courses provide a refresher on the basics of the General Linear Model and the various fitting approaches that fall under its umbrella, collectively showing how 'traditional' inferential statistics form the basis for machine learning.
GLM, Regression Models, and Latent Variables
This lecture provides an overview of the generalized linear models (GLM) course, originally part of the Neuromatch Academy (NMA), an interactive online summer school held in 2020. NMA provided participants with experiences ranging from hands-on modeling to meta-science interpretation skills, across nearly everything that can reasonably be included under the label "computational neuroscience".
This is a tutorial covering Generalized Linear Models (GLMs), a fundamental framework for supervised learning. The objective is to model a retinal ganglion cell spike train by fitting a temporal receptive field: first with a Linear-Gaussian GLM (also known as the ordinary least-squares regression model) and then with a Poisson GLM (aka the "Linear-Nonlinear-Poisson" model). The data you will be using were published by Uzzell & Chichilnisky (2004).
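The two fitting approaches described above can be sketched as follows. This is a minimal illustration with synthetic data standing in for the recordings; the filter shape, dimensions, and learning rate are illustrative assumptions, not the tutorial's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the tutorial's data: a white-noise stimulus
# driving a cell through a known temporal filter (the real recordings
# come from Uzzell & Chichilnisky 2004; all values here are made up).
n_bins, d = 2000, 15                             # time bins, filter length
stim = rng.standard_normal(n_bins)
true_filter = 0.5 * np.exp(-np.arange(d) / 4.0)

# Design matrix: row t holds the stimulus over the last d time bins.
X = np.zeros((n_bins, d))
for i in range(d):
    X[i:, i] = stim[: n_bins - i]

spikes = rng.poisson(np.exp(X @ true_filter))    # LNP ground truth

# 1) Linear-Gaussian GLM: ordinary least squares, solved in closed form.
w_ols, *_ = np.linalg.lstsq(X, spikes, rcond=None)

# 2) Poisson GLM (Linear-Nonlinear-Poisson): gradient ascent on the
#    Poisson log-likelihood, whose gradient is X^T (y - exp(X w)).
w_poisson = np.zeros(d)
for _ in range(5000):
    w_poisson += 5e-5 * X.T @ (spikes - np.exp(X @ w_poisson))
```

The contrast between the two fits is the point of the tutorial: the Linear-Gaussian model has a one-line closed-form solution, while the Poisson model's exponential nonlinearity requires iterative maximization of the log-likelihood.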
This tutorial covers the implementation of logistic regression, a special case of GLMs used to model binary outcomes. Here, we decode a mouse's left/right decisions from spike train data.
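The decoding setup can be sketched as below, here with synthetic spike counts standing in for the real recordings and a plain gradient-ascent fit rather than the tutorial's own code; the trial counts, neuron count, and rate model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the tutorial's data: spike counts from 40
# neurons on 500 trials, where the (hypothetical) left/right choice
# shifts each neuron's firing rate. All numbers here are illustrative.
n_trials, n_neurons = 500, 40
choice = rng.integers(0, 2, n_trials)             # 0 = left, 1 = right
true_w = 0.5 * rng.standard_normal(n_neurons)
counts = rng.poisson(np.exp(0.5 + np.outer(choice, true_w)))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression: P(choice = 1 | x) = sigmoid(w . x), fit by
# gradient ascent on the Bernoulli log-likelihood, gradient X^T (y - p).
X = np.column_stack([np.ones(n_trials), counts])  # prepend an intercept
w = np.zeros(n_neurons + 1)
for _ in range(2000):
    p = sigmoid(X @ w)
    w += 0.01 / n_trials * (X.T @ (choice - p))

accuracy = np.mean((sigmoid(X @ w) > 0.5) == choice)
```

Because the log odds of the choice are linear in the weighted spike counts, logistic regression is exactly a GLM with a Bernoulli likelihood and a sigmoid (inverse-logit) link, which is the connection this tutorial draws to the rest of the course.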
This lecture further develops the concepts introduced in Machine Learning I. It is part of the Neuromatch Academy (NMA), an interactive online computational neuroscience summer school held in 2020.
This lesson is part 1 of 2 of a tutorial on statistical models for neural data.
This lesson is part 2 of 2 of a tutorial on statistical models for neural data.