
This is Tutorial 1 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).
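
Tutorial 1 itself covers least-squares optimization; as a minimal illustration, here is a sketch in Python (the simulated data and variable names are ours, not the tutorial's):

    import numpy as np

    # Simulate noisy data around a true line y = theta * x (no intercept)
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)
    y = 1.2 * x + rng.normal(0, 1, size=50)

    # Least squares: choose theta to minimize sum((y - theta * x) ** 2).
    # For this one-parameter model the closed form is theta = (x . y) / (x . x).
    theta_hat = (x @ y) / (x @ x)
    mse = np.mean((y - theta_hat * x) ** 2)
    print(theta_hat, mse)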

Difficulty level: Beginner
Duration: 6:18
Speaker: Anqi Wu

This is Tutorial 2 of the series on fitting models to data (see Tutorial 1 above for the full series overview).

 

In this tutorial, we will use a different approach to fitting linear models, maximum likelihood estimation, which explicitly incorporates the random 'noise' in our data.
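
A minimal sketch of the idea, assuming Gaussian noise so that the likelihood is a normal density (simulated data and names are ours):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=50)
    y = 1.2 * x + rng.normal(0, 1, size=50)

    # Log-likelihood of theta under the model y ~ Normal(theta * x, sigma^2)
    def log_likelihood(theta, sigma=1.0):
        return np.sum(stats.norm.logpdf(y, loc=theta * x, scale=sigma))

    # Evaluate over a grid of candidate slopes and pick the maximum
    thetas = np.linspace(0.5, 2.0, 301)
    lls = np.array([log_likelihood(t) for t in thetas])
    theta_mle = thetas[np.argmax(lls)]
    print(theta_mle)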

Difficulty level: Beginner
Duration: 8:00
Speaker: Anqi Wu

This is Tutorial 3 of the series on fitting models to data.

In this tutorial, we will discuss how to gauge how good our estimated model parameters are.
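
A minimal bootstrap sketch (our own simulated data; resampling (x, y) pairs with replacement is one common scheme):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, size=50)
    y = 1.2 * x + rng.normal(0, 1, size=50)

    # Resample the data with replacement and refit the slope each time
    boot_thetas = []
    for _ in range(2000):
        idx = rng.integers(0, len(x), size=len(x))
        xb, yb = x[idx], y[idx]
        boot_thetas.append((xb @ yb) / (xb @ xb))

    # 95% confidence interval from the percentiles of the bootstrap distribution
    lo, hi = np.percentile(boot_thetas, [2.5, 97.5])
    print(lo, hi)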

Difficulty level: Beginner
Duration: 5:00
Speaker: Anqi Wu

This is Tutorial 4 of the series on fitting models to data.

In this tutorial, we will generalize the regression model to incorporate multiple features.
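
A minimal multiple-regression sketch using a design matrix and np.linalg.lstsq (simulated data, our own names):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    X = rng.normal(size=(n, 2))          # two features
    X = np.hstack([np.ones((n, 1)), X])  # prepend a column of ones for the intercept
    true_theta = np.array([2.0, -1.0, 0.5])
    y = X @ true_theta + rng.normal(0, 1, size=n)

    # Ordinary least squares; lstsq is numerically safer than the normal equations
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(theta_hat)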

Difficulty level: Beginner
Duration: 7:50
Speaker: Anqi Wu

This is Tutorial 5 of the series on fitting models to data.

In this tutorial, we will learn about the bias-variance trade-off and see it in action using polynomial regression models.
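
A minimal sketch of the trade-off using polynomial fits of increasing degree (simulated data; the degrees are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.uniform(-2, 2, size=30)
    y = x ** 2 + rng.normal(0, 0.5, size=30)          # quadratic ground truth
    x_test = rng.uniform(-2, 2, size=200)
    y_test = x_test ** 2 + rng.normal(0, 0.5, size=200)

    for degree in [1, 2, 10]:
        coeffs = np.polyfit(x, y, degree)             # fit polynomial of given degree
        train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        # Low degree underfits (high bias); high degree overfits (high variance)
        print(degree, train_mse, test_mse)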

Difficulty level: Beginner
Duration: 6:38
Speaker: Anqi Wu

This is Tutorial 6 of the series on fitting models to data. In this tutorial, we will use cross-validation to select between the models developed in the earlier tutorials.
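
A minimal cross-validation sketch using scikit-learn (our own simulated data; the candidate degrees and pipeline are illustrative):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    x = rng.uniform(-2, 2, size=60).reshape(-1, 1)
    y = x.ravel() ** 2 + rng.normal(0, 0.5, size=60)

    # Score each candidate degree with 5-fold cross-validation
    for degree in [1, 2, 5, 10]:
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        scores = cross_val_score(model, x, y, cv=5, scoring="neg_mean_squared_error")
        print(degree, -scores.mean())   # pick the degree with the lowest CV error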

Difficulty level: Beginner
Duration: 5:28
Speaker: Anqi Wu

This tutorial covers Generalized Linear Models (GLMs), which are a fundamental framework for supervised learning. The objective is to model a retinal ganglion cell spike train by fitting a temporal receptive field: first with a Linear-Gaussian GLM (also known as the ordinary least-squares regression model) and then with a Poisson GLM (aka the "Linear-Nonlinear-Poisson" model). The tutorial also covers a special case of GLMs, logistic regression, and shows how to ensure good model performance. It is designed to run with retinal ganglion cell spike train data from Uzzell & Chichilnisky 2004.
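
As a rough illustration of the Poisson GLM (not the tutorial's actual receptive-field code; fake data stands in for the retinal recordings):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    n, d = 500, 5
    X = rng.normal(size=(n, d))                  # stimulus design matrix
    w_true = rng.normal(size=d)
    y = rng.poisson(np.exp(X @ w_true))          # spike counts

    # Negative log-likelihood of the Poisson GLM with exponential nonlinearity:
    # lambda = exp(X @ w),  log p(y | w) = sum(y * log(lambda) - lambda) + const
    def neg_log_lik(w):
        rate = X @ w
        return -(y @ rate - np.exp(rate).sum())

    w_hat = minimize(neg_log_lik, np.zeros(d)).x
    print(np.round(w_hat - w_true, 2))           # should be near zero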

Difficulty level: Beginner
Duration: 8:09
Speaker: Anqi Wu

This tutorial covers the implementation of logistic regression, a special case of GLMs used to model binary outcomes. Oftentimes the variable you would like to predict takes only one of two possible values. Left or right? Awake or asleep? Car or bus? In this tutorial, we will decode a mouse's left/right decisions from spike train data.

 

Objectives of this tutorial:

  1. Learn about logistic regression, how it is derived within GLM theory, and how it is implemented in scikit-learn (see the sketch after this list)
  2. Apply logistic regression to decode choices from neural responses
  3. Learn about regularization, including the different approaches and the influence of hyperparameters
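
A minimal sketch of objectives 1-3, with random data standing in for the spike trains:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n, d = 200, 40
    X = rng.normal(size=(n, d))                  # stand-in for binned spike counts
    w = rng.normal(size=d)
    y = (X @ w + rng.normal(0, 1, size=n)) > 0   # binary left/right "choices"

    # L2-regularized logistic regression; C is the inverse regularization strength
    for C in [0.01, 0.1, 1.0, 10.0]:
        clf = LogisticRegression(penalty="l2", C=C, max_iter=1000)
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(C, acc)                            # decoding accuracy vs. regularization
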
Difficulty level: Beginner
Duration: 6:42
Speaker: Anqi Wu

This tutorial covers how multivariate data can be represented in different orthonormal bases.

 

Overview of this tutorial:

• Generate correlated multivariate data
• Define an arbitrary orthonormal basis
• Project the data onto the new basis (sketched below)
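
A minimal sketch of these three steps (the basis here is an arbitrary 2-D rotation; any orthonormal matrix would do):

    import numpy as np

    # Generate correlated bivariate Gaussian data
    rng = np.random.default_rng(8)
    cov = np.array([[1.0, 0.8],
                    [0.8, 1.0]])
    X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=1000)

    # An arbitrary orthonormal basis: the columns of a rotation matrix
    theta = np.pi / 6
    W = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    assert np.allclose(W.T @ W, np.eye(2))   # orthonormality check

    # Project the data onto the new basis
    Y = X @ W
    print(np.cov(Y.T))                       # covariance in the new coordinates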

     

Difficulty level: Beginner
Duration: 4:48
Speaker: Alex Cayco Gajic

This tutorial covers how to perform principal component analysis (PCA) by projecting the data onto the eigenvectors of its covariance matrix.

Overview of this tutorial:

• Calculate the eigenvectors of the sample covariance matrix.
• Perform PCA by projecting data onto the eigenvectors of the covariance matrix (see the sketch below).
• Plot and explore the eigenvalues.
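
A minimal sketch of PCA via the eigendecomposition (simulated 2-D data, our own names):

    import numpy as np

    rng = np.random.default_rng(9)
    X = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=1000)

    # Eigendecomposition of the sample covariance matrix
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc.T)
    evals, evecs = np.linalg.eigh(C)   # eigh: for symmetric matrices, ascending order
    order = np.argsort(evals)[::-1]    # sort descending by variance
    evals, evecs = evals[order], evecs[:, order]

    # Project the centered data onto the eigenvectors (the principal components)
    scores = Xc @ evecs
    # The variance of each projected column matches the corresponding eigenvalue
    print(evals, scores.var(axis=0, ddof=1))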

To quickly refresh your knowledge of eigenvalues and eigenvectors, you can watch this short video (4 minutes) for a geometrical explanation. For a deeper understanding, this in-depth video (17 minutes) provides an excellent basis and is beautifully illustrated.

Difficulty level: Beginner
Duration: 6:33
Speaker: Alex Cayco Gajic

This tutorial covers how to apply principal component analysis (PCA) for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.

Overview of this tutorial:

• Perform PCA on MNIST
• Calculate the variance explained
• Reconstruct data with different numbers of PCs (see the sketch below)
• (Bonus) Examine denoising using PCA
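
A rough sketch of the workflow; to stay self-contained it uses scikit-learn's small 8x8 digits dataset as a stand-in for MNIST:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    # Small 8x8 digits as a stand-in for MNIST (no download needed)
    X = load_digits().data

    pca = PCA()                               # keep all components
    scores = pca.fit_transform(X)

    # Cumulative variance explained as a function of the number of PCs
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    k = int(np.searchsorted(cumvar, 0.90) + 1)
    print(k, "components explain 90% of the variance")

    # Reconstruct from the top k components only
    pca_k = PCA(n_components=k).fit(X)
    X_rec = pca_k.inverse_transform(pca_k.transform(X))
    print(np.mean((X - X_rec) ** 2))          # reconstruction error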

You can learn more about the MNIST dataset here.

Difficulty level: Beginner
Duration: 5:35
Speaker: Alex Cayco Gajic

This tutorial covers how dimensionality reduction can be useful for visualizing and inferring structure in your data. To do this, we will compare principal component analysis (PCA) with t-SNE, a nonlinear dimensionality reduction method.

Overview of the tutorial:

• Visualize MNIST in 2D using PCA
• Visualize MNIST in 2D using t-SNE (see the sketch below)
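
A minimal sketch of both embeddings (again with scikit-learn's small digits dataset standing in for MNIST):

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    digits = load_digits()                    # stand-in for MNIST
    X, labels = digits.data, digits.target

    # Linear embedding: the first two principal components
    X_pca = PCA(n_components=2).fit_transform(X)

    # Nonlinear embedding: t-SNE (perplexity is the key hyperparameter)
    X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

    print(X_pca.shape, X_tsne.shape)          # both (n_samples, 2), ready to scatter-plot
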
Difficulty level: Beginner
Duration: 4:17
Speaker: Alex Cayco Gajic

Neuromatch Academy aims to introduce traditional and emerging tools of computational neuroscience to trainees. It is appropriate for students ranging from undergraduates to faculty in academic settings, and also for industry professionals. In addition to teaching the technical details of computational methods, Neuromatch Academy also provides a curriculum centered on modern neuroscience concepts taught by leading professors, along with explicit instruction on how and why to apply models.

     

This lecture introduces the concept of Bayesian statistics and explains why Bayesian statistics are relevant to studying the brain.

Difficulty level: Beginner
Duration: 31:38
Speaker: Paul Schrater

This tutorial provides an introduction to Bayesian statistics and covers developing a Bayesian model for localizing sounds based on audio and visual cues. This model will combine prior information about where sounds generally originate with sensory information about the likelihood that a specific sound came from a particular location. The resulting posterior distribution not only allows us to make optimal decisions about the sound's origin, but also lets us quantify how uncertain that decision is. Bayesian techniques are therefore useful normative models: the behavior of human or animal subjects can be compared against these models to determine how efficiently they make use of information.

Overview of this tutorial:

1. Implement a Gaussian distribution
2. Use Bayes' Theorem to find the posterior from a Gaussian-distributed prior and likelihood (see the sketch after this list)
3. Change the likelihood mean and variance and observe how the posterior changes
4. Advanced (optional): Observe what happens if the prior is a mixture of two Gaussians
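
A minimal grid-based sketch of steps 1-2 (the prior and likelihood parameters are arbitrary illustrative choices):

    import numpy as np
    from scipy import stats

    # Grid over possible sound locations
    x = np.linspace(-10, 10, 1001)

    prior = stats.norm.pdf(x, loc=0, scale=2)        # sounds usually come from ahead
    likelihood = stats.norm.pdf(x, loc=3, scale=1)   # noisy sensory measurement

    # Bayes' rule: posterior is proportional to prior * likelihood; normalize on the grid
    posterior = prior * likelihood
    posterior /= np.trapz(posterior, x)

    print(x[np.argmax(posterior)])                   # posterior mode sits between 0 and 3
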
Difficulty level: Beginner
Duration: 5:13

In this tutorial, we will use the concepts introduced in Tutorial 1 as building blocks to explore more complicated sensory integration and ventriloquism!

     

Overview of this tutorial:

1. Learn more about the problem setting, which we will also use in Tutorial 3
2. Implement a mixture-of-Gaussians prior
3. Explore how that prior produces more complex posteriors (see the sketch after this list)
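
A minimal sketch of a mixture-of-Gaussians prior and the posterior it produces (all parameters are illustrative choices, not the tutorial's):

    import numpy as np
    from scipy import stats

    x = np.linspace(-10, 10, 1001)

    # Mixture-of-Gaussians prior: narrow common-cause component + broad independent one
    p_common = 0.75
    prior = (p_common * stats.norm.pdf(x, 0, 1)
             + (1 - p_common) * stats.norm.pdf(x, 0, 5))

    likelihood = stats.norm.pdf(x, loc=4, scale=1)

    posterior = prior * likelihood
    posterior /= np.trapz(posterior, x)
    # The posterior is no longer Gaussian; depending on the parameters
    # it can even become bimodal
    print(x[np.argmax(posterior)])   # mode pulled toward the sensory evidence
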
Difficulty level: Beginner
Duration: 4:22

This tutorial covers all the steps needed to perform model inversion, i.e., to estimate the model parameters, such as p_common, that generated data similar to that of a participant. We will first describe all the steps of the generative model, and in the last exercise we will use those steps to estimate the parameter p_common of a single participant using simulated data.

The generative model will be the same Bayesian model we have been using throughout Tutorial 2: a mixture-of-Gaussians prior (common + independent priors) and a Gaussian likelihood.
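
A rough sketch of the inversion idea, with a simulated participant and a simple grid search (the response model here, posterior mean plus noise, is our simplifying assumption):

    import numpy as np
    from scipy import stats

    x = np.linspace(-10, 10, 1001)

    def posterior_mean(p_common, stim):
        """Posterior mean under a mixture-of-Gaussians prior and Gaussian likelihood."""
        prior = (p_common * stats.norm.pdf(x, 0, 1)
                 + (1 - p_common) * stats.norm.pdf(x, 0, 5))
        post = prior * stats.norm.pdf(x, stim, 1)
        post /= np.trapz(post, x)
        return np.trapz(x * post, x)

    # Simulate responses from a participant with a known p_common = 0.7
    rng = np.random.default_rng(10)
    stims = rng.uniform(-5, 5, size=100)
    responses = (np.array([posterior_mean(0.7, s) for s in stims])
                 + rng.normal(0, 0.1, size=100))

    # Invert the model: grid search for the p_common that best explains the responses
    grid = np.linspace(0.05, 0.95, 19)
    errors = [np.mean((np.array([posterior_mean(p, s) for s in stims]) - responses) ** 2)
              for p in grid]
    print(grid[np.argmin(errors)])   # should land near 0.7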

Difficulty level: Beginner
Duration: 2:40

This tutorial focuses on Bayesian Decision Theory, which combines the posterior with cost functions that allow us to quantify the potential impact of making a decision or choosing an action based on that posterior. Cost functions are therefore critical for turning probabilities into actions!

     

Overview of this tutorial:

1. Implement three commonly used cost functions: mean-squared error, absolute error, and zero-one loss (see the sketch after this list)
2. Discover the concept of expected loss
3. Choose optimal locations on the posterior that minimize these cost functions
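
A minimal sketch of expected loss under the three cost functions (the asymmetric posterior is an illustrative choice that makes the three optima differ):

    import numpy as np
    from scipy import stats

    x = np.linspace(-10, 10, 2001)
    # An asymmetric (mixture) posterior, so mean, median, and mode all differ
    posterior = 0.7 * stats.norm.pdf(x, 0, 1) + 0.3 * stats.norm.pdf(x, 4, 1)
    posterior /= np.trapz(posterior, x)

    # Expected loss for each candidate action a, under two continuous cost functions
    mse_loss = [np.trapz((x - a) ** 2 * posterior, x) for a in x]
    abs_loss = [np.trapz(np.abs(x - a) * posterior, x) for a in x]

    print("minimizes MSE: ", x[np.argmin(mse_loss)])   # the posterior mean
    print("minimizes abs: ", x[np.argmin(abs_loss)])   # the posterior median
    print("minimizes 0-1: ", x[np.argmax(posterior)])  # the posterior mode
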
Difficulty level: Beginner
Duration: 5:10


     

This lecture focuses on advanced uses of Bayesian statistics for understanding the brain.

Difficulty level: Beginner
Duration: 26:01
Speaker: Xaq Pitkow


     

This lecture provides an introduction to linear systems.

Difficulty level: Beginner
Duration: 30:55
Speaker: Eric Shea-Brown

This tutorial covers the behavior of dynamical systems: systems that evolve in time, where the rules by which they evolve are described precisely by a differential equation.

Differential equations are equations that express the rate of change of the state variable x. One typically describes this rate of change using the derivative of x with respect to time (dx/dt) on the left-hand side of the differential equation: dx/dt = f(x). A common notational shorthand is to write ẋ for dx/dt. The dot means "the derivative with respect to time".

     

Overview of this tutorial:

• Explore and understand the behavior of such systems where x is a single variable (see the sketch after this list)
• Consider cases where x is a state vector representing two variables
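
A minimal sketch of simulating such a system with forward-Euler integration (f here is an arbitrary one-variable example):

    import numpy as np

    # Forward-Euler integration of x_dot = f(x), with the example f(x) = -x + 2
    def f(x):
        return -x + 2.0

    dt, T = 0.01, 10.0
    ts = np.arange(0, T, dt)
    xs = np.zeros_like(ts)
    xs[0] = 0.0                                  # initial condition
    for k in range(1, len(ts)):
        xs[k] = xs[k - 1] + dt * f(xs[k - 1])    # x(t + dt) is approx. x(t) + dt * x_dot

    print(xs[-1])   # the trajectory relaxes toward the fixed point x* = 2
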
Difficulty level: Beginner
Duration: 9:28
Speaker: Bing Wen Brunton