
This tutorial covers how to apply principal component analysis (PCA) for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.

Overview of this tutorial:

  • Perform PCA on MNIST
  • Calculate the variance explained
  • Reconstruct data with different numbers of PCs
  • (Bonus) Examine denoising using PCA

You can learn more about the MNIST dataset here.
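To make the workflow concrete, here is a minimal sketch using scikit-learn and NumPy; the tutorial's own notebook may implement PCA by hand, and the subsample size and component counts here are illustrative:

```python
# A minimal sketch, assuming scikit-learn; not the tutorial's exact notebook.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

# Load MNIST (70,000 images of 28x28 pixels, flattened to 784 features)
X, _ = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X[:2000] / 255.0  # subsample to keep the demo fast (illustrative choice)

pca = PCA()  # keep all components
scores = pca.fit_transform(X)

# Variance explained by each PC, and the cumulative sum
cumvar = np.cumsum(pca.explained_variance_ratio_)
k90 = np.searchsorted(cumvar, 0.90) + 1
print(f"{k90} PCs capture 90% of the variance")

# Reconstruct the data using only the top k PCs
k = 30  # illustrative number of components
X_hat = scores[:, :k] @ pca.components_[:k] + pca.mean_
```

Denoising works the same way: project noisy images onto the top PCs and reconstruct, which discards the low-variance directions where noise dominates.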

Difficulty level: Beginner
Duration: 5:35
Speaker: Alex Cayco Gajic

This tutorial covers how dimensionality reduction can be useful for visualizing and inferring structure in your data. To do this, we will compare principal component analysis (PCA) with t-SNE, a nonlinear dimensionality reduction method.

Overview of this tutorial:

  • Visualize MNIST in 2D using PCA
  • Visualize MNIST in 2D using t-SNE (a sketch of both embeddings follows this list)
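A minimal sketch of both embeddings using scikit-learn; the subsample size and t-SNE perplexity are illustrative choices, not the tutorial's exact settings:

```python
# A minimal sketch comparing a linear (PCA) and nonlinear (t-SNE) embedding.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X, y = X[:1000] / 255.0, y[:1000].astype(int)  # t-SNE is slow on full MNIST

Z_pca = PCA(n_components=2).fit_transform(X)                   # linear projection
Z_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)  # nonlinear embedding

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, Z, title in zip(axes, [Z_pca, Z_tsne], ["PCA", "t-SNE"]):
    ax.scatter(Z[:, 0], Z[:, 1], c=y, cmap="tab10", s=5)
    ax.set_title(title)
plt.show()
```

Typically the t-SNE plot separates the ten digit classes into distinct clusters, while the PCA projection overlaps them, which is the contrast the tutorial explores.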
Difficulty level: Beginner
Duration: 4:17
Speaker: Alex Cayco Gajic

Neuromatch Academy aims to introduce traditional and emerging tools of computational neuroscience to trainees. It is appropriate for a student population ranging from undergraduates to faculty in academic settings, and also includes industry professionals. In addition to teaching the technical details of computational methods, Neuromatch Academy also provides a curriculum centered on modern neuroscience concepts taught by leading professors, along with explicit instruction on how and why to apply models.

 

This lecture introduces the concept of Bayesian statistics and explains why Bayesian statistics are relevant to studying the brain.

Difficulty level: Beginner
Duration: 31:38
Speaker: Paul Schrater

This tutorial provides an introduction to Bayesian statistics and covers developing a Bayesian model for localizing sounds based on audio and visual cues. This model will combine prior information about where sounds generally originate with sensory information about the likelihood that a specific sound came from a particular location. The resulting posterior distribution not only allows us to make an optimal decision about the sound's origin, but also lets us quantify how uncertain that decision is. Bayesian techniques are therefore useful normative models: the behavior of human or animal subjects can be compared against these models to determine how efficiently they make use of information.

Overview of this tutorial:

  1. Implement a Gaussian distribution
  2. Use Bayes' Theorem to find the posterior from a Gaussian-distributed prior and likelihood
  3. Change the likelihood mean and variance and observe how the posterior changes (a numeric sketch of steps 1-3 follows this list)
  4. Advanced (optional): Observe what happens if the prior is a mixture of two Gaussians
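A grid-based NumPy sketch of steps 1-3; all means and standard deviations are illustrative values, not the tutorial's:

```python
# A minimal sketch: multiply a Gaussian prior by a Gaussian likelihood on a
# grid and normalize (Bayes' theorem). Parameter values are illustrative.
import numpy as np

def gaussian(x, mu, sigma):
    """Gaussian probability density evaluated on the grid x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-8, 8, 1001)
prior = gaussian(x, mu=-1.0, sigma=1.5)      # where sounds generally originate
likelihood = gaussian(x, mu=2.0, sigma=1.0)  # sensory evidence for this sound

posterior = prior * likelihood               # Bayes' theorem, up to a constant
posterior /= np.trapz(posterior, x)          # normalize to integrate to 1

# For two Gaussians, the posterior mean is a reliability-weighted average of
# the prior and likelihood means:
w = (1 / 1.0**2) / (1 / 1.0**2 + 1 / 1.5**2)
print(np.trapz(x * posterior, x), "vs closed form:", w * 2.0 + (1 - w) * -1.0)
```

Step 3 amounts to re-running the last few lines with different likelihood parameters: the posterior always shifts toward whichever distribution is narrower (more reliable).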
Difficulty level: Beginner
Duration: 5:13

In this tutorial, we will use the concepts introduced in Tutorial 1 as building blocks to explore more complicated sensory integration and ventriloquism!

 

Overview of this tutorial:

  1. Learn more about the problem setting, which we will also use in Tutorial 3
  2. Implement a mixture-of-Gaussians prior
  3. Explore how that prior produces more complex posteriors (see the sketch after this list)
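A grid-based sketch of the mixture prior; the mixture weight and the widths of the two components are illustrative assumptions:

```python
# A minimal sketch of a mixture-of-Gaussians prior and the (possibly bimodal)
# posterior it produces. All parameter values are illustrative.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-8, 8, 1001)
p_common = 0.7  # illustrative weight on the "common cause" component
prior = p_common * gaussian(x, 0.0, 0.5) + (1 - p_common) * gaussian(x, 0.0, 3.0)
likelihood = gaussian(x, 2.0, 1.0)

posterior = prior * likelihood
posterior /= np.trapz(posterior, x)
# Unlike the single-Gaussian case, this posterior can be bimodal or skewed.
```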
Difficulty level: Beginner
Duration: 4:22

This tutorial covers all the steps needed to perform model inversion (estimating the model parameters, such as p_common, that generated data resembling a participant's). We will first describe all the steps of the generative model, and in the last exercise we will use those steps to estimate the parameter p_common of a single participant using simulated data.

The generative model will be the same Bayesian model we have been using throughout Tutorial 2: a mixture-of-Gaussians prior (common + independent priors) and a Gaussian likelihood.
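A sketch of what that last exercise might look like: simulate a participant under a generative model of this form, then recover p_common by a grid search over the data log likelihood. Every numeric value here is an illustrative assumption, not the tutorial's:

```python
# A minimal model-inversion sketch: mixture prior + Gaussian observation noise.
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Simulate a participant with a true p_common (all values illustrative)
p_true, sig_common, sig_indep, sig_obs = 0.7, 0.5, 3.0, 1.0
n = 500
from_common = rng.random(n) < p_true
s = np.where(from_common, rng.normal(0, sig_common, n), rng.normal(0, sig_indep, n))
x = s + rng.normal(0, sig_obs, n)  # noisy observations

# Marginal likelihood of x given p_common (Gaussians convolve analytically,
# so each mixture component's width grows by the observation noise)
def loglik(p):
    m = p * norm_pdf(x, 0, np.hypot(sig_common, sig_obs)) \
        + (1 - p) * norm_pdf(x, 0, np.hypot(sig_indep, sig_obs))
    return np.log(m).sum()

grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([loglik(p) for p in grid])]
print("estimated p_common:", p_hat)
```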

Difficulty level: Beginner
Duration: 2:40

This tutorial focuses on Bayesian Decision Theory, which combines the posterior with cost functions that allow us to quantify the potential impact of making a decision or choosing an action based on that posterior. Cost functions are therefore critical for turning probabilities into actions!

 

Overview of this tutorial:

  1. Implement three commonly-used cost functions: mean-squared error, absolute error, and zero-one loss
  2. Discover the concept of expected loss
  3. Choose optimal locations on the posterior that minimize these cost functions (a sketch follows this list)
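A minimal sketch of the three cost functions and their expected losses over a grid-based posterior; the posterior itself and the zero-one tolerance are illustrative assumptions:

```python
# A minimal sketch: expected loss of each candidate answer under a posterior.
import numpy as np

x = np.linspace(-4, 4, 801)
posterior = 0.6 * np.exp(-0.5 * ((x - 1) / 0.5) ** 2) \
          + 0.4 * np.exp(-0.5 * ((x + 1) / 0.8) ** 2)
posterior /= np.trapz(posterior, x)  # illustrative bimodal posterior

def expected_loss(cost):
    # cost(x_hat, x): loss incurred if we answer x_hat and the truth is x
    return np.array([np.trapz(cost(xh, x) * posterior, x) for xh in x])

mse  = expected_loss(lambda xh, x: (xh - x) ** 2)   # minimized by the mean
abse = expected_loss(lambda xh, x: np.abs(xh - x))  # minimized by the median
zero_one = expected_loss(lambda xh, x: (np.abs(xh - x) > 0.01).astype(float))  # ~ the mode

for name, L in [("MSE", mse), ("absolute", abse), ("zero-one", zero_one)]:
    print(name, "-> optimal x_hat:", x[np.argmin(L)])
```

The three minima illustrate the key point: different cost functions turn the same posterior into different optimal actions (mean, median, or mode).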
Difficulty level: Beginner
Duration: 5:10


This lecture focuses on advanced uses of Bayesian statistics for understanding the brain.

Difficulty level: Beginner
Duration: 26:01
Speaker: Xaq Pitkow


This lecture provides an introduction to linear systems.

Difficulty level: Beginner
Duration: 30:55
Speaker: Eric Shea-Brown

This tutorial covers the behavior of dynamical systems: systems that evolve in time, where the rules by which they evolve are described precisely by a differential equation.

Differential equations are equations that express the rate of change of the state variable x. One typically describes this rate of change using the derivative of x with respect to time (dx/dt) on the left-hand side of the differential equation: dx/dt = f(x). A common notational shorthand is to write ẋ for dx/dt. The dot means "the derivative with respect to time".
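As a concrete example, here is a forward-Euler sketch for the one-variable case; the choice f(x) = -x (exponential decay) is an illustrative assumption:

```python
# A minimal sketch of simulating dx/dt = f(x) with forward-Euler steps.
import numpy as np

def f(x):
    return -x  # any function of the state works here (illustrative choice)

dt, T = 0.01, 5.0
t = np.arange(0, T, dt)
x = np.zeros_like(t)
x[0] = 2.0  # initial condition

for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * f(x[k])  # x(t+dt) ≈ x(t) + dt * ẋ
# x now decays toward the fixed point x* = 0, where f(x*) = 0
```

The two-variable case works the same way with x as a vector and f returning a vector.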

 

Overview of this tutorial:

  • Explore and understand the behavior of such systems where x is a single variable
  • Consider cases where the state x is a vector representing two variables
Difficulty level: Beginner
Duration: 9:28
Speaker: Bing Wen Brunton

This tutorial provides an introduction to Markov processes through a simple example in which the state transitions are probabilistic.

 

Overview of this tutorial:

  • Understand Markov processes and history dependence
  • Explore the behavior of a two-state telegraph process and understand how its equilibrium distribution depends on its parameters (see the sketch after this list)
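A minimal simulation sketch of such a telegraph process; the switching probabilities are illustrative assumptions:

```python
# A minimal sketch of a two-state telegraph process. Over many steps the
# fraction of time in each state approaches the equilibrium distribution.
import numpy as np

rng = np.random.default_rng(1)
p_01, p_10 = 0.05, 0.10   # P(switch 0->1), P(switch 1->0) per time step
T = 10_000
state = np.zeros(T, dtype=int)
for k in range(T - 1):
    p_switch = p_01 if state[k] == 0 else p_10
    state[k + 1] = 1 - state[k] if rng.random() < p_switch else state[k]

print("time in state 1:", state.mean())          # simulated
print("equilibrium    :", p_01 / (p_01 + p_10))  # analytic: 0.05/0.15 ≈ 0.33
```

The next step depends only on the current state, not on the earlier history: that is the Markov property the tutorial builds on.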
Difficulty level: Beginner
Duration: 3:24
Speaker: Bing Wen Brunton

This tutorial explores how deterministic and stochastic processes can both be part of a dynamical system by:

  • Simulating random walks
  • Investigating the mean and variance of an Ornstein-Uhlenbeck (OU) process
  • Quantifying the OU process's behavior at equilibrium (a sketch follows this list)
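A minimal Euler-Maruyama sketch of the OU process; all parameter values are illustrative assumptions:

```python
# A minimal sketch: simulate many OU trials and check the equilibrium
# statistics against the analytic prediction.
import numpy as np

rng = np.random.default_rng(2)
theta, mu, sigma = 1.0, 0.0, 0.5   # mean-reversion rate, long-run mean, noise scale
dt, T, n_trials = 0.01, 10.0, 500
steps = int(T / dt)

x = np.zeros((n_trials, steps))
for k in range(steps - 1):
    noise = rng.normal(0, np.sqrt(dt), n_trials)  # diffusion (stochastic) term
    x[:, k + 1] = x[:, k] + theta * (mu - x[:, k]) * dt + sigma * noise

# At equilibrium the mean approaches mu and the variance sigma^2 / (2*theta)
print("mean:", x[:, -1].mean(), " var:", x[:, -1].var(),
      " predicted var:", sigma**2 / (2 * theta))
```

Setting theta = 0 removes the deterministic pull and leaves a pure random walk, which ties the list's first two items together.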

 

 

Difficulty level: Beginner
Duration: 2:54
Speaker: Bing Wen Brunton

The goal of this tutorial is to take the modeling tools and intuitions developed in the previous few tutorials and apply them to fit data. The idea is to flip the previous tutorial: instead of generating synthetic data points from a known underlying process, what if we are given data points measured in time and have to learn the underlying process?

This tutorial is in two sections:

  • Section 1 walks through using regression to solve for the coefficient of the OU process from Tutorial 3 (see the sketch after this list)
  • Section 2 generalizes this autoregression framework to higher-order autoregressive models, and we will try to fit data from monkeys at typewriters
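A minimal sketch of the Section 1 idea: simulate an AR(1) process (the discretized OU form) and recover its coefficient with least squares; the coefficient and noise scale are illustrative assumptions:

```python
# A minimal sketch: regress x[t+1] on x[t] to recover the AR(1) coefficient.
import numpy as np

rng = np.random.default_rng(3)
a_true, steps = 0.95, 5000
x = np.zeros(steps)
for k in range(steps - 1):
    x[k + 1] = a_true * x[k] + rng.normal(0, 0.1)  # AR(1) dynamics plus noise

# Least-squares slope of x[t+1] against x[t]
a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
print("true a:", a_true, " estimated a:", a_hat)
# Higher-order AR(p) models regress x[t] on the previous p samples instead.
```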
Difficulty level: Beginner
Duration: 5:34
Speaker: Bing Wen Brunton


This lecture provides a summary of the linear dynamical systems concepts covered in Linear Systems I (Intro Lecture) and Tutorials 1-4, and introduces motor neuroscience/neuroengineering, brain-machine interfaces, and applications of dynamical systems.

Difficulty level: Beginner
Duration: 33:03
Speaker: Krishna Shenoy


This lecture introduces the "hidden states" that neurons and networks have: states that affect their function but are hidden from us as experimenters. Today, you'll work toward understanding how to use graphical models with hidden states to learn about dynamics in the world that we can only access through noisy measurements.

Difficulty level: Beginner
Duration: 31:30
Speaker: Sean Escola

This tutorial introduces the Sequential Probability Ratio Test between two hypotheses H_L and H_R by running simulations of a Drift Diffusion Model (DDM). As independent and identically distributed (i.i.d.) samples arrive from the true data-generating distribution, we accumulate evidence linearly until a stopping criterion is met, then decide which hypothesis to accept. Two types of stopping rule will be implemented: deciding after seeing a fixed amount of data, and deciding once the likelihood ratio passes a pre-defined threshold. Because the observations are noisy, the accumulated evidence has a drift term governed by the expected mean output and a diffusion term governed by the observation noise.

Overview of this tutorial:

  • Simulate Drift-Diffusion Model with different stopping rules
  • Observe the relation between accuracy and reaction time, and get an intuition for the speed/accuracy tradeoff (see the sketch after this list)
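A minimal sketch of a DDM trial under both stopping rules; the drift, noise, and threshold values are illustrative assumptions:

```python
# A minimal sketch of a drift-diffusion accumulator with two stopping rules.
import numpy as np

rng = np.random.default_rng(4)

def ddm_trial(drift=0.1, noise=1.0, threshold=10.0, max_steps=10_000):
    """Accumulate evidence until a stopping rule fires; return (choice, RT)."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += drift + noise * rng.normal()   # one i.i.d. evidence sample
        if abs(evidence) >= threshold:             # rule: threshold crossing
            return evidence > 0, t
    return evidence > 0, max_steps                 # rule: fixed amount of data

choices, rts = zip(*(ddm_trial() for _ in range(1000)))
print("accuracy:", np.mean(choices), " mean RT:", np.mean(rts))
# Lowering `threshold` speeds decisions but reduces accuracy:
# the speed/accuracy tradeoff.
```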
Difficulty level: Beginner
Duration: 4:46
Speaker: Yicheng Fei

This tutorial covers how to simulate a Hidden Markov Model (HMM) and observe how changing the transition probability and observation noise impacts what the samples look like. Then we'll look at how uncertainty increases as we make future predictions without evidence (from observations), and how to gain information from the observations.

 

Overview of this tutorial:

  • Build an HMM in Python and generate sample data
  • Calculate how predictive probabilities propagate in a Markov chain with no evidence
  • Combine new evidence with predictions from past evidence to estimate latent states (see the sketch after this list)
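A minimal sketch of a two-state HMM with Gaussian observations, including the no-evidence prediction step and one evidence update; all parameter values are illustrative assumptions:

```python
# A minimal HMM sketch: sample data, propagate predictions, update on evidence.
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.95, 0.05],    # transition probabilities between the 2 states
              [0.10, 0.90]])
means, obs_noise = np.array([-1.0, 1.0]), 0.5

# Sample a latent trajectory and noisy measurements from it
T = 200
z = np.zeros(T, dtype=int)
for t in range(T - 1):
    z[t + 1] = rng.choice(2, p=A[z[t]])
obs = means[z] + rng.normal(0, obs_noise, T)

# Predictive probabilities with no evidence: repeated multiplication by A;
# uncertainty grows toward the stationary distribution
p = np.array([1.0, 0.0])
for _ in range(50):
    p = p @ A
print("prediction after 50 steps:", p)

# One update step combining the prediction with a new measurement (Bayes' rule)
lik = np.exp(-0.5 * ((obs[-1] - means) / obs_noise) ** 2)
p_post = p * lik / (p * lik).sum()
print("posterior over states:", p_post)
```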
Difficulty level: Beginner
Duration: 4:48
Speaker: Yicheng Fei

This tutorial covers how to infer a latent model when our states are continuous. Particular attention is paid to the Kalman filter and its mathematical foundation.

 

Overview of this tutorial:

  • Review linear dynamical systems
  • Learn about and implement the Kalman filter
  • Explore how the Kalman filter can be used to smooth data from an eye-tracking experiment (see the sketch after this list)
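A minimal one-dimensional Kalman-filter sketch; the dynamics and noise parameters are illustrative assumptions, and the tutorial's eye-tracking application rests on the same predict/update steps:

```python
# A minimal sketch: filter noisy measurements of a latent linear system.
import numpy as np

rng = np.random.default_rng(6)
a, q, r = 0.99, 0.1, 1.0   # dynamics coefficient, process noise, measurement noise

# Simulate the latent state and noisy measurements of it
T = 200
s = np.zeros(T)
for t in range(T - 1):
    s[t + 1] = a * s[t] + rng.normal(0, np.sqrt(q))
y = s + rng.normal(0, np.sqrt(r), T)

# Kalman filter: predict with the dynamics, then correct with the measurement
mu, P = 0.0, 1.0
estimates = np.zeros(T)
for t in range(T):
    mu, P = a * mu, a**2 * P + q                # predict step
    K = P / (P + r)                             # Kalman gain
    mu, P = mu + K * (y[t] - mu), (1 - K) * P   # update step
    estimates[t] = mu

print("filter MSE:", np.mean((estimates - s) ** 2),
      " raw MSE:", np.mean((y - s) ** 2))
```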
Difficulty level: Beginner
Duration: 2:38


This lecture covers multiple topics on dynamical neural modeling and inference and their application to basic neuroscience and neurotechnology design: (1) how to develop multiscale dynamical models and filters; (2) how to study neural dynamics across spatiotemporal scales; (3) how to dissociate and model behaviorally relevant neural dynamics; (4) how to model neural dynamics in response to electrical stimulation input; and (5) how to apply these techniques in developing brain-machine interfaces (BMIs) to restore lost motor or emotional function.

Difficulty level: Beginner
Duration: 30:40

This lecture provides an introduction to optimal control, describes open-loop and closed-loop control, and covers applications to motor control.

Difficulty level: Beginner
Duration: 36:23
Speaker: Maurice Smith