This tutorial covers how to simulate a Hidden Markov Model (HMM) and observe how changing the transition probability and observation noise affects what the samples look like. Then we'll look at how uncertainty increases as we make future predictions without evidence from observations, and how to gain information from new observations.

 

Overview of this tutorial:

  • Build an HMM in Python and generate sample data
  • Calculate how predictive probabilities propagate in a Markov chain with no evidence
  • Combine new evidence and prediction from past evidence to estimate latent states
Difficulty level: Beginner
Duration: 4:48
Speaker: Yicheng Fei
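As an illustration of the kind of model this tutorial builds, here is a minimal sketch (not the tutorial's own code) of sampling from a two-state HMM with Gaussian observation noise. The function name and the `p_stay` and `sigma` parameters are illustrative assumptions:

```python
import numpy as np

def sample_hmm(T, p_stay, sigma, rng=None):
    """Sample a binary-state HMM with Gaussian observation noise.

    p_stay: probability of remaining in the current latent state
    sigma:  standard deviation of the observation noise
    """
    rng = np.random.default_rng(rng)
    states = np.empty(T, dtype=int)
    states[0] = rng.integers(2)
    for t in range(1, T):
        # Transition: stay with probability p_stay, otherwise switch
        stay = rng.random() < p_stay
        states[t] = states[t - 1] if stay else 1 - states[t - 1]
    # Observations: latent state mapped to +/-1, plus Gaussian noise
    means = np.where(states == 1, 1.0, -1.0)
    obs = means + sigma * rng.standard_normal(T)
    return states, obs

states, obs = sample_hmm(T=500, p_stay=0.98, sigma=0.5, rng=0)
```

Increasing `p_stay` produces longer dwell times in each state, while increasing `sigma` makes the two states harder to distinguish from the observations alone.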

This tutorial covers how to infer a latent model when our states are continuous. Particular attention is paid to the Kalman filter and its mathematical foundation.

 

Overview of this tutorial:

  • Review linear dynamical systems
  • Learn about and implement the Kalman filter
  • Explore how the Kalman filter can be used to smooth data from an eye-tracking experiment
Difficulty level: Beginner
Duration: 2:38
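The predict/update recursion at the heart of the Kalman filter can be sketched in one dimension as follows (a minimal illustration, not the tutorial's own code; the dynamics and noise parameters are assumed for the example):

```python
import numpy as np

def kalman_filter_1d(obs, A=1.0, Q=0.01, C=1.0, R=0.25, mu0=0.0, V0=1.0):
    """Run a 1-D Kalman filter over a sequence of observations.

    Latent dynamics: x_t = A x_{t-1} + w,  w ~ N(0, Q)
    Observations:    y_t = C x_t + v,      v ~ N(0, R)
    """
    mus, Vs = [], []
    mu, V = mu0, V0
    for y in obs:
        # Predict: propagate mean and variance through the dynamics
        mu_pred = A * mu
        V_pred = A * V * A + Q
        # Update: correct the prediction with the new observation
        K = V_pred * C / (C * V_pred * C + R)   # Kalman gain
        mu = mu_pred + K * (y - C * mu_pred)
        V = (1 - K * C) * V_pred
        mus.append(mu)
        Vs.append(V)
    return np.array(mus), np.array(Vs)

y = np.array([0.9, 1.1, 1.0, 0.8, 1.2])
mus, Vs = kalman_filter_1d(y)
```

Note how the posterior variance shrinks as observations accumulate: each update step reduces the uncertainty that the predict step added.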

Neuromatch Academy aims to introduce traditional and emerging tools of computational neuroscience to trainees. It is appropriate for student populations ranging from undergraduates to faculty in academic settings, as well as industry professionals. In addition to teaching the technical details of computational methods, Neuromatch Academy also provides a curriculum centered on modern neuroscience concepts taught by leading professors, along with explicit instruction on how and why to apply models.

 

This lecture covers multiple topics on dynamical neural modeling and inference and their application to basic neuroscience and neurotechnology design: (1) how to develop multiscale dynamical models and filters; (2) how to study neural dynamics across spatiotemporal scales; (3) how to dissociate and model behaviorally relevant neural dynamics; (4) how to model neural dynamics in response to electrical stimulation input; and (5) how to apply these techniques to develop brain-machine interfaces (BMIs) that restore lost motor or emotional function.

Difficulty level: Beginner
Duration: 30:40

This lecture provides an introduction to optimal control, describes open-loop and closed-loop control, and discusses their application to motor control.

Difficulty level: Beginner
Duration: 36:23
Speaker: Maurice Smith

In this tutorial, you will perform a Sequential Probability Ratio Test between two hypotheses, H_L and H_R, by running simulations of a Drift Diffusion Model (DDM).

Difficulty level: Beginner
Duration: 4:46
Speaker: Zhengwei Wu
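A single DDM trial viewed as an SPRT can be sketched as follows (an illustrative toy, not the tutorial's code; the drift, noise, and threshold values are assumptions):

```python
import numpy as np

def ddm_sprt(drift=0.1, noise=1.0, threshold=2.0, dt=0.01, rng=None):
    """Simulate one Drift Diffusion Model trial as an SPRT.

    The accumulated evidence (a log likelihood ratio) drifts toward the
    correct hypothesis; a decision is made when it crosses +/- threshold.
    Returns the choice ('HR' or 'HL') and the decision time.
    """
    rng = np.random.default_rng(rng)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # Euler step: deterministic drift plus scaled Gaussian noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("HR" if x > 0 else "HL"), t

choice, rt = ddm_sprt(rng=0)
```

Raising the threshold trades longer decision times for higher accuracy, which is the speed-accuracy tradeoff the SPRT formalizes.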

In this tutorial, you will implement a continuous control task: you will design control inputs for a linear dynamical system to reach a target state. 

Difficulty level: Beginner
Duration: 10:02
Speaker: Zhengwei Wu
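The idea of designing control inputs to drive a linear system to a target can be sketched in one dimension (a toy illustration under assumed dynamics, not the tutorial's own task):

```python
# Hypothetical scalar linear dynamical system: x_{t+1} = a*x_t + b*u_t
a, b = 0.9, 0.5
x_target = 1.0

x = 0.0
trajectory = [x]
for _ in range(10):
    # Choose the input that cancels the error between a*x and the target
    u = (x_target - a * x) / b
    x = a * x + b * u
    trajectory.append(x)
```

In practice (and in the tutorial's setting) one also penalizes large control inputs, so the optimal controller approaches the target gradually rather than in a single aggressive step.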

This lecture covers the utility of action: vigor, the neuroeconomics of movement, and applications to foraging and the marginal value theorem.

Difficulty level: Beginner
Duration: 28:48
Speaker: Reza Shadmehr

This lecture provides an introduction to a variety of topics in Reinforcement Learning.

Difficulty level: Beginner
Duration: 39:12
Speaker: Doina Precup

This tutorial shows how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning, and how to examine TD errors at the presentation of the conditioned and unconditioned stimuli (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if dopamine represents a "canonical" model-free RPE.

Difficulty level: Beginner
Duration: 6:57
Speaker: Eric DeWitt
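A minimal TD(0) sketch of this setting (not the tutorial's code): states are time steps within a trial, the CS marks trial onset, and a unit reward (the US) arrives at an assumed time `us_time`. Over training, the TD error at the US shrinks as the value function comes to predict it:

```python
import numpy as np

def td_learning(n_trials=200, n_steps=20, us_time=15, alpha=0.1, gamma=0.98):
    """Tabular TD(0) state-value learning over repeated conditioning trials.

    Returns the learned values and the TD errors (RPEs) from every trial.
    """
    V = np.zeros(n_steps)
    rpe_history = []
    for _ in range(n_trials):
        rpes = np.zeros(n_steps)
        for t in range(n_steps - 1):
            r = 1.0 if t == us_time else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # TD error (RPE)
            V[t] += alpha * delta
            rpes[t] = delta
        rpe_history.append(rpes)
    return V, np.array(rpe_history)

V, rpes = td_learning()
```

Early in training the RPE spikes at the US; late in training the reward is fully predicted, so the RPE at the US approaches zero — the signature pattern attributed to dopaminergic RPEs.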

In this tutorial, you will use 'bandits' to understand the fundamentals of how a policy interacts with the learning algorithm in reinforcement learning.

Difficulty level: Beginner
Duration: 6:55
Speaker: Eric DeWitt
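The interaction between a policy and value learning can be sketched with an epsilon-greedy agent on a Gaussian bandit (an illustrative example, not the tutorial's code; the arm means and `epsilon` are assumptions):

```python
import numpy as np

def run_bandit(true_means, n_steps=1000, epsilon=0.1, rng=None):
    """Epsilon-greedy action-value learning on a Gaussian multi-armed bandit."""
    rng = np.random.default_rng(rng)
    k = len(true_means)
    Q = np.zeros(k)       # estimated value of each arm
    counts = np.zeros(k)  # times each arm was pulled
    rewards = []
    for _ in range(n_steps):
        if rng.random() < epsilon:
            a = int(rng.integers(k))       # explore: random arm
        else:
            a = int(np.argmax(Q))          # exploit: current best arm
        r = true_means[a] + rng.standard_normal()
        counts[a] += 1
        Q[a] += (r - Q[a]) / counts[a]     # incremental sample-mean update
        rewards.append(r)
    return Q, counts, np.array(rewards)

Q, counts, rewards = run_bandit([0.0, 0.5, 1.0], rng=0)
```

The policy (here, epsilon-greedy) determines which arms generate data, which in turn determines which value estimates improve — the exploration-exploitation tradeoff at the heart of the tutorial.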

In this tutorial, you will learn how to act in the more realistic setting of sequential decisions, formalized by Markov Decision Processes (MDPs). In a sequential decision problem, the actions executed in one state not only may lead to immediate rewards (as in a bandit problem), but may also affect the states experienced next (unlike a bandit problem). Each individual action may therefore affect all future rewards. Thus, making decisions in this setting requires considering each action in terms of its expected cumulative future reward.

Difficulty level: Beginner
Duration: 11:16
Speaker: Marcelo Mattar
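One standard way to compute expected cumulative future reward in a known finite MDP is value iteration; the sketch below (an illustration, not necessarily the tutorial's method; the two-state example MDP is an assumption) shows how actions are evaluated by their long-run, not just immediate, consequences:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.

    P[a, s, s'] : transition probabilities; R[s, a] : expected rewards.
    Returns state values and the greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("asp,p->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=1)

# Toy two-state, two-action MDP: action 1 moves toward / stays in state 1,
# and only staying in state 1 pays reward.
P = np.array([[[1.0, 0.0], [1.0, 0.0]],   # action 0: go/stay left
              [[0.0, 1.0], [0.0, 1.0]]])  # action 1: go/stay right
R = np.array([[0.0, 0.0],
              [0.0, 1.0]])
V, policy = value_iteration(P, R)
```

Even though action 1 in state 0 yields zero immediate reward, it is optimal because it leads to the rewarding state — exactly the distinction between bandits and MDPs described above.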

In this tutorial, you will implement one of the simplest model-based Reinforcement Learning algorithms, Dyna-Q. You will understand what a world model is, how it can improve the agent's policy, and the situations in which model-based algorithms are more advantageous than their model-free counterparts.

Difficulty level: Beginner
Duration: 9:10
Speaker: Marcelo Mattar
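Dyna-Q can be sketched on a tiny deterministic chain environment (a toy illustration under assumed dynamics, not the tutorial's task): after each real step, the agent stores the transition in its world model and replays model-sampled transitions to update values without further environment interaction.

```python
import numpy as np

def dyna_q(n_episodes=50, n_planning=10, alpha=0.1, gamma=0.95,
           epsilon=0.1, rng=None):
    """Tabular Dyna-Q on a 5-state chain; reaching state 4 pays 1 and
    ends the episode."""
    rng = np.random.default_rng(rng)
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    model = {}  # learned deterministic world model: (s, a) -> (r, s')

    def step(s, a):
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        return (1.0 if s2 == n_states - 1 else 0.0), s2

    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < epsilon:          # explore
                a = int(rng.integers(n_actions))
            else:                                # exploit, random tie-break
                a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
            r, s2 = step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            model[(s, a)] = (r, s2)
            # Planning: replay transitions sampled from the world model
            for _ in range(n_planning):
                ps, pa = list(model)[rng.integers(len(model))]
                pr, ps2 = model[(ps, pa)]
                Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() - Q[ps, pa])
            s = s2
    return Q

Q = dyna_q(rng=0)
```

Setting `n_planning=0` recovers plain Q-learning; the planning replays are what let the model-based agent propagate reward information far faster per real environment step.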

This lecture highlights up-and-coming issues in the neuroscience of reinforcement learning.

Difficulty level: Beginner
Duration: 33:25
Speaker: Tim Behrens

As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture introduces the FAIR principles, how they relate to the field of computational neuroscience, and the resources available.

Difficulty level: Beginner
Duration: 8:47
Speaker: Sharon Crook

This lecture covers how FAIR practices affect personalized data models, including workflows, challenges, and how to improve these practices.

Difficulty level: Beginner
Duration: 13:16
Speaker: Kelly Shen

This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.

Difficulty level: Beginner
Duration: 15:14

This lecture covers the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.

Difficulty level: Beginner
Duration: 14:19
Speaker: Michael Denker

This session will include presentations by members of the INCF Community of infrastructure that embraces the FAIR principles.

 

This lecture provides an overview of The Virtual Brain Simulation Platform.

 

Difficulty level: Beginner
Duration: 9:36
Speaker: Petra Ritter

The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.

Difficulty level: Intermediate
Duration: 5:17
Speaker: Mike X. Cohen

The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.

Difficulty level: Intermediate
Duration: 11:37
Speaker: Mike X. Cohen