Course:

This lecture focuses on advanced uses of Bayesian statistics for understanding the brain.

Difficulty level: Beginner

Duration: 26:01

Speaker: Xaq Pitkow

Course:

This lecture provides an introduction to linear systems.

Difficulty level: Beginner

Duration: 30:55

Speaker: Eric Shea-Brown

Course:

This tutorial covers the behavior of dynamical systems, systems that evolve in time, where the rules by which they evolve in time are described precisely by a differential equation.

Differential equations are equations that express the **rate of change** of the state variable 𝑥. One typically describes this rate of change using the derivative of 𝑥 with respect to time (𝑑𝑥/𝑑𝑡) on the left-hand side of the differential equation: 𝑑𝑥/𝑑𝑡 = 𝑓(𝑥). A common notational shorthand is to write 𝑥̇ for 𝑑𝑥/𝑑𝑡. The dot means "the derivative with respect to time".
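The update rule implied by 𝑑𝑥/𝑑𝑡 = 𝑓(𝑥) can be sketched with a forward-Euler integrator. The choice 𝑓(𝑥) = −2𝑥, the step size, and the initial condition below are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

# Forward-Euler integration of dx/dt = f(x). The decay function
# f(x) = -2*x, the step size, and the initial condition are
# illustrative assumptions for this sketch.
def f(x):
    return -2.0 * x

t = np.linspace(0.0, 1.0, 1001)   # 1000 steps of size dt = 0.001
dt = t[1] - t[0]
x = np.empty_like(t)
x[0] = 1.0                        # initial condition x(0) = 1
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * f(x[k])    # x_{k+1} = x_k + dt * x_dot_k

# The exact solution is x(t) = exp(-2 t), so x(1) should be near exp(-2)
print(x[-1])
```

For a small enough step size, the numerical trajectory closely tracks the exact exponential decay.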

Difficulty level: Beginner

Duration: 9:28

Speaker: Bing Wen Brunton

Course:

This tutorial provides an introduction to the Markov process in a simple example where the state transitions are probabilistic. The aims of this tutorial are to help you understand Markov processes and history dependence, as well as to explore the behavior of a two-state telegraph process and understand how its equilibrium distribution depends on its parameters.
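A minimal simulation of a two-state telegraph process of the kind described above, with illustrative (assumed) switching probabilities; the equilibrium probability of being in state 1 is p01 / (p01 + p10):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state telegraph (Markov) process: at each step the state
# switches 0 -> 1 with probability p01 and 1 -> 0 with probability
# p10. The switching probabilities are illustrative assumptions.
p01, p10 = 0.1, 0.3
n_steps = 200_000

state = 0
states = np.empty(n_steps, dtype=int)
for t in range(n_steps):
    if state == 0 and rng.random() < p01:
        state = 1
    elif state == 1 and rng.random() < p10:
        state = 0
    states[t] = state

# Equilibrium probability of state 1: p01 / (p01 + p10) = 0.25
print(states.mean())
```

Changing p01 and p10 shifts the fraction of time spent in each state, which is the parameter dependence the tutorial explores.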

Difficulty level: Beginner

Duration: 3:24

Speaker: Bing Wen Brunton

Course:

This tutorial builds on how deterministic and stochastic processes can both be a part of a dynamical system by simulating random walks, investigating the mean and variance of an Ornstein-Uhlenbeck (OU) process, and quantifying the OU process's behavior at equilibrium.
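The OU process's mean and variance at equilibrium can be sketched with a simple Euler-Maruyama simulation; the parameter values below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
#   dx = -theta * x * dt + sigma * dW.
# All parameter values are illustrative assumptions.
theta, sigma, dt = 1.0, 0.5, 0.01
n_steps, n_trials = 2000, 500

x = np.zeros(n_trials)                 # all walkers start at 0
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)

# At equilibrium the mean -> 0 and the variance -> sigma**2 / (2 * theta)
print(x.mean(), x.var())
```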

Difficulty level: Beginner

Duration: 2:54

Speaker: Bing Wen Brunton

Course:

The goal of this tutorial is to take the modeling tools and intuitions developed in the previous few tutorials and use them to *fit data*. The concept is to flip the previous tutorial: instead of generating synthetic data points from a known underlying process, what if we are given data points measured in time and have to learn the underlying process?

This tutorial is in two sections:

- **Section 1** walks through using regression of the data to solve for the coefficient of an OU process from Tutorial 3.
- **Section 2** generalizes this auto-regression framework to higher-order autoregressive models, and we will try to fit data from monkeys at typewriters.
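The Section 1 idea, recovering a process coefficient by regressing each sample on the previous one, can be sketched for a first-order autoregressive process; the true coefficient and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Recover the coefficient of a first-order autoregressive process
#   x_{t+1} = a * x_t + noise
# by least-squares regression of x_{t+1} on x_t. The true value of
# a and the noise level are illustrative assumptions.
a_true, noise = 0.9, 0.1
n = 5000

x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = a_true * x[t] + noise * rng.standard_normal()

# Least-squares estimate: a_hat = <x_t x_{t+1}> / <x_t x_t>
a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
print(a_hat)
```

Higher-order autoregressive fits (Section 2) generalize this by regressing each sample on several preceding samples at once.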

Difficulty level: Beginner

Duration: 5:34

Speaker: Bing Wen Brunton

Course:

This lecture provides a summary of concepts associated with linear dynamical systems, covered in Linear Systems I (Intro Lecture) and Tutorials 1 - 4, and also introduces motor neuroscience/neuroengineering, brain-machine interfaces, and applications of dynamical systems.

Difficulty level: Beginner

Duration: 33:03

Speaker: Krishna Shenoy

Course:

This lesson provides an introduction to the Decision Making course, specifically focusing on hidden states in neural systems.

Difficulty level: Beginner

Duration: 31:30

Speaker: Sean Escola

Course:

This tutorial introduces the *Sequential Probability Ratio Test* between two hypotheses 𝐻𝐿 and 𝐻𝑅 by running simulations of a *Drift Diffusion Model (DDM)*. As independent and identically distributed (*i.i.d.*) samples from the true data-generating distribution come in, we accumulate our evidence linearly until a certain criterion is met before deciding which hypothesis to accept. Two types of stopping rule will be implemented: stopping after seeing a fixed amount of data, and stopping after the likelihood ratio passes a pre-defined threshold. Due to the noisy nature of observations, there will be a *drift* term governed by the expected mean output and a *diffusion* term governed by observation noise.
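A minimal sketch of the threshold-stopping variant, assuming Gaussian likelihoods with known noise; the values of mu, sigma, and the threshold are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sequential Probability Ratio Test between H_L: mean = -mu and
# H_R: mean = +mu for Gaussian samples with known sigma, stopping
# when the accumulated log-likelihood ratio crosses a threshold.
# mu, sigma, and threshold are illustrative assumptions.
mu, sigma, threshold = 0.5, 1.0, 5.0

def sprt(true_mean, max_samples=10_000):
    llr = 0.0
    for n in range(1, max_samples + 1):
        s = rng.normal(true_mean, sigma)
        # log [p(s | H_R) / p(s | H_L)] for the two Gaussian likelihoods
        llr += 2 * mu * s / sigma**2
        if llr >= threshold:
            return "H_R", n
        if llr <= -threshold:
            return "H_L", n
    return "undecided", max_samples

decision, n_samples = sprt(true_mean=mu)   # data truly drawn under H_R
print(decision, n_samples)
```

The per-sample increment 2·mu·s/sigma² is the drift term; its sample-to-sample variability is the diffusion term.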

Difficulty level: Beginner

Duration: 4:46

Speaker: Yicheng Fei

Course:

This tutorial covers how to simulate a Hidden Markov Model (HMM) and observe how changing the transition probability and observation noise impacts what the samples look like. Then we'll look at how uncertainty increases as we make future predictions without evidence (from observations) and how to gain information from the observations.
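Sampling from a two-state HMM with the transition probability and observation noise as free knobs can be sketched as follows; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sample from a two-state Hidden Markov Model: the hidden state
# switches with probability p_switch per step, and each observation
# is the state's mean (-1 or +1) plus Gaussian noise. All parameter
# values are illustrative assumptions.
p_switch, obs_noise, n_steps = 0.05, 0.5, 1000

states = np.empty(n_steps, dtype=int)
states[0] = 0
for t in range(1, n_steps):
    if rng.random() < p_switch:
        states[t] = 1 - states[t - 1]   # flip the hidden state
    else:
        states[t] = states[t - 1]       # stay in the same state

means = np.array([-1.0, 1.0])
observations = means[states] + obs_noise * rng.standard_normal(n_steps)
print(observations[:5])
```

Raising p_switch makes the state flicker faster; raising obs_noise makes the two states harder to tell apart from the samples, which is exactly the trade-off the tutorial explores.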

Difficulty level: Beginner

Duration: 4:48

Speaker: Yicheng Fei

Course:

This tutorial covers how to infer a latent model when our states are continuous. Particular attention is paid to the Kalman filter and its mathematical foundation.
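A one-dimensional Kalman filter of the kind covered here can be sketched as a predict/correct loop; the dynamics and noise parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# One-dimensional Kalman filter for the latent dynamics
#   x_t = a * x_{t-1} + process noise,  y_t = x_t + measurement noise.
# All parameter values are illustrative assumptions.
a, q, r = 0.95, 0.1, 0.5          # dynamics, process var, obs var
n = 500

# Simulate latent states and noisy observations
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(n)

# Filter: predict forward, then correct with the Kalman gain
mu, P = 0.0, 1.0
estimates = np.empty(n)
for t in range(n):
    mu_pred, P_pred = a * mu, a * a * P + q      # predict
    K = P_pred / (P_pred + r)                    # Kalman gain
    mu = mu_pred + K * (y[t] - mu_pred)          # correct
    P = (1 - K) * P_pred
    estimates[t] = mu

# The filtered estimate should track x better than the raw observations
print(np.mean((estimates - x) ** 2), np.mean((y - x) ** 2))
```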

Difficulty level: Beginner

Duration: 2:38

Speaker: Caroline Haimerl and Byron Galbraith

Course:

This lecture covers multiple topics on dynamical neural modeling and inference and their application to basic neuroscience and neurotechnology design.

Difficulty level: Beginner

Duration: 30:40

Speaker: Maryam M. Shanechi

Course:

This lecture provides an introduction to optimal control, different types of control, as well as potential applications.

Difficulty level: Beginner

Duration: 36:23

Speaker: Maurice Smith

Course:

In this tutorial, you will perform a *Sequential Probability Ratio Test* between two hypotheses *HL* and *HR* by running simulations of a *Drift Diffusion Model (DDM)*.

Difficulty level: Beginner

Duration: 4:46

Speaker: Zhengwei Wu

Course:

In this tutorial, you will implement a continuous control task: you will design control inputs for a linear dynamical system to reach a target state.
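One way to sketch such a control design for a scalar linear system is a feedback law that cancels the open-loop drift and closes a fraction of the remaining error each step; the system parameters, gain, and target below are illustrative assumptions:

```python
# Drive the scalar linear system  x_{t+1} = a * x_t + b * u_t
# to a target state. The feedback law cancels the open-loop drift
# and closes a fraction `gain` of the remaining error each step.
# All parameter values are illustrative assumptions.
a, b, target, gain = 1.1, 0.5, 2.0, 0.3
n_steps = 50

x = 0.0
trajectory = []
for _ in range(n_steps):
    # chosen so that x_{t+1} = x_t + gain * (target - x_t)
    u = gain * (target - x) / b - (a - 1) * x / b
    x = a * x + b * u
    trajectory.append(x)
print(trajectory[-1])
```

Note that the open-loop system (a > 1) is unstable, yet the controlled trajectory converges geometrically to the target.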

Difficulty level: Beginner

Duration: 10:02

Speaker: Zhengwei Wu

Course:

This lecture covers the utility of action, including the vigor and neuroeconomics of movement and applications to foraging and the marginal value theorem.

Difficulty level: Beginner

Duration: 28:48

Speaker: Reza Shadmehr

Course:

This lecture provides an introduction to a variety of topics in reinforcement learning.

Difficulty level: Beginner

Duration: 39:12

Speaker: Doina Precup

Course:

This tutorial shows how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and how to examine TD errors at the presentation of the conditioned and unconditioned stimuli (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if dopamine represents a "canonical" model-free RPE.
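Tabular TD(0) learning of a within-trial value function can be sketched as follows, assuming a reward (the US) delivered at a fixed time step in each trial; the timings, trial count, and learning rate are illustrative assumptions:

```python
import numpy as np

# TD(0) learning of state values over within-trial time steps, with
# a reward (US) delivered at step t_us of every trial. Trial length,
# reward timing, and the learning rate are illustrative assumptions.
n_steps, t_us, alpha, n_trials = 15, 10, 0.1, 500

V = np.zeros(n_steps)              # value of each within-trial time step
for _ in range(n_trials):
    for t in range(n_steps - 1):
        r = 1.0 if t + 1 == t_us else 0.0
        delta = r + V[t + 1] - V[t]    # TD error (discount factor = 1)
        V[t] += alpha * delta

print(V)
```

After learning, value rises for the steps preceding the reward and the TD error migrates from the reward time to the earliest predictive step, mirroring the canonical RPE account of dopamine.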

Difficulty level: Beginner

Duration: 6:57

Speaker: Eric DeWitt

Course:

In this tutorial, you will use 'bandits' to understand the fundamentals of how a policy interacts with the learning algorithm in reinforcement learning.
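An epsilon-greedy policy on a Bernoulli bandit, paired with an incremental value-learning rule, can be sketched as follows; the arm payout probabilities and epsilon are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Epsilon-greedy policy on a 3-armed Bernoulli bandit: keep a running
# value estimate per arm and mostly pick the best-looking arm.
# The payout probabilities and epsilon are illustrative assumptions.
p_reward = np.array([0.2, 0.5, 0.8])   # true payout probability per arm
epsilon, n_pulls = 0.1, 5000

Q = np.zeros(3)            # value estimate per arm
counts = np.zeros(3)       # pulls per arm
for _ in range(n_pulls):
    if rng.random() < epsilon:
        arm = int(rng.integers(3))      # explore: random arm
    else:
        arm = int(np.argmax(Q))         # exploit: best-looking arm
    r = float(rng.random() < p_reward[arm])
    counts[arm] += 1
    Q[arm] += (r - Q[arm]) / counts[arm]   # incremental mean update

print(Q, counts)
```

The policy (epsilon-greedy) and the learning rule (incremental mean) interact: too little exploration and poor early estimates can lock in a suboptimal arm.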

Difficulty level: Beginner

Duration: 6:55

Speaker: Eric DeWitt

Course:

In this tutorial, you will learn how to act in the more realistic setting of sequential decisions, formalized by Markov Decision Processes (MDPs). In a sequential decision problem, the actions executed in one state not only may lead to immediate rewards (as in a bandit problem), but may also affect the states experienced next (unlike a bandit problem). Each individual action may therefore affect all future rewards. Thus, making decisions in this setting requires considering each action in terms of its expected **cumulative** future reward.
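The idea of valuing actions by their expected cumulative future reward can be sketched with value iteration on a tiny assumed MDP; the state layout, rewards, and discount factor below are illustrative, not from the tutorial:

```python
import numpy as np

# Value iteration on a tiny deterministic MDP: 3 states in a line,
# action 0 moves left, action 1 moves right (clipped at the ends),
# and the transition from state 1 to state 2 pays reward 1. The
# layout, rewards, and discount factor are illustrative assumptions.
n_states, gamma = 3, 0.9
next_state = np.array([[0, 1], [0, 2], [1, 2]])      # next_state[s, a]
reward = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

V = np.zeros(n_states)
for _ in range(200):
    # Bellman optimality backup: V(s) = max_a [ r(s,a) + gamma * V(s') ]
    V = np.max(reward + gamma * V[next_state], axis=1)

print(V)
```

Even states with no immediate reward acquire high value here, because the backup propagates the discounted future reward backwards through the state transitions.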

Difficulty level: Beginner

Duration: 11:16

Speaker: Marcelo Mattar
