This tutorial covers how to infer the latent states of a model when those states are continuous. Particular attention is paid to the Kalman filter and its mathematical foundation.
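A minimal one-dimensional sketch of the idea: the variable names, dynamics, and noise parameters below are illustrative choices of ours, not the tutorial's actual code.

```python
import numpy as np

# 1-D Kalman filter sketch (toy model of our own design).
# Latent state:  x_t = a * x_{t-1} + process noise (variance q)
# Observation:   y_t = x_t + measurement noise (variance r)
def kalman_filter(y, a=1.0, q=0.1, r=1.0, mu0=0.0, var0=1.0):
    mu, var = mu0, var0
    estimates = []
    for obs in y:
        # Predict: propagate the posterior mean and variance through the dynamics
        mu_pred = a * mu
        var_pred = a**2 * var + q
        # Update: blend prediction and observation via the Kalman gain
        k = var_pred / (var_pred + r)
        mu = mu_pred + k * (obs - mu_pred)
        var = (1 - k) * var_pred
        estimates.append(mu)
    return np.array(estimates)

# Noisy observations of a latent state sitting near 5.0
rng = np.random.default_rng(0)
y = 5.0 + rng.normal(0.0, 1.0, size=200)
est = kalman_filter(y)
```

Because the filter weights each new observation by the gain `k`, the estimate `est` is much less noisy than the raw observations and settles near the true latent value.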
This lecture covers multiple topics on dynamical neural modeling and inference and their application to basic neuroscience and neurotechnology design.
This lecture provides an introduction to optimal control, different types of control, as well as potential applications.
In this tutorial, you will perform a Sequential Probability Ratio Test between two hypotheses HL and HR by running simulations of a Drift Diffusion Model (DDM).
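The core of the exercise can be sketched as follows; the Gaussian observation model, thresholds, and function names are our own illustrative assumptions, not the tutorial's code.

```python
import numpy as np

# SPRT sketch (toy setup of our own design): observations come from one of
# two Gaussians, H_L with mean -mu or H_R with mean +mu, both with std sigma.
# Accumulating the log-likelihood ratio (LLR) gives a drift-diffusion process
# that we run until it crosses a decision threshold.
def sprt(rng, true_mean, mu=0.5, sigma=1.0, threshold=3.0, max_steps=10000):
    llr = 0.0  # accumulated evidence (the decision variable)
    for t in range(1, max_steps + 1):
        x = rng.normal(true_mean, sigma)
        # log [p(x|H_R) / p(x|H_L)] for two Gaussians differing only in mean
        llr += 2 * mu * x / sigma**2
        if llr >= threshold:
            return "HR", t
        if llr <= -threshold:
            return "HL", t
    return "undecided", max_steps

rng = np.random.default_rng(1)
decisions = [sprt(rng, true_mean=+0.5)[0] for _ in range(200)]
accuracy = decisions.count("HR") / len(decisions)
```

Raising `threshold` trades speed for accuracy: decisions take longer on average but the error rate drops, which is the speed–accuracy trade-off the DDM formalizes.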
In this tutorial, you will implement a continuous control task: you will design control inputs for a linear dynamical system to reach a target state.
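A scalar sketch of the setting, with dynamics, gain, and target values chosen by us purely for illustration: state feedback drives a linear system toward a target whenever the closed-loop dynamics are contractive.

```python
# Toy linear control sketch (parameters are our own assumptions):
# scalar dynamics x_{t+1} = a*x_t + b*u_t, driven toward target x_star.
# The feedback law u_t = k*(x_star - x_t) gives closed-loop error dynamics
# e_{t+1} = (a - b*k) * e_t, which contract when |a - b*k| < 1.
a, b = 1.0, 0.5
x_star = 2.0
k = 1.2  # feedback gain; here |a - b*k| = 0.4 < 1, so errors shrink

x = 0.0
traj = [x]
for t in range(30):
    u = k * (x_star - x)   # control input computed from the current error
    x = a * x + b * u      # apply the linear dynamics
    traj.append(x)
```

After 30 steps the error has shrunk by a factor of 0.4 per step, so the state is essentially at the target.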
This lecture covers the utility of action, including the vigor and neuroeconomics of movement and applications to foraging and the marginal value theorem.
This lecture provides an introduction to a variety of topics in reinforcement learning.
In this tutorial, you will estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will give you an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if dopamine represents a "canonical" model-free RPE.
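A stripped-down, time-indexed version of this setup (the trial length, CS/US timing, and learning parameters below are our own toy choices): TD(0) learns a value for each time step within a trial, and the RPE at the US shrinks as the reward becomes predicted.

```python
import numpy as np

# TD(0) sketch of classical conditioning (toy setup of our own design):
# each trial has T time steps; the CS appears at t=5 and the US (reward 1)
# at t=10. V[t] estimates expected future reward; delta is the RPE.
T, cs_t, us_t = 15, 5, 10
alpha, gamma = 0.1, 1.0
V = np.zeros(T + 1)  # V[T] stays 0 (end of trial)

history = []  # per-trial record of TD errors
for trial in range(500):
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if t == us_t else 0.0
        delta = r + gamma * V[t + 1] - V[t]  # TD error (RPE)
        deltas[t] = delta
        V[t] += alpha * delta
    history.append(deltas.copy())
```

On the first trial the RPE at the US is maximal; after learning, `V` around the CS predicts the reward and the US-time RPE falls to roughly zero — the signature pattern expected of a model-free RPE.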
In this tutorial, you will use 'bandits' to understand the fundamentals of how a policy interacts with the learning algorithm in reinforcement learning.
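A minimal epsilon-greedy bandit sketch illustrating that interaction; the arm probabilities, epsilon, and variable names are our own assumptions.

```python
import numpy as np

# Epsilon-greedy bandit sketch (toy problem of our own design):
# three arms with fixed reward probabilities; the agent keeps a running
# value estimate Q per arm and mostly exploits the current best arm.
rng = np.random.default_rng(0)
p_true = np.array([0.2, 0.5, 0.8])   # true arm reward probabilities
Q = np.zeros(3)                      # value estimates
N = np.zeros(3)                      # pull counts
epsilon = 0.1

for step in range(2000):
    if rng.random() < epsilon:
        a = int(rng.integers(3))     # explore: pick a random arm
    else:
        a = int(np.argmax(Q))        # exploit: pick the greedy arm
    r = float(rng.random() < p_true[a])  # Bernoulli reward
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]        # incremental sample-average update
```

The policy (here, epsilon-greedy) determines which arms generate data, and the learning rule can only improve estimates for arms the policy actually samples — the exploration–exploitation interplay the tutorial examines.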
In this tutorial, you will learn how to act in the more realistic setting of sequential decisions, formalized by Markov Decision Processes (MDPs). In a sequential decision problem, the actions executed in one state not only may lead to immediate rewards (as in a bandit problem), but may also affect which states are experienced next (unlike a bandit problem). Each individual action may therefore affect all future rewards. Thus, making decisions in this setting requires considering each action in terms of its expected cumulative future reward.
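This can be made concrete with value iteration on a tiny MDP; the chain environment and discount factor below are our own toy example, not the tutorial's task.

```python
import numpy as np

# Value-iteration sketch on a deterministic 5-state chain (our own toy MDP):
# action 0 moves left, action 1 moves right; only entering the rightmost
# state yields reward 1. The value of moving right in early states therefore
# comes entirely from *future* reward, discounted by gamma.
n_states, gamma = 5, 0.9

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

V = np.zeros(n_states)
for _ in range(100):  # repeated Bellman optimality backups until convergence
    for s in range(n_states):
        V[s] = max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in (0, 1))
```

Even though no immediate reward is available in state 0, its value is positive (`gamma**3` times the rightmost state's value), because the backups propagate the cumulative future reward of the whole action sequence.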
In this tutorial, you will implement one of the simplest model-based reinforcement learning algorithms, Dyna-Q. You will understand what a world model is, how it can improve the agent's policy, and the situations in which model-based algorithms are more advantageous than their model-free counterparts.
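A compact sketch of the Dyna-Q loop on a toy chain environment (the environment, hyperparameters, and reset rule are our own illustrative assumptions): each real step is followed by several simulated updates drawn from the learned model.

```python
import numpy as np

# Dyna-Q sketch (toy setup of our own design): after every real transition,
# the agent replays n_planning transitions from its learned world model,
# so a single real experience is amplified by planning.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
alpha, gamma, epsilon, n_planning = 0.5, 0.9, 0.1, 10

Q = np.zeros((n_states, n_actions))
model = {}  # (s, a) -> (r, s_next), filled in from real experience

def env_step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return r, s_next

s = 0
for step in range(500):
    # Epsilon-greedy action selection with random tie-breaking
    greedy = np.flatnonzero(Q[s] == Q[s].max())
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(rng.choice(greedy))
    r, s_next = env_step(s, a)
    # Direct RL: Q-learning update from the real transition
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    model[(s, a)] = (r, s_next)
    # Planning: replay transitions sampled from the learned model
    for _ in range(n_planning):
        ps, pa = list(model)[rng.integers(len(model))]
        pr, ps_next = model[(ps, pa)]
        Q[ps, pa] += alpha * (pr + gamma * Q[ps_next].max() - Q[ps, pa])
    s = 0 if s_next == n_states - 1 else s_next  # restart after reaching the goal
```

With planning, the reward discovered at the end of the chain propagates back through `Q` after very few real episodes — the sample-efficiency advantage that makes model-based methods attractive when real experience is expensive.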
This lecture highlights up-and-coming issues in the neuroscience of reinforcement learning.
This lecture covers the needs and challenges involved in creating a FAIR ecosystem for neuroimaging research.
This lecture covers multiple aspects of FAIR neuroscience data: what makes it unique, the challenges to making it FAIR, the importance of overcoming these challenges, and how data governance comes into play.
This lecture introduces the FAIR principles, how they relate to the field of computational neuroscience, and the resources available.
This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.
This lecture focuses on the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.
This lecture covers the NIDM data format within BIDS to make your datasets more searchable, and how to optimize your dataset searches.
This lecture covers the processes, benefits, and challenges involved in designing, collecting, and sharing FAIR neuroscience datasets.
This lecture covers positron emission tomography (PET) imaging and the Brain Imaging Data Structure (BIDS), and how they work together within the PET-BIDS standard to make neuroscience more open and FAIR.