In this tutorial, you will implement a continuous control task: you will design control inputs for a linear dynamical system to reach a target state.
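The flavor of this problem can be sketched in a few lines, assuming a hypothetical scalar system x_{t+1} = a·x_t + b·u_t; the function `drive_to_target` and its parameters are illustrative, not the tutorial's actual exercise:

```python
import numpy as np

def drive_to_target(x0, goal, a=0.9, b=0.5, n_steps=5):
    """Greedy one-step controller for the scalar system x_{t+1} = a*x_t + b*u_t."""
    x = x0
    trajectory = [x]
    for _ in range(n_steps):
        u = (goal - a * x) / b       # input that lands exactly on the goal next step
        x = a * x + b * u
        trajectory.append(x)
    return np.array(trajectory)

trajectory = drive_to_target(x0=0.0, goal=1.0)
```

The tutorial's actual setting adds realistic constraints (e.g., costs on large control inputs), which is what makes the design problem interesting.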
This lecture covers the utility of action: vigor and the neuroeconomics of movement, with applications to foraging and the marginal value theorem.
This lecture provides an introduction to a variety of topics in Reinforcement Learning.
This tutorial shows how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning, and how to examine TD errors at the presentation of the conditioned and unconditioned stimuli (CS and US) under different CS-US contingencies. These exercises will give you an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if dopamine represents a "canonical" model-free RPE.
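The core idea can be sketched with tabular TD(0); the trial structure, time indices, and parameter values below are hypothetical, not the tutorial's actual setup. Time steps within a trial serve as states, the US delivers reward, and only states from CS onset onward are treated as predictive:

```python
import numpy as np

def td_conditioning(n_trials=500, n_steps=10, cs_t=2, us_t=7, alpha=0.1, gamma=1.0):
    """TD(0) over within-trial time steps; the reward (US) arrives at step us_t."""
    V = np.zeros(n_steps + 1)        # value of each within-trial time step
    rpe = np.zeros(n_steps)          # TD errors recorded on the latest trial
    for _ in range(n_trials):
        for t in range(n_steps):
            r = 1.0 if t == us_t else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # reward prediction error
            if t >= cs_t:                         # pre-CS states carry no prediction
                V[t] += alpha * delta
            rpe[t] = delta
    return V, rpe

V, rpe = td_conditioning()
```

After learning, the TD error at the US shrinks toward zero while a positive TD error appears at the transition into the CS state — the classic migration of the RPE from US to CS.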
In this tutorial, you will use 'bandits' to understand the fundamentals of how a policy interacts with the learning algorithm in reinforcement learning.
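For instance, an epsilon-greedy policy interacting with incremental value learning can be sketched as follows; the two-armed Bernoulli bandit and all parameter values are hypothetical, not the tutorial's exact task:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(p_reward=(0.2, 0.8), epsilon=0.1, alpha=0.1, n_trials=2000):
    """Epsilon-greedy action selection with prediction-error value updates."""
    q = np.zeros(len(p_reward))
    for _ in range(n_trials):
        if rng.random() < epsilon:
            a = int(rng.integers(len(q)))          # explore: random arm
        else:
            a = int(np.argmax(q))                  # exploit: current best arm
        r = float(rng.random() < p_reward[a])      # Bernoulli reward
        q[a] += alpha * (r - q[a])                 # prediction-error update
    return q

q = run_bandit()
```

The estimated value of the better arm ends up near its true reward probability, and the policy (via epsilon) controls how much data each arm's estimate receives — the interaction the tutorial explores.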
In this tutorial, you will learn how to act in the more realistic setting of sequential decisions, formalized by Markov Decision Processes (MDPs). In a sequential decision problem, the actions executed in one state may not only yield immediate rewards (as in a bandit problem) but also affect which states are experienced next (unlike a bandit problem). Each individual action may therefore affect all future rewards. Thus, making decisions in this setting requires evaluating each action in terms of its expected cumulative future reward.
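A minimal sketch of this idea, using a hypothetical deterministic three-state chain MDP solved by value iteration (the states, rewards, and discount factor are illustrative): each state's value is the best action's immediate reward plus the discounted value of the state it leads to.

```python
import numpy as np

# Hypothetical chain: action 0 stays put, action 1 moves right; state 2 is terminal.
# A reward of 1 is earned only on the transition from state 1 into state 2.
n_states, n_actions, gamma = 3, 2, 0.9
next_state = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2, (2, 0): 2, (2, 1): 2}
reward = {(1, 1): 1.0}

V = np.zeros(n_states)
for _ in range(100):  # value iteration: back up expected cumulative future reward
    V = np.array([max(reward.get((s, a), 0.0) + gamma * V[next_state[(s, a)]]
                      for a in range(n_actions))
                  for s in range(n_states)])
```

State 0 earns no immediate reward for any action, yet its value is positive (gamma times state 1's value) precisely because its actions shape which rewards become reachable later.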
In this tutorial, you will implement one of the simplest model-based Reinforcement Learning algorithms, Dyna-Q. You will understand what a world model is, how it can improve the agent's policy, and the situations in which model-based algorithms are more advantageous than their model-free counterparts.
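The Dyna-Q loop can be sketched on a hypothetical five-state corridor task (all names and parameter values below are illustrative): each real step performs a Q-learning update, records the transition in a learned world model, and then replays randomly chosen remembered transitions as planning updates.

```python
import numpy as np

rng = np.random.default_rng(1)

def dyna_q(n_episodes=50, n_planning=10, alpha=0.5, gamma=0.95, epsilon=0.1):
    """Dyna-Q on a 1-D corridor: start at state 0, reward 1.0 for reaching state 4."""
    n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    model = {}                                  # (s, a) -> (r, s'): learned world model
    for _ in range(n_episodes):
        s = 0
        while s != 4:
            if rng.random() < epsilon:          # explore
                a = int(rng.integers(n_actions))
            else:                               # exploit, breaking ties at random
                a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
            s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == 4 else 0.0
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # direct RL
            model[(s, a)] = (r, s2)                                 # model learning
            for _ in range(n_planning):                             # planning replays
                ps, pa = list(model)[rng.integers(len(model))]
                pr, ps2 = model[(ps, pa)]
                Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() - Q[ps, pa])
            s = s2
    return Q

Q = dyna_q()
```

After training, the greedy policy moves right in every non-terminal state; the planning replays let reward information spread along the corridor using far fewer real environment steps than model-free Q-learning alone would need.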
This lecture highlights up-and-coming issues in the neuroscience of reinforcement learning.
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?
This lecture covers FAIR atlases: their background, their construction, and how they can be created in line with the FAIR principles.
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture introduces the FAIR principles, how they relate to the field of computational neuroscience, and the resources available.
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture covers how FAIR practices affect personalized data models, including workflows, challenges, and how to improve these practices.
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture covers the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.
Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and challenges faced by the community in enacting these principles. This lecture covers the NIDM data format within BIDS to make your datasets more searchable, and how to optimize your dataset searches.
Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and challenges faced by the community in enacting these principles. This lecture covers positron emission tomography (PET) imaging and the Brain Imaging Data Structure (BIDS), and how they work together within the PET-BIDS standard to make neuroscience more open and FAIR.
The course is an introduction to the field of electrophysiology standards, infrastructure, and initiatives.
This lecture contains an overview of electrophysiology data reuse within the EBRAINS ecosystem.
The course is an introduction to the field of electrophysiology standards, infrastructure, and initiatives.
This lecture contains an overview of the Distributed Archives for Neurophysiology Data Integration (DANDI) archive, its ties to FAIR and open-source, integrations with other programs, and upcoming features.
The course is an introduction to the field of electrophysiology standards, infrastructure, and initiatives. This lecture discusses how to standardize electrophysiology data organization to move towards being more FAIR.
This session will include presentations of infrastructure that embrace the FAIR principles developed by members of the INCF Community.
This lecture provides an overview of The Virtual Brain Simulation Platform.
The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.
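Although the module itself uses MATLAB, the core analysis can be sketched in Python on simulated data; the orientations, firing rates, and trial counts below are invented for illustration, not taken from the module's database. The idea is to count spikes per trial and average the counts per stimulus orientation to obtain a tuning curve:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: 20 trials per grating orientation, Poisson spike counts
# with a hypothetical firing-rate peak at 90 degrees.
orientations = np.repeat([0, 45, 90, 135], 20)
rates = {0: 2.0, 45: 8.0, 90: 20.0, 135: 8.0}          # mean spikes per trial
counts = np.array([rng.poisson(rates[o]) for o in orientations])

# Tuning curve: mean spike count at each orientation
tuning = {o: counts[orientations == o].mean() for o in (0, 45, 90, 135)}
```

The indexing step (`counts[orientations == o]`) is the vectorized selection pattern the module teaches with MATLAB logical indexing; the loop-based alternative is the for-loop version.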