This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices.
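As a preview of the end product, a functional connectome is often just a region-by-region correlation matrix computed from ROI time series. Below is a minimal sketch in NumPy using simulated data; the shapes and variable names are illustrative, not taken from the tutorial's own pipeline.

```python
import numpy as np

# Simulated fMRI data: 200 time points for 90 regions of interest (ROIs).
# In practice these would come from a parcellated, preprocessed scan.
n_timepoints, n_rois = 200, 90
bold = np.random.randn(n_timepoints, n_rois)

# Functional connectivity as the ROI-by-ROI Pearson correlation matrix.
connectome = np.corrcoef(bold.T)      # shape: (90, 90)
np.fill_diagonal(connectome, 0.0)     # ignore self-connections
```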
This tutorial demonstrates how to simulate whole-brain activity using Python. With the corresponding code and repositories, participants can follow along and learn the basics of neural oscillatory dynamics, evoked responses, and EEG signals, ultimately designing a network model based on whole-brain anatomical connectivity.
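One common way to get a feel for such models is a Kuramoto-style network of phase oscillators coupled through a structural connectivity matrix. The sketch below uses a random matrix and illustrative parameters; the tutorial's actual model may differ.

```python
import numpy as np

# Toy whole-brain model: Kuramoto phase oscillators coupled through a
# (here random) connectivity matrix. All parameters are illustrative.
n_nodes, dt, steps, coupling = 10, 1e-3, 5000, 0.5
W = np.abs(np.random.randn(n_nodes, n_nodes))          # stand-in structural connectome
omega = 2 * np.pi * np.random.uniform(8, 12, n_nodes)  # ~alpha-band frequencies (Hz)
theta = np.random.uniform(0, 2 * np.pi, n_nodes)

for _ in range(steps):
    # Each node's phase is pulled toward its neighbours, weighted by W.
    phase_diff = theta[None, :] - theta[:, None]       # theta_j - theta_i
    theta += dt * (omega + coupling * (W * np.sin(phase_diff)).sum(axis=1))
```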
This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics in order to investigate how our brains interface with their environment.
This lesson corresponds to slides 1-64 in the PDF below.
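As a concrete example of the core computation, combining a Gaussian prior with a Gaussian likelihood yields a precision-weighted posterior, a standard building block of Bayesian models of perception (the numbers below are invented for illustration):

```python
# Gaussian prior x Gaussian likelihood -> Gaussian posterior,
# with the mean weighted by each source's precision (1/variance).
prior_mean, prior_var = 0.0, 4.0   # prior belief about a stimulus
like_mean, like_var = 2.0, 1.0     # noisy sensory measurement

post_precision = 1 / prior_var + 1 / like_var
post_var = 1 / post_precision
post_mean = post_var * (prior_mean / prior_var + like_mean / like_var)
print(post_mean, post_var)         # posterior is pulled toward the reliable cue
```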
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 in the PDF below.
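The heart of the HGF is a precision-weighted prediction-error update. The toy single-level version below (essentially a Kalman-filter-style step for a static Gaussian mean) conveys the flavor; the full HGF stacks such updates into a hierarchy whose higher levels set the learning rate. Names and values are illustrative, not from the tutorial.

```python
import numpy as np

# Toy belief trajectory: a precision-weighted prediction-error update,
# the basic ingredient that the HGF composes into a hierarchy.
np.random.seed(0)
mu, pi = 0.0, 1.0                  # belief mean and precision
pi_u = 4.0                         # sensory (input) precision
inputs = np.random.randn(100) + 1.0
trajectory = []
for u in inputs:
    delta = u - mu                 # prediction error
    pi = pi + pi_u                 # precision grows with each observation
    mu = mu + (pi_u / pi) * delta  # precision-weighted belief update
    trajectory.append(mu)
```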
Similarity Network Fusion (SNF) is a computational method for data integration across various kinds of measurements, aimed at taking advantage of the common as well as complementary information in different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.
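The workshop itself runs in RStudio, but the core cross-diffusion idea behind SNF can be sketched in a few lines of NumPy. This simplified version omits the k-nearest-neighbour sparsification of the full algorithm and uses made-up data:

```python
import numpy as np

def affinity(X, sigma=1.0):
    # Sample-by-sample affinity from a Gaussian kernel on Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)   # row-normalize

# Two toy "modalities" measured on the same 50 subjects.
X_eeg = np.random.randn(50, 20)
X_gen = np.random.randn(50, 100)
P1, P2 = affinity(X_eeg), affinity(X_gen)

# Cross-diffusion: each network is updated through the other, then averaged.
for _ in range(20):
    P1, P2 = P1 @ P2 @ P1.T, P2 @ P1 @ P2.T
    P1 /= P1.sum(axis=1, keepdims=True)
    P2 /= P2.sum(axis=1, keepdims=True)
fused = (P1 + P2) / 2
```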
This lecture and tutorial focuses on measuring human functional brain networks, as well as how to account for inherent variability within those networks.
This lesson provides an overview of Jupyter notebooks, JupyterLab, and Binder, as well as their applications within the field of neuroimaging, particularly when it comes to the writing phase of your research.
This lecture gives an overview of how to prepare and preprocess neuroimaging (EEG/MEG) data for use in TVB.
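As a rough idea of what such preprocessing looks like, the MNE-Python snippet below band-pass filters, notch filters, and resamples a raw recording; the file path is hypothetical, and any TVB-specific formatting steps are beyond this sketch.

```python
import mne

# Hypothetical file path; any MNE-supported raw format works similarly.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)

# Typical preprocessing before export: band-pass, notch, and resample.
raw.filter(l_freq=1.0, h_freq=40.0)   # keep 1-40 Hz
raw.notch_filter(freqs=60.0)          # remove line noise
raw.resample(250)                     # common target sampling rate
```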
This lesson provides a brief introduction to the Computational Modeling of Neuronal Plasticity.
In this lesson, you will be introduced to a type of neuronal model known as the leaky integrate-and-fire (LIF) model.
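A minimal LIF implementation needs only a few lines of forward-Euler integration. The parameters below are typical textbook values, not taken from the lesson:

```python
# Minimal leaky integrate-and-fire neuron, forward-Euler integration.
dt, T = 0.1e-3, 0.5                 # time step (s), duration (s)
tau_m, E_L = 20e-3, -70e-3          # membrane time constant, resting potential
V_th, V_reset = -50e-3, -70e-3      # spike threshold and reset
R_m, I_e = 10e6, 2.1e-9             # membrane resistance, input current

V, spikes = E_L, []
for step in range(int(T / dt)):
    # dV/dt = (E_L - V + R_m * I_e) / tau_m
    V += dt * (E_L - V + R_m * I_e) / tau_m
    if V >= V_th:
        spikes.append(step * dt)    # record the spike time
        V = V_reset                 # reset the membrane potential
```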
This lesson goes over various potential inputs to neuronal synapses, the loci of neural communication.
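One standard input model is an exponentially decaying synaptic current that jumps at each presynaptic spike; a minimal sketch with illustrative parameters:

```python
import numpy as np

# Exponentially decaying synaptic current driven by presynaptic spikes.
dt, T, tau_s = 0.1e-3, 0.2, 5e-3
steps = int(T / dt)
pre_spikes = np.random.rand(steps) < 0.002   # random presynaptic spike train
I_syn, w = 0.0, 1.0e-9                       # synaptic current, weight (A)

trace = np.empty(steps)
for t in range(steps):
    I_syn += dt * (-I_syn / tau_s)           # decay between spikes
    if pre_spikes[t]:
        I_syn += w                           # jump at each presynaptic spike
    trace[t] = I_syn
```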
This lesson describes how and why integration time steps are implemented as part of a neuronal model.
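The choice of time step matters because forward-Euler error grows with dt relative to the membrane time constant. The sketch below makes this concrete on a simple exponential decay:

```python
import numpy as np

# Forward Euler on dV/dt = -V / tau drifts from the exact solution
# as dt grows relative to tau.
tau, T, V0 = 20e-3, 0.1, 1.0
for dt in (0.01e-3, 1e-3, 10e-3):
    V = V0
    for _ in range(int(T / dt)):
        V += dt * (-V / tau)
    exact = V0 * np.exp(-T / tau)
    print(f"dt={dt:g}s  Euler={V:.5f}  exact={exact:.5f}")
```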
In this lesson, you will learn about neural spike trains, which can often be characterized as Poisson processes.
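Generating a Poisson spike train takes only a line or two once time is discretized: in each small bin, a spike occurs with probability rate × dt (valid when rate × dt ≪ 1):

```python
import numpy as np

# Poisson spike train: one Bernoulli draw per time bin.
rate, dt, T = 20.0, 1e-3, 10.0          # Hz, s, s
n_bins = int(T / dt)
spikes = np.random.rand(n_bins) < rate * dt
print(spikes.sum() / T, "Hz observed")  # should be close to 20 Hz
```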
This lesson covers spike-rate adaptation, the process by which a neuron's firing rate decays to a low, steady-state frequency during the sustained encoding of a stimulus.
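A common way to model this is an adaptation current that is incremented at each spike and decays slowly, lengthening successive interspike intervals. A sketch with illustrative parameters:

```python
# LIF neuron with an adaptation current that grows with each spike,
# so the firing rate decays toward a steady state.
dt, tau_m, tau_a = 0.1e-3, 20e-3, 200e-3
E_L, V_th, V_reset = -70e-3, -50e-3, -70e-3
R_m, I_e, b = 10e6, 2.5e-9, 0.3e-9     # b: adaptation increment per spike

V, I_a, spike_times = E_L, 0.0, []
for step in range(int(1.0 / dt)):
    V += dt * (E_L - V + R_m * (I_e - I_a)) / tau_m
    I_a += dt * (-I_a / tau_a)         # adaptation current decays slowly
    if V >= V_th:
        spike_times.append(step * dt)
        V = V_reset
        I_a += b                       # each spike strengthens adaptation
```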
This lesson provides a brief explanation of how to implement a neuron's refractory period in a computational model.
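A simple implementation clamps the membrane at its reset value and skips integration for a fixed window after each spike:

```python
# Absolute refractory period: after a spike, hold the membrane at reset
# and ignore input for t_ref seconds before integration resumes.
dt, tau_m, t_ref = 0.1e-3, 20e-3, 2e-3
E_L, V_th, V_reset, R_m, I_e = -70e-3, -50e-3, -70e-3, 10e6, 2.5e-9

V, last_spike = E_L, -1.0
for step in range(int(0.5 / dt)):
    t = step * dt
    if t - last_spike < t_ref:
        V = V_reset                    # held at reset while refractory
        continue
    V += dt * (E_L - V + R_m * I_e) / tau_m
    if V >= V_th:
        last_spike = t
        V = V_reset
```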
In this lesson, you will learn a computational description of spike-timing-dependent plasticity (STDP), the process which tunes the strength of neuronal connections.
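A standard pair-based STDP implementation keeps one exponential trace per side: pre-before-post pairings potentiate the weight, post-before-pre pairings depress it. The constants below are typical, not from the lesson:

```python
import numpy as np

# Pair-based STDP with exponential traces.
dt, T = 1e-3, 10.0
tau_plus = tau_minus = 20e-3
A_plus, A_minus = 0.01, 0.012
w, x_pre, x_post = 0.5, 0.0, 0.0

pre = np.random.rand(int(T / dt)) < 0.02   # random pre/post spike trains
post = np.random.rand(int(T / dt)) < 0.02
for t in range(int(T / dt)):
    x_pre += dt * (-x_pre / tau_plus)      # traces decay exponentially
    x_post += dt * (-x_post / tau_minus)
    if pre[t]:
        x_pre += 1.0
        w -= A_minus * x_post              # pre after post -> depression
    if post[t]:
        x_post += 1.0
        w += A_plus * x_pre                # post after pre -> potentiation
    w = min(max(w, 0.0), 1.0)              # keep the weight bounded
```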
This lesson reviews theoretical and mathematical descriptions of correlated spike trains.
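A simple way to generate correlated Poisson trains is to mix spikes from a shared source with independent spikes, so the correlation is set by the mixing fraction c:

```python
import numpy as np

# Two Poisson spike trains whose correlation is controlled by c:
# both inherit spikes from a shared source plus independent spikes.
rate, c, dt, T = 20.0, 0.3, 1e-3, 100.0
n = int(T / dt)
shared = np.random.rand(n) < c * rate * dt
train1 = shared | (np.random.rand(n) < (1 - c) * rate * dt)
train2 = shared | (np.random.rand(n) < (1 - c) * rate * dt)
print(np.corrcoef(train1, train2)[0, 1])   # rises with c
```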
This lesson investigates the effect of correlated spike trains on spike-timing-dependent plasticity (STDP).
This lesson goes over synaptic normalisation, the homeostatic process by which a neuron scales its synaptic weights up or down to keep its total input strength roughly constant.
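In its multiplicative form, normalisation simply rescales all of a neuron's incoming weights so that their sum returns to a fixed target after each round of plasticity:

```python
import numpy as np

# Multiplicative synaptic normalisation: after plasticity changes
# individual weights, rescale so their sum returns to a fixed total.
weights = np.random.rand(100)
total = 50.0                                # target summed synaptic strength
weights += 0.01 * np.random.randn(100)      # stand-in for plasticity updates
weights = np.clip(weights, 0.0, None)
weights *= total / weights.sum()            # homeostatic rescaling
```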
In this lesson, you will learn about the intrinsic plasticity of single neurons.
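One toy version of intrinsic plasticity has the neuron raise its firing threshold each time it spikes and let it drift down otherwise, so the long-run rate settles at a homeostatic target. Parameters below are illustrative:

```python
# Intrinsic plasticity sketch: the neuron adjusts its own firing
# threshold so its long-run rate approaches a homeostatic target.
dt, T, tau_m, E_L, V_reset = 1e-4, 20.0, 20e-3, -70e-3, -70e-3
R_m, I_e = 10e6, 2.2e-9
V_th, target_rate, eta = -50e-3, 5.0, 1e-4   # threshold, Hz, learning rate
V, n_spikes = E_L, 0

for step in range(int(T / dt)):
    V += dt * (E_L - V + R_m * I_e) / tau_m
    if V >= V_th:
        n_spikes += 1
        V = V_reset
        V_th += eta                  # too active: raise the threshold
    V_th -= eta * target_rate * dt   # slow decay pulls the threshold down
print(n_spikes / T, "Hz")            # approaches the 5 Hz target
```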