This talk covers the Human Connectome Project, which aims to provide an unparalleled compilation of neural data, an interface to graphically navigate this data, and the opportunity to draw conclusions about the living human brain that were never before possible.
This tutorial provides instruction on how to simulate brain tumors with TVB (reproducing the publication Marinazzo et al. 2020, NeuroImage). It comprises a didactic video, Jupyter notebooks, and a full data set for constructing virtual brains from patients and healthy controls.
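As a rough illustration of the kind of patient-specific input such a virtual brain is built from, the sketch below loads a structural connectome into TVB; the file name is a placeholder and the snippet is not taken from the tutorial itself.

```python
# Minimal sketch: loading a patient-specific structural connectome into TVB.
# The zip file name is hypothetical; a TVB connectivity bundle contains
# weights, tract lengths, and region centres.
from tvb.datatypes import connectivity

conn = connectivity.Connectivity.from_file("patient_01_connectivity.zip")
conn.configure()

print(conn.number_of_regions)     # number of parcellated regions
print(conn.weights.shape)         # region-by-region coupling strengths
print(conn.tract_lengths.shape)   # fibre lengths used for conduction delays
```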
The tutorial on modelling strokes in TVB includes a didactic video and Jupyter notebooks (reproducing the publication Falcon et al. 2016, eNeuro).
This lesson introduces population models and the phase plane, and is part of The Virtual Brain (TVB) Node 10 Series, a 4-day workshop dedicated to learning about the full brain simulation platform TVB, as well as brain imaging, brain simulation, personalised brain models, and TVB use cases.
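For a feel of what phase-plane analysis looks like in practice, here is a minimal sketch (not from the lesson) that plots the vector field and nullclines of the FitzHugh-Nagumo model, a classic two-dimensional population model; parameter values are illustrative only.

```python
# Phase plane of the FitzHugh-Nagumo model: vector field plus nullclines.
# Requires numpy and matplotlib; parameters chosen for an oscillatory regime.
import numpy as np
import matplotlib.pyplot as plt

a, b, tau, I = 0.7, 0.8, 12.5, 0.5

def dvdt(v, w):
    return v - v**3 / 3 - w + I      # fast (excitatory) variable

def dwdt(v, w):
    return (v + a - b * w) / tau     # slow (recovery) variable

V, W = np.meshgrid(np.linspace(-2.5, 2.5, 25), np.linspace(-1.5, 2.0, 25))
plt.streamplot(V, W, dvdt(V, W), dwdt(V, W), color="lightgray")  # flow field

v = np.linspace(-2.5, 2.5, 400)
plt.plot(v, v - v**3 / 3 + I, label="v-nullcline (dv/dt = 0)")
plt.plot(v, (v + a) / b, label="w-nullcline (dw/dt = 0)")
plt.xlabel("v"); plt.ylabel("w"); plt.legend(); plt.show()
```

Fixed points lie where the two nullclines intersect; their stability determines whether the population settles to rest or oscillates.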
In this tutorial, you will learn how to run a typical TVB simulation.
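A minimal sketch of what such a simulation can look like in code, using TVB's bundled demonstration connectivity; the parameter values are illustrative rather than prescriptive.

```python
# Region-level TVB simulation: local model + connectome + coupling +
# integrator + monitor, then run.
from tvb.simulator.lab import (models, connectivity, coupling,
                               integrators, monitors, simulator)

sim = simulator.Simulator(
    model=models.Generic2dOscillator(),                  # local population model
    connectivity=connectivity.Connectivity.from_file(),  # default demo connectome
    coupling=coupling.Linear(),                          # long-range coupling function
    integrator=integrators.HeunDeterministic(dt=0.1),    # integration step (ms)
    monitors=(monitors.TemporalAverage(period=1.0),),    # downsampled activity
)
sim.configure()

# Simulate one second; each monitor returns a (time, data) pair.
(time, data), = sim.run(simulation_length=1000.0)
print(data.shape)   # (time points, state variables, regions, modes)
```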
This lesson introduces TVB-multi-scale extensions and other TVB tools which facilitate modeling and analyses of multi-scale data.
This tutorial introduces The Virtual Mouse Brain (TVMB), walking users through the necessary steps for performing simulation operations on animal brain data.
In this tutorial, you will learn the necessary steps in modeling the brain of one of the most commonly studied animals among non-human primates, the macaque.
This lecture delves into cortical (i.e., surface-based) brain simulations as well as subcortical (i.e., deep brain) stimulation, covering the definitions, motivations, and implementations of both.
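As a sketch of the stimulation side, the snippet below defines a region-level pulse-train stimulus in TVB; the chosen regions and pulse parameters are arbitrary illustrations, not values from the lecture.

```python
# Region-level stimulus in TVB: a pulse train weighted onto selected regions.
import numpy as np
from tvb.datatypes import connectivity, equations, patterns

conn = connectivity.Connectivity.from_file()
conn.configure()

pulse = equations.PulseTrain()
pulse.parameters["onset"] = 500.0    # time before the first pulse (ms)
pulse.parameters["T"] = 1000.0       # pulse repetition period (ms)
pulse.parameters["tau"] = 50.0       # pulse width (ms)

weights = np.zeros(conn.number_of_regions)
weights[[14, 52]] = 0.1              # stimulus strength in two arbitrary regions

stimulus = patterns.StimuliRegion(temporal=pulse,
                                  connectivity=conn,
                                  weight=weights)
# The stimulus can then be passed to simulator.Simulator(..., stimulus=stimulus).
```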
This lecture provides an introduction to entropy in general, and multi-scale entropy (MSE) in particular, highlighting the potential clinical applications of the latter.
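To make the MSE recipe concrete, here is a minimal numpy sketch of the standard two-step procedure (coarse-graining followed by sample entropy); it is for illustration only, and dedicated toolboxes provide tested, optimised implementations.

```python
# Multiscale entropy: coarse-grain the signal at each scale, then compute
# sample entropy SampEn(m, r) on the coarse-grained series.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn with tolerance r given as a fraction of the signal's SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n_templates = len(x) - m

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (dist <= tol).sum() - n_templates   # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, max_scale=20, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = x[:n * scale].reshape(n, scale).mean(axis=1)  # coarse-graining
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)

# White noise loses entropy with increasing scale; long-range-correlated
# signals tend not to, which is what makes MSE clinically interesting.
print(multiscale_entropy(np.random.randn(1000), max_scale=5))
```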
This lecture gives an overview of how to prepare and preprocess neuroimaging (EEG/MEG) data for use in TVB.
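By way of illustration, the sketch below shows typical EEG preprocessing steps using MNE-Python; the toolbox choice and file name are assumptions here, not necessarily what the lecture uses.

```python
# Typical EEG preprocessing steps with MNE-Python (illustrative only).
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)      # band-pass to remove drift and line noise
raw.set_eeg_reference("average")         # common average reference

# Epoch around experimental events and average; region-level summaries of
# such data can later be related to a TVB connectome.
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, baseline=(None, 0))
evoked = epochs.average()
```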
In this lecture, you will learn about various neuroinformatic resources which allow for 3D reconstruction of brain models.
This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.
This lecture focuses on the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.
This lecture discusses the FAIR principles as they apply to electrophysiology data and metadata, the building blocks for community tools and standards, platforms and grassroots initiatives, and the challenges therein.
This session provides users with an introduction to tools and resources that facilitate the implementation of FAIR in their research.
This session will include presentations of infrastructure developed by members of the INCF Community that embraces the FAIR principles.
This lecture provides an overview of The Virtual Brain Simulation Platform.
This lesson consists of a demonstration of the BRIAN Simulator. BRIAN is a free, open-source simulator for spiking neural networks. It is written in Python, runs on almost all platforms, and is designed to be easy to learn and use, highly flexible, and easily extensible.
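A minimal BRIAN (Brian2) sketch, not taken from the lesson itself, showing the style of model definition the demonstration revolves around:

```python
# A small population of leaky integrate-and-fire neurons driven above threshold.
from brian2 import *

tau = 10*ms
eqs = 'dv/dt = (1.1 - v) / tau : 1'   # dimensionless leaky-integrator equation

group = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='exact')
group.v = 'rand()'                    # random initial conditions

spikes = SpikeMonitor(group)
run(100*ms)                           # simulate 100 ms
print(spikes.count[:])                # spike count per neuron
```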
This lesson provides a demonstration of NeuroFedora, a volunteer-driven initiative to provide a ready-to-use, Fedora-based, free and open-source software platform for neuroscience. By making the tools used in the scientific process easier to use, NeuroFedora aims to aid reproducibility, data sharing, and collaboration in the research community. The CompNeuro Fedora Lab was created specifically to enable computational neuroscience.
This lesson provides an introduction to and live demonstration of neurolib, a computational framework for simulating coupled neural mass models, written in Python. It provides a simulation and optimization framework that allows you to easily implement your own neural mass model, simulate fMRI BOLD activity, analyse the results, and fit your model to empirical data.
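A minimal neurolib sketch, assuming the packaged ALN mean-field model; the parameter values are illustrative only.

```python
# Simulate a single-node neural mass model with neurolib.
from neurolib.models.aln import ALNModel

model = ALNModel()                        # adaptive linear-nonlinear (ALN) model
model.params['duration'] = 10 * 1000      # 10 s of activity (parameter is in ms)
model.run()

print(model.t.shape, model.output.shape)  # time axis and default output variable
```

The same parameter dictionary is what neurolib's exploration and optimization utilities vary when fitting a model to empirical data.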