
The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. This lesson introduces brain simulation in general and The Virtual Brain in particular. Prof. Ritter presents the newest approaches for clinical applications of The Virtual Brain - that is, for stroke, epilepsy, brain tumors, and Alzheimer's disease - and shows how brain simulation can improve diagnostics, therapy, and understanding of neurological disease.

Difficulty level: Beginner
Duration: 1:35:08
Speaker: Petra Ritter

The concept of neural masses, an application of mean field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration to a coupled network are explained. Bifurcation analysis is presented as an important technique in the understanding of non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated in the exploration of brain stimulation regimes.

Difficulty level: Beginner
Duration: 1:49:24
Speaker: Andreas Spiegler
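The coupled-network construction described in this lecture can be sketched in a few lines of Python. This is a toy illustration only, not TVB's actual equations: the sigmoid parameters, connection weights, and coupling strength below are arbitrary assumptions chosen to show the shape of the computation (each node's rate relaxes toward a sigmoid of its network input).

```python
import math

def sigmoid(x, gain=1.0, threshold=4.0):
    # Sigmoidal firing-rate function typical of neural mass models
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

def simulate(n_nodes=2, coupling=0.5, dt=0.01, steps=5000):
    """Euler-integrate a toy network of firing-rate nodes.

    Each node follows dx_i/dt = -x_i + S(I + c * sum_j w_ij * x_j),
    a minimal stand-in for a coupled neural mass network.
    """
    # All-to-all connectivity without self-connections (hypothetical weights)
    w = [[0.0 if i == j else 1.0 for j in range(n_nodes)] for i in range(n_nodes)]
    x = [0.1 * (i + 1) for i in range(n_nodes)]  # distinct initial states
    for _ in range(steps):
        inputs = [sum(w[i][j] * x[j] for j in range(n_nodes)) for i in range(n_nodes)]
        x = [x[i] + dt * (-x[i] + sigmoid(2.0 + coupling * inputs[i]))
             for i in range(n_nodes)]
    return x
```

Sweeping a parameter such as `coupling` and watching where the network's fixed points appear or vanish is exactly the kind of bifurcation analysis the lecture develops.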

The simulation of the virtual epileptic patient is presented as an example of advanced brain simulation and as a translational approach to deliver improved results in the clinic. The fundamentals of epilepsy are explained, and on this basis the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. In the end, you will be able to perform simple epilepsy simulations on your own.

Difficulty level: Beginner
Duration: 1:28:53
Speaker: Julie Courtiol
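To illustrate the kind of dynamics an epilepsy simulation rests on, here is a deliberately simplified slow-fast system in plain Python. It is not TVB's Epileptor model: it is a generic toy in which a fast excitability variable `x` is driven through seizure-like oscillations by a slow "permittivity" variable `z`, with a hypothetical parameter `x0` playing the role of an epileptogenicity knob.

```python
def simulate_seizure_toy(x0=0.0, eps=0.005, current=1.0, dt=0.05, steps=20000):
    """Toy slow-fast system, loosely in the spirit of reduced epileptor-style
    models (NOT TVB's Epileptor equations).

    x0 near 0 puts the slow drive on the unstable middle branch, producing
    sustained relaxation oscillations (seizure-like); a strongly negative x0
    yields a stable resting state.
    """
    x, z = -1.0, 2.0
    trace = []
    for _ in range(steps):
        dx = x - x ** 3 - z + current   # fast excitability dynamics
        dz = eps * (x - x0)             # slow drive toward x = x0
        x += dt * dx
        z += dt * dz
        trace.append(x)
    return trace
```

Comparing `simulate_seizure_toy(x0=0.0)` (oscillating) with `simulate_seizure_toy(x0=-1.6)` (settling to rest) mimics, in miniature, how patient-specific excitability parameters separate seizing from non-seizing regimes.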

A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Beginner
Duration: 1:16:36
Speaker: Tal Yarkoni
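To give a flavor of what such an overview covers, here is a small, self-contained example of idiomatic Python data wrangling with list comprehensions, dicts, and the standard library's `statistics` module. The data are made up for illustration.

```python
from statistics import mean, stdev

# A toy "subjects" table as a list of dicts -- the kind of lightweight
# tabular data a Python-for-data-science overview typically starts from.
subjects = [
    {"id": "s01", "age": 24, "rt": 0.41},
    {"id": "s02", "age": 31, "rt": 0.52},
    {"id": "s03", "age": 27, "rt": 0.47},
]

# List comprehensions filter rows and project out a column in one line
rts = [s["rt"] for s in subjects if s["age"] < 30]

# Summary statistics from the standard library
summary = {"n": len(rts), "mean_rt": mean(rts), "sd_rt": stdev(rts)}
```

In practice a data scientist would reach for pandas and NumPy for anything larger, which is precisely the tooling the lecture emphasizes.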

Colt Steele provides a comprehensive introduction to the command line and 50 popular Linux commands.  This is a long course (nearly 5 hours) but well worth it if you are going to spend a good part of your career working from a terminal, which is likely if you are interested in flexibility, power, and reproducibility in neuroscience research.


This lesson is courtesy of freeCodeCamp.

Difficulty level: Beginner
Duration: 05:00:16

Félix-Antoine Fortin from Calcul Québec gives an introduction to high-performance computing with the Compute Canada network, first providing an overview of use cases for HPC and then a hands-on tutorial. Though some examples might seem specific to Calcul Québec, all computing clusters in the Compute Canada network share the same software modules and environments.


The lesson was given in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 02:49:34

The Canadian Open Neuroscience Platform (CONP) Portal is a web interface that facilitates open science for the neuroscience community by simplifying global access to and sharing of datasets and tools. The Portal internalizes the typical cycle of a research project, beginning with data acquisition, followed by data processing with published tools, and ultimately the publication of results with a link to the original dataset.


In this video, Samir Das and Tristan Glatard give a short overview of the main features of the CONP Portal.

Difficulty level: Beginner
Duration: 14:03

Shawn Brown presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance-computing facilities across Canada and around the world.


This talk was given in the context of a Ludmer Centre event in 2019.


Difficulty level: Beginner
Duration: 56:07

Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?


This lecture covers FAIR atlases, including their background, how they are constructed, and how they can be created in line with the FAIR principles.

Difficulty level: Beginner
Duration: 14:24
Speaker: Heidi Kleven

As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers.


This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.

Difficulty level: Beginner
Duration: 15:14

This session explores some practical use cases and considers whether they affect your repository, your tool, or your research.

Difficulty level: Beginner
Duration: 38:36

Peer Herholz gives a tour of how popular virtualization tools like Docker and Singularity are playing a crucial role in improving reproducibility and enabling high-performance computing in neuroscience.

Difficulty level: Beginner

This lecture provides an overview of neuroimaging techniques and their clinical applications.

Difficulty level: Beginner
Duration: 41:00
Speaker: Dafna Ben Bashat

A basic introduction to clinical presentation of schizophrenia, its etiology, and current treatment options.

Difficulty level: Beginner
Duration: 51:49

This lecture focuses on the rationale for employing neuroimaging methods in the study of movement disorders.

Difficulty level: Beginner
Duration: 1:04:04
Speaker: Bogdan Draganski

NWB: An ecosystem for neurophysiology data standardization

Difficulty level: Beginner
Duration: 29:53
Speaker: Oliver Ruebel

Learn how to build and share extensions in NWB

Difficulty level: Advanced
Duration: 20:29
Speaker: Ryan Ly

Learn how to build custom APIs for extensions

Difficulty level: Advanced
Duration: 25:40
Speaker: Andrew Tritt

Learn how to handle writing very large data in PyNWB

Difficulty level: Advanced
Duration: 26:50
Speaker: Andrew Tritt

Learn how to handle writing very large data in MatNWB

Difficulty level: Advanced
Duration: 16:18
Speaker: Ben Dichter