This lesson describes the Neuroscience Gateway, which facilitates access to and use of National Science Foundation High Performance Computing resources by neuroscientists.

Difficulty level: Beginner
Duration: 39:27
Speaker: Subha Sivagnanam

This lesson gives an introduction to high-performance computing with the Compute Canada network, first providing an overview of use cases for HPC and then a hands-on tutorial. Though some examples might seem specific to Calcul Québec, all computing clusters in the Compute Canada network share the same software modules and environments.

Difficulty level: Beginner
Duration: 02:49:34
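
For orientation, here is a minimal sketch of submitting a batch job from Python, assuming a SLURM-based cluster (as used on Compute Canada systems) and a hypothetical, already-written job script job.sh:

```python
# Minimal sketch: submit a SLURM batch job from Python.
# Assumes SLURM is available on the login node and that "job.sh" is a
# hypothetical batch script that already exists.
import subprocess

def submit_job(script_path: str) -> str:
    """Submit a batch script with sbatch and return the scheduler's reply."""
    result = subprocess.run(
        ["sbatch", script_path],
        capture_output=True,
        text=True,
        check=True,  # raise if sbatch exits with an error
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit_job("job.sh"))
```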

This lesson provides a short overview of the main features of the Canadian Open Neuroscience Platform (CONP) Portal, a web interface that facilitates open science for the neuroscience community by simplifying global access to and sharing of datasets and tools. The Portal internalizes the typical cycle of a research project, beginning with data acquisition, followed by data processing with published tools, and ultimately the publication of results with a link to the original dataset.

Difficulty level: Beginner
Duration: 14:03

This talk presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance computing facilities across Canada and around the world.

Difficulty level: Beginner
Duration: 56:07
Speaker: Shawn Brown

This talk gives a brief introduction to the Fenix infrastructure and service offering before focusing on data safety. The speaker walks participants through the ETHZ-CSCS offering for EBRAINS and the wider HBP community, highlighting the infrastructure's role in implementing services securely. Particular attention is given to the tools ETHZ-CSCS provides to portal and service providers such as EBRAINS, MIP/HIP, TVB, and NRP, among others. The talk closes with a quick glimpse into the future and the role that "multi-tenancy" will play.

Difficulty level: Intermediate
Duration: 20:05

This lecture gives a detailed description of how to process workflows in the virtual research environment (VRE), including approaches to standardization, metadata, containerization, and the construction and maintenance of scientific pipelines.

Difficulty level: Intermediate
Duration: 1:03:55
Speaker: Patrik Bey

This lesson provides an overview of how to conceptualize, design, implement, and maintain neuroscientific pipelines via the cloud-based computational reproducibility platform Code Ocean.

Difficulty level: Beginner
Duration: 17:01
Speaker: David Feng

In this workshop talk, you will receive a tour of the Code Ocean ScienceOps Platform, a centralized cloud workspace for all teams. 

Difficulty level: Beginner
Duration: 10:24
Speaker: Frank Zappulla

This lecture covers a wide range of aspects regarding neuroinformatics and data governance, describing both their historical developments and current trajectories. Particular tools, platforms, and standards to make your research more FAIR are also discussed.

Difficulty level: Beginner
Duration: 54:58
Speaker: Franco Pestilli

This short video walks you through the steps of publishing a dataset on brainlife, an open-source, free and secure reproducible neuroscience analysis platform.

Difficulty level: Beginner
Duration: 1:18

This video shows how to use the brainlife.io interface to edit the participants' info file. This file is the ParticipantInfo.json file of the Brain Imaging Data Structure (BIDS).

Difficulty level: Beginner
Duration: 0:34
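
For illustration, here is a minimal sketch of what editing a participants info file looks like outside the web interface; the file name and field values are hypothetical:

```python
# Minimal sketch: edit a BIDS-style participants info file with plain Python.
# The file name "participants.json" and the fields below are hypothetical.
import json
from pathlib import Path

path = Path("participants.json")
info = json.loads(path.read_text())            # load the existing entries

info["sub-01"] = {"age": 27, "sex": "F"}       # add or update one participant

path.write_text(json.dumps(info, indent=2))    # write the edited file back
```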

This video will document the process of running an app on brainlife, from data staging to archiving of the final data outputs.

Difficulty level: Beginner
Duration: 3:43

This video will document the process of visualizing the provenance of each step performed to generate a data object on brainlife.

Difficulty level: Beginner
Duration: 0:21

This video will document the process of downloading and running the "reproduce.sh" script, which will automatically run all of the steps to generate a data object locally on a user's machine.

Difficulty level: Beginner
Duration: 3:44

This video will document the process of creating a pipeline rule for batch processing on brainlife.

Difficulty level: Intermediate
Duration: 0:57

This video will document the process of launching a Jupyter Notebook for group-level analyses directly from brainlife.

Difficulty level: Intermediate
Duration: 0:53
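
As a rough idea of the kind of group-level analysis such a notebook might contain (file and column names are hypothetical):

```python
# Minimal sketch: compare a per-subject measure between two groups.
# "group_measures.csv", "group", and "thickness" are hypothetical names.
import pandas as pd
from scipy import stats

df = pd.read_csv("group_measures.csv")                    # one row per subject
patients = df.loc[df["group"] == "patient", "thickness"]
controls = df.loc[df["group"] == "control", "thickness"]

t, p = stats.ttest_ind(patients, controls)                # two-sample t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```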

This brief video walks you through the steps necessary to create a project on brainlife.io.

Difficulty level: Beginner
Duration: 1:45

This brief video runs through how to make an account on brainlife.io.

Difficulty level: Beginner
Duration: 0:30

This video will document how to run a correlation analysis between the gray matter volume of two different structures using the output from brainlife app-freesurfer-stats.

Difficulty level: Beginner
Duration: 1:33
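
A minimal sketch of such a correlation in Python, assuming the stats have been collated into a table with one row per subject (file and column names are hypothetical):

```python
# Minimal sketch: correlate the gray matter volumes of two structures.
# "freesurfer_stats.csv" and the column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("freesurfer_stats.csv")                  # one row per subject
r, p = stats.pearsonr(df["Left-Hippocampus"], df["Left-Amygdala"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```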

This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.

Difficulty level: Intermediate
Duration: 3:09:12
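
As a small taste of working against the AWS APIs covered in the lecture, here is a minimal sketch using the boto3 SDK to list your S3 buckets (assumes AWS credentials are already configured):

```python
# Minimal sketch: list S3 buckets with boto3.
# Assumes credentials are configured (e.g. via `aws configure` or environment variables).
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])
```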