
This video will document the process of visualizing the provenance of each step performed to generate a data object on brainlife.

Difficulty level: Beginner
Duration: 0:21

This video will document the process of downloading and running the "reproduce.sh" script, which will automatically run all of the steps to generate a data object locally on a user's machine.

Difficulty level: Beginner
Duration: 3:44

This video will document the process of creating a pipeline rule for batch processing on brainlife.

Difficulty level: Intermediate
Duration: 0:57

This video will document the process of launching a Jupyter Notebook for group-level analyses directly from brainlife.

Difficulty level: Intermediate
Duration: 0:53

This brief video walks you through the steps necessary to create a project on brainlife.io.

Difficulty level: Beginner
Duration: 1:45

This brief video runs through how to make an account on brainlife.io.

Difficulty level: Beginner
Duration: 0:30

This video will demonstrate how to create and launch a pipeline using FreeSurfer on brainlife.io.

Difficulty level: Beginner
Duration: 0:25

This video will document how to run a correlation analysis between the gray matter volumes of two different structures using the output of the brainlife app-freesurfer-stats app.

Difficulty level: Beginner
Duration: 1:33
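
The video performs this analysis in the brainlife interface; purely as an offline illustration, the sketch below shows how such a correlation could be computed in Python once the app-freesurfer-stats output has been downloaded. The file name and column names (parcellation_stats.csv, subject, structure, gray_matter_volume_mm3) and the two structure labels are assumptions for the sake of the example, not the app's documented output schema.

    # Hedged sketch: correlate gray matter volume of two structures across subjects.
    # Assumes a long-format CSV with columns: subject, structure, gray_matter_volume_mm3
    # (hypothetical names; adapt to the actual app-freesurfer-stats output).
    import pandas as pd
    from scipy.stats import pearsonr

    stats = pd.read_csv("parcellation_stats.csv")  # hypothetical file name

    # Pivot so each row is a subject and each column is one structure's volume.
    volumes = stats.pivot(index="subject", columns="structure",
                          values="gray_matter_volume_mm3")

    # Correlate two structures of interest across subjects (placeholder labels).
    r, p = pearsonr(volumes["Left-Hippocampus"], volumes["Right-Hippocampus"])
    print(f"Pearson r = {r:.3f}, p = {p:.3g}")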

This video will document the process of importing a dataset archived on OpenNeuro from the Datasets tab into a brainlife project.

Difficulty level: Beginner
Duration: 1:06

This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.

Difficulty level: Intermediate
Duration: 3:09:12

This lecture discusses how FAIR practices affect personalized data models, including workflows, challenges, and how to improve these practices.

Difficulty level: Beginner
Duration: 13:16
Speaker: Kelly Shen

In this talk, you will learn how brainlife.io works, and how it can be applied to neuroscience data.

Difficulty level: Beginner
Duration: 10:14
Speaker: Franco Pestilli

As part of NeuroHackademy 2020, this lecture delves into cloud computing, focusing on Amazon Web Services.

Difficulty level: Beginner
Duration: 01:43:59

This talk presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance computing facilities across Canada and around the world.

Difficulty level: Beginner
Duration: 56:07
Speaker: Shawn Brown

This talk describes the NIH-funded SPARC Data Structure, and how this project navigates ontology development while keeping in mind the FAIR science principles. 

Difficulty level: Beginner
Duration: 25:44
Speaker: Fahim Imam

This lesson provides an overview of the current status in the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology. 

Difficulty level: Intermediate
Duration: 33:41

This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into ontologies and knowledge graphs.

Difficulty level: Intermediate
Duration: 50:18
Speaker: Jeff Grethe

This lecture covers structured data, databases, federating neuroscience-relevant databases, and ontologies. 

Difficulty level: Beginner
Duration: 1:30:45
Speaker: Maryann Martone

This lecture covers FAIR atlases, including their background and construction, as well as how they can be created in line with the FAIR principles.

Difficulty level: Beginner
Duration: 14:24
Speaker: Heidi Kleven

This lecture focuses on ontologies for clinical neurosciences.

Difficulty level: Intermediate
Duration: 21:54