This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.

Difficulty level: Intermediate
Duration: 47:00
Speaker: Dimitri Yatsenko
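
For readers unfamiliar with DataJoint, the sketch below gives a minimal flavor of a DataJoint pipeline in Python; the table names, toy computation, and connection settings are illustrative and are not taken from the lecture's demonstration.

import datajoint as dj

# Connection settings are placeholders; point these at your own database.
dj.config['database.host'] = 'localhost'
dj.config['database.user'] = 'user'
dj.config['database.password'] = 'password'

schema = dj.schema('tutorial_pipeline')

@schema
class Session(dj.Manual):
    definition = """
    # experimental session (entered manually)
    session_id : int
    ---
    session_date : date
    """

@schema
class ProcessedData(dj.Computed):
    definition = """
    -> Session
    ---
    mean_value : float
    """

    def make(self, key):
        # A real pipeline would load and process the raw data for this key.
        self.insert1(dict(key, mean_value=0.0))

# Compute ProcessedData for every Session that has not yet been processed.
ProcessedData.populate()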

This presentation discusses the impact of data sharing in stroke research.

Difficulty level: Intermediate
Duration: 16:33
Speaker: Valeria Caso

This talk presents an overview of the potential for data federation in stroke research.

Difficulty level: Intermediate
Duration: 21:37

This lesson describes the fundamentals of genomics, from the central dogma to the design and implementation of genome-wide association studies (GWAS), to the computation, analysis, and interpretation of polygenic risk scores.

Difficulty level: Intermediate
Duration: 1:28:16
Speaker: Dan Felsky
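
As a flavor of the arithmetic behind a polygenic risk score covered in this lesson, here is a minimal sketch: a weighted sum of effect-allele dosages, with GWAS effect sizes as the weights. All data below is simulated.

import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_snps = 5, 100

# 0, 1, or 2 copies of the effect allele per individual per SNP (simulated).
dosages = rng.integers(0, 3, size=(n_individuals, n_snps))
# Per-SNP effect sizes, as would be reported by a GWAS (simulated).
betas = rng.normal(0, 0.05, size=n_snps)

# One polygenic risk score per individual: the dosage-weighted sum of effects.
prs = dosages @ betas
print(prs)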

This video documents the process of creating a pipeline rule for batch processing on brainlife.

Difficulty level: Intermediate
Duration: 0:57

This video documents the process of launching a Jupyter Notebook for group-level analyses directly from brainlife.

Difficulty level: Intermediate
Duration: 0:53

This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.

Difficulty level: Intermediate
Duration: 3:09:12

This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into a description of ontologies and knowledge graphs.

Difficulty level: Intermediate
Duration: 50:18
Speaker: Jeff Grethe

This lesson contains practical exercises that accompany the first few lessons of the Neuroscience for Machine Learners (Neuro4ML) course.

Difficulty level: Intermediate
Duration: 5:58
Speaker: Dan Goodman

This video briefly goes over the exercises accompanying Week 6 of the Neuroscience for Machine Learners (Neuro4ML) course, Understanding Neural Networks.

Difficulty level: Intermediate
Duration: 2:43
Speaker: Marcus Ghosh

This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health. 

Difficulty level: Intermediate
Duration: 1:47:22

This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices. 

Difficulty level: Intermediate
Duration: 1:39:04
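
The tutorial's own code and repositories define the full pipeline; as a minimal sketch of the final step, a functional connectivity matrix can be computed as the pairwise correlation between region time series. The data below is simulated and the region count is arbitrary.

import numpy as np

# Simulated region-by-time fMRI signals; in the tutorial these come from
# preprocessed, parcellated data.
rng = np.random.default_rng(42)
n_timepoints, n_regions = 200, 10
time_series = rng.normal(size=(n_timepoints, n_regions))

# Functional connectivity: Pearson correlation between every pair of regions.
connectivity = np.corrcoef(time_series, rowvar=False)  # (n_regions, n_regions)
print(connectivity.shape)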

In this lesson, you will hear about some of the open issues in the field of neuroscience, as well as a discussion about whether neuroscience works, and how we can know.

Difficulty level: Intermediate
Duration: 6:54
Speaker: Marcus Ghosh

This is an introductory lecture on whole-brain modelling, delving into the various spatial scales of neuroscience, neural population models, and whole-brain modelling. Additionally, the clinical applications of building and testing such models are characterized. 

Difficulty level: Intermediate
Duration: 1:24:44
Speaker: John Griffiths

This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).

 

This lesson corresponds to slides 65-90 of the PDF below. 

Difficulty level: Intermediate
Duration: 1:15:04
Speaker: Daniel Hauke
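
The HGF's multi-level update equations are not reproduced here; as a minimal sketch of the underlying idea, the single-level Gaussian belief update below moves a belief toward each observation in proportion to the observation's relative precision (a precision-weighted prediction error). All values are illustrative.

def update_belief(mu, pi, obs, pi_obs):
    """Posterior mean and precision of a Gaussian belief after one observation."""
    pi_post = pi + pi_obs                       # precisions add
    delta = obs - mu                            # prediction error
    mu_post = mu + (pi_obs / pi_post) * delta   # precision-weighted update
    return mu_post, pi_post

mu, pi = 0.0, 1.0               # prior belief: mean and precision
for obs in [0.8, 1.1, 0.9, 1.2]:
    mu, pi = update_belief(mu, pi, obs, pi_obs=2.0)
print(mu, pi)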

Similarity Network Fusion (SNF) is a computational method for data integration across various kinds of measurements, aimed at taking advantage of the common as well as complementary information in different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.

Difficulty level: Intermediate
Duration: 1:21:38
Speaker: Dan Felsky
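
The workshop runs SNF in RStudio; as a rough Python sketch of SNF's first step, the code below builds one subject-by-subject affinity matrix per data type from pairwise distances. The kernel, bandwidth, and simulated data are illustrative, and the fusion step itself is left to an SNF implementation.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def affinity_matrix(data, sigma=1.0):
    """Subject-by-subject affinity via a Gaussian kernel on Euclidean distances."""
    dist = squareform(pdist(data, metric='euclidean'))
    return np.exp(-dist**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
eeg_features = rng.normal(size=(30, 50))       # 30 subjects x 50 EEG features
genomic_features = rng.normal(size=(30, 200))  # 30 subjects x 200 genetic features

# One affinity network per modality; SNF then iteratively fuses these networks
# so that edges supported by either data type reinforce each other.
w_eeg = affinity_matrix(eeg_features)
w_genomic = affinity_matrix(genomic_features)
print(w_eeg.shape, w_genomic.shape)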

In this lesson, you will learn about one particular aspect of decision making: reaction times. In other words, how long does it take to make a decision based on a stream of information arriving continuously over time?

Difficulty level: Intermediate
Duration: 6:01
Speaker: Dan Goodman
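
The lesson's specific model is not reproduced here; one common way to model reaction times from continuously arriving evidence is a drift-diffusion (evidence-accumulation-to-bound) process, sketched below with illustrative parameters.

import numpy as np

def simulate_trial(drift=0.5, noise=1.0, bound=1.0, dt=0.001, rng=None):
    """Accumulate noisy evidence until it crosses +/- bound; return (reaction time, choice)."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, int(evidence > 0)

rng = np.random.default_rng(1)
trials = [simulate_trial(rng=rng) for _ in range(200)]
print("mean RT:", np.mean([t for t, _ in trials]))
# With a positive drift, crossings of the upper bound count as correct choices.
print("proportion correct:", np.mean([c for _, c in trials]))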

This is the first of two workshops on reproducibility in science, during which participants are introduced to the concepts of FAIR and open science. After discussing the definition of and need for FAIR science, participants are walked through tutorials on installing and using GitHub and Docker, two powerful open-source tools for versioning and publishing code and software, respectively.

Difficulty level: Intermediate
Duration: 1:20:58

This is a hands-on tutorial on PLINK, the open-source whole-genome association analysis toolset. The aims of this tutorial are to teach users how to perform basic quality control on genetic datasets, as well as to identify and understand GWAS summary statistics.

Difficulty level: Intermediate
Duration: 1:27:18
Speaker: Dan Felsky
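
As a flavor of the kind of quality-control step covered in the tutorial, the sketch below calls PLINK from Python with common filtering flags; the file prefixes and thresholds are placeholders, and the tutorial discusses how to choose appropriate values for a real dataset.

import subprocess

# Illustrative PLINK quality-control run (requires plink on the PATH and an
# existing mystudy.bed/.bim/.fam fileset; names and thresholds are placeholders).
subprocess.run([
    "plink",
    "--bfile", "mystudy",   # input binary fileset prefix
    "--maf", "0.01",        # drop variants with minor allele frequency < 1%
    "--geno", "0.05",       # drop variants missing in > 5% of samples
    "--mind", "0.05",       # drop samples missing > 5% of genotypes
    "--hwe", "1e-6",        # drop variants failing Hardy-Weinberg equilibrium
    "--make-bed",
    "--out", "mystudy_qc",  # write the filtered fileset to mystudy_qc.*
], check=True)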

This is a tutorial on using the open-source software PRSice to calculate a set of polygenic risk scores (PRS) for a study sample. Users will also learn how to read PRS into R, visualize distributions, and perform basic association analyses. 

Difficulty level: Intermediate
Duration: 1:53:34
Speaker: Dan Felsky
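
The tutorial reads PRS output and plots distributions in R; as a rough Python analogue, the sketch below loads per-individual scores from a PRSice ".best" file and plots their distribution. The file name is a placeholder, and the column names are typical but may differ by PRSice version.

import pandas as pd
import matplotlib.pyplot as plt

# PRSice writes per-individual scores to a whitespace-delimited ".best" file.
prs = pd.read_csv("mystudy.best", sep=r"\s+")

# Visualize the distribution of polygenic risk scores across the sample.
prs["PRS"].hist(bins=50)
plt.xlabel("Polygenic risk score")
plt.ylabel("Number of individuals")
plt.show()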