
This is the first of two workshops on reproducibility in science, during which participants are introduced to the concepts of FAIR and open science. After discussing the definition of and need for FAIR science, participants are walked through tutorials on installing and using GitHub and Docker, two powerful open-source tools for versioning and publishing code and software, respectively.

Difficulty level: Intermediate
Duration: 1:20:58

This is a hands-on tutorial on PLINK, the open-source whole-genome association analysis toolset. The aims of this tutorial are to teach users how to perform basic quality control on genetic datasets, as well as to identify and understand GWAS summary statistics.

Difficulty level: Intermediate
Duration: 1:27:18
Speaker: Dan Felsky
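
As a taste of the quality-control steps the tutorial walks through, here is a minimal sketch that drives PLINK from Python. The input prefix and thresholds are illustrative assumptions, not the tutorial's exact values, and PLINK must be on your PATH:

```python
import subprocess

# Common PLINK quality-control filters; thresholds are conventional
# defaults, not necessarily the values used in the tutorial.
subprocess.run([
    "plink",
    "--bfile", "mydata",   # hypothetical .bed/.bim/.fam input prefix
    "--maf", "0.01",       # drop variants with minor allele frequency < 1%
    "--geno", "0.05",      # drop variants missing in > 5% of samples
    "--mind", "0.05",      # drop samples missing > 5% of their genotypes
    "--hwe", "1e-6",       # drop variants failing Hardy-Weinberg (p < 1e-6)
    "--make-bed",
    "--out", "mydata_qc",  # write the filtered .bed/.bim/.fam files
], check=True)
```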

This is a tutorial on using the open-source software PRSice to calculate a set of polygenic risk scores (PRS) for a study sample. Users will also learn how to read PRS into R, visualize distributions, and perform basic association analyses. 

Difficulty level: Intermediate
Duration: 1:53:34
Speaker: Dan Felsky
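
The tutorial itself works in R; as an illustration of the same steps, here is a hedged Python sketch. The file names are hypothetical, and the .best column layout follows typical PRSice-2 output but can vary by version:

```python
import pandas as pd
import statsmodels.formula.api as smf

# PRSice-2 writes per-individual scores to a whitespace-delimited .best file.
prs = pd.read_csv("study.best", sep=r"\s+")    # columns include FID, IID, PRS
pheno = pd.read_csv("phenotypes.csv")          # hypothetical phenotype table

df = prs.merge(pheno, on="IID")
df["PRS_z"] = (df["PRS"] - df["PRS"].mean()) / df["PRS"].std()

# Visualize the score distribution
df["PRS_z"].plot.hist(bins=40, title="Polygenic risk score (z-scored)")

# Basic association: phenotype ~ standardized PRS (add covariates as needed)
print(smf.ols("phenotype ~ PRS_z", data=df).fit().summary())
```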

This is a continuation of the talk on the cellular mechanisms of neuronal communication, this time at the level of brain microcircuits and associated global signals like those measurable by electroencephalography (EEG). This lecture also discusses EEG biomarkers in mental health disorders, and how those cortical signatures may be simulated digitally.

Difficulty level: Intermediate
Duration: 1:11:04
Speaker: Etay Hay

This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health. 

Difficulty level: Intermediate
Duration: 1:47:22

This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices. 

Difficulty level: Intermediate
Duration: 1:39:04
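
As a flavour of what such a pipeline looks like, here is a minimal sketch using nilearn; the tutorial's own repositories may use a different atlas and preprocessing, and the input file name is a placeholder:

```python
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure

# Parcellate the brain with a standard atlas (choice is illustrative)
atlas = datasets.fetch_atlas_schaefer_2018(n_rois=100)
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)

# Extract one mean time series per region from preprocessed 4D fMRI data
time_series = masker.fit_transform("preprocessed_bold.nii.gz")

# Pearson correlation between regional time series -> connectivity matrix
conn = ConnectivityMeasure(kind="correlation")
matrix = conn.fit_transform([time_series])[0]
print(matrix.shape)  # (n_regions, n_regions)
```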

This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).

This lesson corresponds to slides 65-90 of the PDF below. 

Difficulty level: Intermediate
Duration: 1:15:04
Speaker: Daniel Hauke
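
To give a feel for what an HGF update looks like, below is a minimal two-level sketch for binary observations, following the form of the update equations in Mathys et al. (2011). It is a simplification: the full model covered in the lesson adds a third (volatility) level, which is frozen here into the constant omega2.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_level_binary_hgf(u, mu2=0.0, sigma2=1.0, omega2=-2.0):
    """Track the belief that P(u = 1), with the third level held fixed."""
    predictions = []
    for obs in u:                                  # obs is 0 or 1
        muhat1 = sigmoid(mu2)                      # predicted P(obs = 1)
        delta1 = obs - muhat1                      # level-1 prediction error
        pihat2 = 1.0 / (sigma2 + np.exp(omega2))   # predicted level-2 precision
        pi2 = pihat2 + muhat1 * (1.0 - muhat1)     # posterior level-2 precision
        mu2 += delta1 / pi2                        # precision-weighted update
        sigma2 = 1.0 / pi2
        predictions.append(muhat1)
    return np.array(predictions)

# Belief trajectory across a contingency reversal (p = 0.8, then p = 0.2)
u = np.r_[np.random.binomial(1, 0.8, 60), np.random.binomial(1, 0.2, 60)]
print(two_level_binary_hgf(u).round(2))
```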

This lecture gives a detailed description of how to process workflows in the virtual research environment (VRE), including approaches to standardization, metadata, and containerization, as well as constructing and maintaining scientific pipelines.

Difficulty level: Intermediate
Duration: 1:03:55
Speaker: Patrik Bey

This lecture provides an introduction to the Brain Imaging Data Structure (BIDS), a standard for organizing human neuroimaging datasets.

Difficulty level: Intermediate
Duration: 56:49
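
As an illustration of the standard, the sketch below creates a minimal BIDS skeleton in Python; the naming scheme follows the BIDS specification, while the subject, task, and metadata values are placeholders.

```python
import json
from pathlib import Path

root = Path("my_dataset")

# Every BIDS dataset has a dataset_description.json at its root
root.mkdir(exist_ok=True)
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "Example dataset", "BIDSVersion": "1.8.0"}, indent=2))

# Data are grouped per subject, then per modality (anat, func, ...)
func = root / "sub-01" / "func"
func.mkdir(parents=True, exist_ok=True)

# Imaging files follow sub-<label>[_task-<label>]_<suffix>.nii.gz, each
# paired with a JSON sidecar holding acquisition metadata
(func / "sub-01_task-rest_bold.json").write_text(
    json.dumps({"TaskName": "rest", "RepetitionTime": 2.0}, indent=2))
```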

This tutorial covers the fundamentals of collaborating with Git and GitHub.

Difficulty level: Intermediate
Duration: 2:15:50
Speaker: Elizabeth DuPre

This lecture and tutorial focus on measuring human functional brain networks, as well as how to account for inherent variability within those networks.

Difficulty level: Intermediate
Duration: 50:44
Speaker: Caterina Gratton

This lesson provides an overview of Jupyter notebooks, JupyterLab, and Binder, as well as their applications within the field of neuroimaging, particularly when it comes to the writing phase of your research.

Difficulty level: Intermediate
Duration: 50:28
Speaker: Elizabeth DuPre

In this lesson, you will learn about the Python project Nipype, an open-source, community-developed initiative under the umbrella of NiPy. Nipype provides a uniform interface to existing neuroimaging software and facilitates interaction between these packages within a single workflow.

Difficulty level: Intermediate
Duration: 1:25:05
Speaker: Satrajit Ghosh
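
A minimal sketch of the Nipype pattern described here: wrap existing tools as nodes and connect them into a workflow. This assumes FSL is installed, and the input file name is a placeholder.

```python
from nipype import Node, Workflow
from nipype.interfaces import fsl

# Each Node wraps one tool behind Nipype's uniform interface
skullstrip = Node(fsl.BET(in_file="T1w.nii.gz"), name="skullstrip")
smooth = Node(fsl.IsotropicSmooth(fwhm=4), name="smooth")

# The Workflow resolves dependencies and caches intermediate results
wf = Workflow(name="preproc", base_dir="work")
wf.connect(skullstrip, "out_file", smooth, "in_file")
wf.run()
```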

This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.

Difficulty level: Intermediate
Duration: 3:09:12
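
For a concrete taste of the "moving to the cloud" part, here is a small sketch using boto3, the AWS SDK for Python. The bucket and file names are hypothetical, and credentials are assumed to be configured already (e.g., via `aws configure`).

```python
import boto3

s3 = boto3.client("s3")

# Push a local result file into an S3 bucket
s3.upload_file("results/analysis.csv", "my-research-bucket", "analysis.csv")

# List the bucket's contents to confirm the upload
resp = s3.list_objects_v2(Bucket="my-research-bucket")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```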

This lecture provides an introduction to entropy in general, and multi-scale entropy (MSE) in particular, highlighting the potential clinical applications of the latter. 

Difficulty level: Intermediate
Duration: 39:05
Speaker: Jil Meier
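
In brief, MSE coarse-grains a signal at increasing scales and computes sample entropy at each scale. Below is a compact sketch; the parameter choices m=2 and r=0.2·SD are conventional defaults, not values taken from the lecture.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of length-m templates within
    tolerance r*std(x) (Chebyshev distance); A does the same for m+1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return np.sum(d[np.triu_indices(len(t), k=1)] <= tol)

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1), m, r)
            for s in scales]

print(multiscale_entropy(np.random.randn(1000)))
```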

This lecture gives an overview of how to prepare and preprocess neuroimaging (EEG/MEG) data for use in The Virtual Brain (TVB).

Difficulty level: Intermediate
Duration: 1:40:52
Speaker: Paul Triebkorn
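
A generic preprocessing sketch with MNE-Python is shown below; the lecture's TVB-specific steps (such as mapping sensor data onto TVB's source space) go beyond this, and the file name and filter settings are illustrative.

```python
import mne

raw = mne.io.read_raw_fif("subject_meg.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)   # band-pass away slow drift and high noise
raw.notch_filter(freqs=50.0)          # remove power-line interference
raw.resample(250)                     # downsample before model fitting
raw.save("subject_meg_clean.fif", overwrite=True)
```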

This lecture provides a general introduction to epilepsy, as well as why and how TVB can prove useful in building and testing models of epilepsy.

Difficulty level: Intermediate
Duration: 37:12
Speaker: Julie Courtiol
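
For orientation, here is a minimal sketch of what an Epileptor-based simulation looks like in TVB's scripting interface; the parameters are library defaults rather than a fitted patient model, and the exact API may differ across TVB versions.

```python
import numpy as np
from tvb.simulator.lab import (models, connectivity, coupling,
                               integrators, monitors, simulator)

sim = simulator.Simulator(
    model=models.Epileptor(),                    # phenomenological seizure model
    connectivity=connectivity.Connectivity.from_file(),  # bundled demo connectome
    coupling=coupling.Difference(a=np.array([1.0])),
    integrator=integrators.HeunDeterministic(dt=0.05),
    monitors=(monitors.TemporalAverage(period=1.0),),
)
sim.configure()
(time, data), = sim.run(simulation_length=1000.0)  # ms of simulated activity
```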

This lecture covers the rationale for developing the DAQCORD, a framework for the design, documentation, and reporting of data curation methods in order to advance the scientific rigour, reproducibility, and analysis of data.

Difficulty level: Intermediate
Duration: 17:08
Speaker: Ari Ercole

This book was written to introduce researchers and students in a variety of research fields to the intersection of data science and neuroimaging. It reflects our own experience of doing research at this intersection, and of working with students and collaborators who come from a variety of backgrounds and have a variety of reasons for wanting to use data science approaches in their work. The tools and ideas we chose to write about are all ones we have used in our own research; many of them we use daily. This was important to us for a few reasons. First, we want to teach people things that we ourselves find useful. Second, it allowed us to write the book with a focus on solving specific analysis tasks: in many of the chapters, we walk you through ideas while implementing them in code and with data. We believe this is a good way to learn about data analysis, because it provides a connecting thread from scientific questions, through the data and its representation, to implementing specific answers to those questions. Finally, we find these ideas compelling and fruitful; that is why we were drawn to them in the first place. We hope that our enthusiasm for the ideas and tools described in this book will be infectious enough to convince readers of their value.

Difficulty level: Intermediate

This Jupyter Book is a series of interactive tutorials about quantitative T1 mapping, powered by qMRLab. Most figures are generated with Plot.ly – you can play with them by hovering your mouse over the data, zooming in (click and drag) and out (double click), moving the sliders, and changing the drop-down options. To view the code that was used to generate the figures in this tutorial, hover your cursor in the top left corner of the frame that contains the tutorial and click the checkbox “All cells” in the popup that appears.

Jupyter Lab notebooks of these tutorials are also available through MyBinder, and inline code modification inside the Jupyter Book is provided by Thebelab. For both options, you can modify the code, change the figures, and regenerate the HTML used to create the tutorial below. This Jupyter Book also uses a Script of Scripts (SoS) kernel, allowing us to process the data using qMRLab in MATLAB/Octave and plot the figures with Plot.ly using Python, all within the same Jupyter Notebook.

Difficulty level: Intermediate
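
As a small taste of the kind of figure these notebooks generate, the sketch below plots ideal inversion-recovery curves with Plotly in Python. The signal model Mz(TI)/M0 = 1 - 2·exp(-TI/T1) assumes perfect inversion and a long TR, and the T1 values are illustrative rather than taken from the tutorial.

```python
import numpy as np
import plotly.graph_objects as go

TI = np.linspace(0, 5000, 200)          # inversion times (ms)

fig = go.Figure()
for T1 in (600, 900, 1400):             # illustrative tissue T1 values (ms)
    # Ideal inversion recovery: Mz(TI)/M0 = 1 - 2*exp(-TI/T1)
    fig.add_trace(go.Scatter(x=TI, y=1 - 2 * np.exp(-TI / T1),
                             name=f"T1 = {T1} ms"))
fig.update_layout(xaxis_title="TI (ms)", yaxis_title="Mz / M0")
fig.show()
```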