This is the first of two workshops on reproducibility in science, during which participants are introduced to the concepts of FAIR and open science. After discussing the definition of and need for FAIR science, participants are walked through tutorials on installing and using GitHub and Docker, two powerful, open-source tools for versioning and publishing code and software, respectively.
This lesson contains both a lecture and a tutorial component. The lecture (0:00–20:03 of the YouTube video) discusses both the need for intersectional approaches in healthcare and the impact of neglecting intersectionality in patient populations. The lecture is followed by a practical tutorial in both Python and R on how to assess intersectional bias in datasets. Links to the relevant code and data are provided below.
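As a hedged illustration of the kind of check the tutorial covers (the file path and column names below are hypothetical, not taken from the tutorial's data), intersectional bias can be probed by cross-tabulating an outcome across intersecting subgroups rather than across single attributes:

```python
# Minimal sketch (not the tutorial's own code): compare an outcome metric
# across intersecting subgroups with pandas. File path and column names
# are hypothetical.
import pandas as pd

df = pd.read_csv("clinical_dataset.csv")  # hypothetical dataset

# Marginal group sizes can hide intersectional gaps; cross-tabulate instead.
counts = df.groupby(["sex", "ethnicity"]).size().unstack(fill_value=0)
print(counts)  # reveals sparsely represented intersections

# Outcome (e.g., diagnosis rate) stratified by the same intersections.
rates = df.groupby(["sex", "ethnicity"])["diagnosis"].mean().unstack()
print(rates)  # a large spread across cells suggests intersectional bias
```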
This is a hands-on tutorial on PLINK, the open-source whole-genome association analysis toolset. The aims of this tutorial are to teach users how to perform basic quality control on genetic datasets and how to identify and understand GWAS summary statistics.
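A minimal sketch of the kind of quality-control and association commands PLINK 1.9 supports, driven from Python; the fileset prefixes are hypothetical and the thresholds are common defaults rather than the tutorial's own settings:

```python
# Hedged sketch: standard PLINK 1.9 quality-control filters and a basic
# association test, invoked from Python. File prefixes are hypothetical.
import subprocess

subprocess.run([
    "plink", "--bfile", "raw_data",   # hypothetical binary fileset prefix
    "--maf", "0.01",                  # drop rare variants
    "--geno", "0.05",                 # drop SNPs with >5% missingness
    "--mind", "0.05",                 # drop samples with >5% missingness
    "--hwe", "1e-6",                  # Hardy-Weinberg equilibrium filter
    "--make-bed", "--out", "qc_data",
], check=True)

# Case/control association test producing GWAS summary statistics.
subprocess.run(
    ["plink", "--bfile", "qc_data", "--assoc", "--out", "gwas_results"],
    check=True,
)
```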
This is a tutorial on using the open-source software PRSice to calculate a set of polygenic risk scores (PRS) for a study sample. Users will also learn how to read PRS into R, visualize distributions, and perform basic association analyses.
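The tutorial's downstream steps are carried out in R; below is a hedged Python analogue (assuming a PRSice `.best` score file with FID, IID, and PRS columns, plus a hypothetical whitespace-delimited phenotype file) showing how scores can be read, plotted, and tested for association:

```python
# Hedged Python analogue of the R steps described above. File names and the
# phenotype column ("pheno") are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

prs = pd.read_csv("study.best", sep=r"\s+")       # PRSice score output
pheno = pd.read_csv("phenotype.txt", sep=r"\s+")  # FID, IID, pheno
data = prs.merge(pheno, on=["FID", "IID"])

# Visualize the PRS distribution across the study sample.
data["PRS"].hist(bins=50)
plt.xlabel("Polygenic risk score")
plt.show()

# Simple association of phenotype with PRS (covariates omitted for brevity).
print(smf.ols("pheno ~ PRS", data=data).fit().summary())
```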
This is a tutorial introducing participants to the basics of RNA-sequencing data and to analyzing such data with Seurat.
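Seurat itself is an R package; as a hedged sketch of the analogous workflow in Python, the Scanpy library covers the same basic steps (loading, filtering, normalizing, clustering, and visualizing cells) on a small public demo dataset:

```python
# Analogous single-cell RNA-seq workflow in Python with Scanpy (not the
# tutorial's own Seurat code).
import scanpy as sc

adata = sc.datasets.pbmc3k()               # small public demo dataset
sc.pp.filter_cells(adata, min_genes=200)   # basic quality filters
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                        # cluster cells
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")          # visualize putative cell types
```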
This tutorial demonstrates how to perform cell-type deconvolution in order to estimate how the proportions of cell types in the brain change in response to various conditions. While these techniques may be useful in addressing a wide range of scientific questions, this tutorial will focus on the cellular changes associated with major depressive disorder (MDD).
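One simple deconvolution strategy (an illustration only, not necessarily the method used in the tutorial) is non-negative least squares against a reference signature matrix; the input files below are hypothetical:

```python
# Illustrative sketch: estimate cell-type proportions in bulk expression data
# by non-negative least squares against a cell-type signature matrix.
import numpy as np
from scipy.optimize import nnls

# signature: genes x cell types (marker expression per reference cell type)
# bulk:      genes x samples   (mixed-tissue expression to deconvolve)
signature = np.load("signature_matrix.npy")   # hypothetical inputs
bulk = np.load("bulk_expression.npy")

proportions = []
for sample in bulk.T:
    coef, _ = nnls(signature, sample)         # non-negative mixing weights
    proportions.append(coef / coef.sum())     # normalize to proportions
proportions = np.array(proportions)           # samples x cell types
```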
This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices.
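As a hedged example of one common pipeline (the tutorial's own repositories may use different tools, atlases, and preprocessing), Nilearn can extract regional time series from preprocessed fMRI and turn them into a correlation-based connectivity matrix:

```python
# Hedged connectome sketch with Nilearn; the atlas and example dataset are
# illustrative choices, not necessarily those used in the tutorial.
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure

atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")
func = datasets.fetch_development_fmri(n_subjects=1)

# Extract one time series per atlas region, with confound regression.
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)
time_series = masker.fit_transform(func.func[0], confounds=func.confounds[0])

# Region-by-region correlation matrix = a functional connectome.
conn = ConnectivityMeasure(kind="correlation")
matrix = conn.fit_transform([time_series])[0]
```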
Similarity Network Fusion (SNF) is a computational method for data integration across various kinds of measurements, aimed at taking advantage of the common as well as complementary information in different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.
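For participants who prefer Python, a hedged sketch using the snfpy port of SNF (the workshop itself works in RStudio, and the subject-by-feature matrices below are hypothetical) looks like this:

```python
# Hedged sketch assuming the Python port "snfpy" (pip install snfpy).
# Rows of the two input matrices are assumed to be aligned across subjects.
import numpy as np
import snf

eeg = np.load("eeg_features.npy")          # subjects x EEG features
genomic = np.load("genomic_features.npy")  # subjects x genomic features

# Build one affinity (similarity) network per data type, then fuse them.
affinities = snf.make_affinity([eeg, genomic], metric="euclidean", K=20, mu=0.5)
fused = snf.snf(affinities, K=20)

# The fused network can then be clustered, e.g. with spectral clustering.
best, _ = snf.get_n_clusters(fused)
print("suggested number of clusters:", best)
```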
As a part of NeuroHackademy 2020, Elizabeth DuPre gives a lecture on "Nilearn", a Python package that provides flexible statistical and machine-learning tools for brain volumes by leveraging the scikit-learn Python toolbox for multivariate statistics. This includes predictive modelling, classification, decoding, and connectivity analysis.
This video is courtesy of the University of Washington eScience Institute.
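A minimal decoding sketch in the spirit of the lecture (not Dr. DuPre's own example) uses Nilearn's Decoder, which wraps a scikit-learn classifier behind masking and cross-validation, on the public Haxby dataset:

```python
# Illustrative decoding example with Nilearn's scikit-learn integration.
import pandas as pd
from nilearn import datasets
from nilearn.decoding import Decoder
from nilearn.image import index_img

haxby = datasets.fetch_haxby()
behavior = pd.read_csv(haxby.session_target[0], sep=" ")
mask = behavior["labels"].isin(["face", "house"])   # two-condition subset

# Select the matching volumes and decode condition from brain activity.
fmri = index_img(haxby.func[0], mask)
decoder = Decoder(estimator="svc", mask=haxby.mask_vt[0], standardize=True, cv=5)
decoder.fit(fmri, behavior["labels"][mask])
print(decoder.cv_scores_)   # cross-validated accuracy per condition
```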
This lesson from freeCodeCamp introduces Scikit-learn, the most widely used machine learning Python library.
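A minimal scikit-learn example in the same spirit (independent of the lesson's own notebooks) fits a classifier on a built-in dataset and evaluates held-out accuracy:

```python
# Basic scikit-learn workflow: load data, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```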
This lesson provides a comprehensive introduction to the command line and 50 popular Linux commands. This is a long introduction (nearly 5 hours), but well worth it if you are going to spend a good part of your career working from a terminal, which is likely if you are interested in flexibility, power, and reproducibility in neuroscience research. This lesson is courtesy of freeCodeCamp.
This lesson provides a hands-on tutorial for generating simulated brain data within the EBRAINS ecosystem.
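One EBRAINS tool for generating simulated brain activity is The Virtual Brain (TVB); the sketch below is a hedged TVB example, and the lesson itself may use a different simulator, connectome, or parameter settings:

```python
# Hedged sketch: simulate regional brain activity with The Virtual Brain
# using its bundled default connectome and a generic oscillator model.
import numpy as np
from tvb.simulator.lab import (connectivity, coupling, integrators,
                               models, monitors, simulator)

sim = simulator.Simulator(
    connectivity=connectivity.Connectivity.from_file(),  # default connectome
    model=models.Generic2dOscillator(),
    coupling=coupling.Linear(a=np.array([0.154])),
    integrator=integrators.HeunDeterministic(dt=0.1),
    monitors=(monitors.TemporalAverage(period=1.0),),
    simulation_length=1000.0,
)
sim.configure()

(time, data), = sim.run()   # simulated regional time series
```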
This lecture and tutorial focuses on measuring human functional brain networks, as well as how to account for inherent variability within those networks.
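As an illustration of how such networks are often quantified (not the lesson's own code; the input array is hypothetical), a functional network can be built from regional time series and summarized with graph metrics:

```python
# Illustrative sketch: correlation-based functional network plus simple
# graph measures. Input is a hypothetical time points x regions array.
import numpy as np
import networkx as nx

ts = np.load("regional_timeseries.npy")   # hypothetical input
corr = np.corrcoef(ts.T)                  # region x region correlations
np.fill_diagonal(corr, 0)

adj = (corr > 0.3).astype(int)            # simple threshold -> adjacency
G = nx.from_numpy_array(adj)

# Measures often used to characterize and compare individual networks.
print("mean degree:", np.mean([d for _, d in G.degree()]))
print("global clustering:", nx.transitivity(G))
```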