This lesson provides a comprehensive introduction to the command line and 50 popular Linux commands. This is a long introduction (nearly 5 hours), but well worth it if you are going to spend a good part of your career working from a terminal, which is likely if you are interested in flexibility, power, and reproducibility in neuroscience research. This lesson is courtesy of freeCodeCamp.
This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices.
This lecture and tutorial focus on measuring human functional brain networks and on accounting for the inherent variability within those networks.
This is the first of two workshops on reproducibility in science, during which participants are introduced to the concepts of FAIR and open science. After discussing the definition of and need for FAIR science, participants are walked through tutorials on installing and using GitHub and Docker, powerful tools for versioning and publishing code and software, respectively.
In this lesson, users learn about the need for large-scale, transparent, collaborative science and follow a tutorial on using Synapse to facilitate reusable and reproducible research.
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 of the PDF below.
This lesson provides a tutorial on how to handle writing very large data in MatNWB.
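The lesson itself works in MatNWB (MATLAB). As a rough analog of the same idea in Python, the sketch below uses PyNWB's H5DataIO to write a large dataset with chunking and compression; the file name, array shape, and parameter values are illustrative assumptions, not taken from the lesson.

```python
# Minimal sketch (Python/PyNWB analog of the MatNWB large-data approach, assumptions noted above).
from datetime import datetime, timezone

import numpy as np
from hdmf.backends.hdf5 import H5DataIO
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

nwbfile = NWBFile(
    session_description="large-data writing demo",
    identifier="demo-large-write",            # hypothetical identifier
    session_start_time=datetime.now(timezone.utc),
)

data = np.random.randn(100_000, 64)           # stand-in for a large acquisition array
# For data too large for memory, a streaming iterator (e.g. hdmf's DataChunkIterator)
# can be wrapped the same way.
wrapped = H5DataIO(
    data=data,
    chunks=(10_000, 64),                      # chunk shape chosen to match typical reads
    compression="gzip",
    compression_opts=4,
)

nwbfile.add_acquisition(
    TimeSeries(name="ephys", data=wrapped, unit="volts", rate=30_000.0)
)

with NWBHDF5IO("large_demo.nwb", "w") as io:  # hypothetical output path
    io.write(nwbfile)
```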
This lesson provides an overview of the CaImAn package, as well as a demonstration of usage with NWB.
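As a companion to that description, here is a minimal, hedged sketch of a basic CaImAn CNMF run; the file name, parameter values, and output path are illustrative assumptions rather than the lesson's own settings, and the NWB-specific portions of the demonstration are not reproduced here.

```python
# Minimal sketch of a CaImAn CNMF source-extraction run (illustrative parameters).
import caiman as cm
from caiman.source_extraction.cnmf import cnmf, params

# Start a local parallel cluster for the fit.
c, dview, n_processes = cm.cluster.setup_cluster(backend="multiprocessing", n_processes=None)

opts = params.CNMFParams(params_dict={
    "fnames": ["demo_movie.tif"],   # hypothetical two-photon movie
    "fr": 30,                       # imaging rate (Hz)
    "p": 1,                         # order of the autoregressive model
    "nb": 2,                        # number of background components
    "rf": 15,                       # half-size of patches (pixels)
    "K": 4,                         # expected components per patch
    "gSig": [4, 4],                 # expected half-size of neurons (pixels)
})

cnm = cnmf.CNMF(n_processes, params=opts, dview=dview)
cnm = cnm.fit_file()                # memory-map, patch, and fit the movie

cnm.save("cnmf_results.hdf5")       # persist the estimates for later inspection
cm.stop_server(dview=dview)
```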
This lesson gives an overview of the SpikeInterface package, including demonstrations of data loading, preprocessing, spike sorting, and comparison of spike sorters.
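To make the scope of that workflow concrete, here is a minimal, hedged sketch using SpikeInterface; the reader, sorter names, and filter settings are illustrative assumptions, and the chosen sorters must be installed (or run via container support) for these calls to succeed.

```python
# Minimal sketch of a load -> preprocess -> sort -> compare workflow in SpikeInterface.
import spikeinterface.full as si

# Load a recording (a hypothetical Open Ephys session; many other readers exist).
recording = si.read_openephys("/path/to/openephys_session")

# Preprocess: band-pass filter, then common median reference across channels.
recording_f = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording_cmr = si.common_reference(recording_f, reference="global", operator="median")

# Run two different spike sorters on the same preprocessed recording.
sorting_tdc = si.run_sorter("tridesclous", recording_cmr)
sorting_sc = si.run_sorter("spykingcircus", recording_cmr)

# Compare the two outputs: units matched across sorters are more likely to be real cells.
comparison = si.compare_two_sorters(sorting_tdc, sorting_sc)
print(comparison.get_matching())
```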
In this lesson, users will learn about the NWBWidgets package, including coverage of different data types and guidance on building custom widgets within this framework.
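For orientation, the minimal sketch below shows the typical entry point into NWBWidgets inside a Jupyter notebook; the file name is a placeholder, and customization details (such as supplying a modified visualization spec) are left to the lesson.

```python
# Minimal sketch: interactive exploration of an NWB file with nwbwidgets in Jupyter.
from pynwb import NWBHDF5IO
from nwbwidgets import nwb2widget

io = NWBHDF5IO("example_session.nwb", mode="r")  # placeholder file name
nwbfile = io.read()

# Renders a tabbed, hierarchical widget view of the file; each neurodata type
# is displayed with a sensible default visualization.
nwb2widget(nwbfile)
```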
This demonstration walks through how to import your data into MATLAB.
This lesson provides instruction on the various factors one must consider when preprocessing data and preparing it for statistical exploration and analysis.
This tutorial outlines, step by step, how to perform grouped analysis and change-point detection.
This tutorial walks through several common methods for visualizing your data in different ways depending on your data type.
This tutorial illustrates several ways to approach predictive modeling and machine learning with MATLAB.
This brief tutorial goes over how you can work with big data much as you would with data of any size.
In this tutorial, you will learn how to deploy your models outside of your local MATLAB environment, enabling wider sharing and collaboration.
This lesson gives a quick walkthrough of the Tidyverse, an "opinionated" collection of R packages designed for data science, including the use of readr, dplyr, tidyr, and ggplot2.
This lesson provides a hands-on tutorial for generating simulated brain data within the EBRAINS ecosystem.