This talk describes the relevance and power of using brain atlases as part of one's data integration pipeline.
In this hands-on session, you will learn how to explore and work with DataLad datasets, containers, and structures using Jupyter notebooks.
This lesson provides a thorough description of neuroimaging development over time, both conceptually and technologically. You will learn about the fundamentals of imaging techniques such as MRI and PET, as well as how the resultant data may be used to generate novel data visualization schemas.
This lecture covers a wide range of aspects regarding neuroinformatics and data governance, describing both their historical developments and current trajectories. Particular tools, platforms, and standards to make your research more FAIR are also discussed.
This video will demonstrate how to create and launch a pipeline using FreeSurfer on brainlife.io.
This lecture describes and characterizes input-output relationships in an information-theoretic context.
In this tutorial, you will learn the basic features of uploading and versioning your data within OpenNeuro.org.
This tutorial shows how to share your data in OpenNeuro.org.
Following the previous two tutorials on uploading and sharing data with OpenNeuro.org, this tutorial briefly covers how to run various analyses on your datasets.
This lesson provides instruction on how to infer results from incomplete data.
This lesson provides instruction on finding parameter values, computing confidence levels, and various other statistical methods employed in data investigation.
In this lesson, statistical methods and tools are described for estimating parameters in your dataset.
This lesson covers how to measure the correspondence between data and model.
In this lesson, you will learn the concepts behind choosing useful variables, as well as various analyses and tools to do so.
This lesson goes over some of the common problems in statistical modeling.
This lesson continues describing some of the common problems in statistical modeling, particularly when it comes to model validation.
You don't have to be a wizard to do statistics!
This lesson provides an overview of possible follow-up courses and subjects from the same publisher.
This talk highlights a set of platform technologies, software, and data collections that close and shorten the feedback cycle in research.
This lesson gives a quick walkthrough of the Tidyverse, an "opinionated" collection of R packages designed for data science, including the use of readr, dplyr, tidyr, and ggplot2.