This lesson provides a brief overview of the Python programming language, with an emphasis on tools relevant to data scientists.
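For a sense of the kind of tooling such an overview typically emphasizes, here is a minimal, illustrative snippet using NumPy and pandas (assumed libraries for illustration; the lesson itself may cover a different selection of tools):

```python
import numpy as np
import pandas as pd

# Simulated reaction times (in seconds) for two experimental conditions
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "condition": ["A"] * 50 + ["B"] * 50,
    "reaction_time": np.concatenate([
        rng.normal(0.45, 0.05, 50),   # condition A
        rng.normal(0.52, 0.06, 50),   # condition B
    ]),
})

# Summarize mean and standard deviation per condition
print(df.groupby("condition")["reaction_time"].agg(["mean", "std"]))
```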
This tutorial covers the fundamentals of collaborating with Git and GitHub.
This lesson provides a comprehensive introduction to the command line and 50 popular Linux commands. This is a long introduction (nearly 5 hours), but well worth it if you are going to spend a good part of your career working from a terminal, which is likely if you are interested in flexibility, power, and reproducibility in neuroscience research. This lesson is courtesy of freeCodeCamp.
This talk presents state-of-the-art methods for ensuring data privacy with a particular focus on medical data sharing across multiple organizations.
This lecture discusses the use of knowledge graphs in hospitals and the associated challenges of semantic interoperability.
This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health.
This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices.
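To give a concrete flavor of what programmatically producing a connectivity matrix can look like, here is a minimal sketch using the nilearn library and one of its example datasets (an illustrative assumption; the tutorial's own code and repositories may rely on different tools):

```python
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure

# Fetch an example atlas and one preprocessed functional run
atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")
data = datasets.fetch_development_fmri(n_subjects=1)

# Extract one mean time series per atlas region
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)
time_series = masker.fit_transform(data.func[0], confounds=data.confounds[0])

# Compute a region-by-region correlation (connectivity) matrix
conn = ConnectivityMeasure(kind="correlation")
connectivity_matrix = conn.fit_transform([time_series])[0]
print(connectivity_matrix.shape)  # (n_regions, n_regions)
```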
This lecture and tutorial focus on measuring human functional brain networks, as well as how to account for the inherent variability within those networks.
This lecture presents an overview of functional brain parcellations, as well as a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation.
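The core idea behind bootstrap aggregation of stable clusters can be sketched in a few lines: cluster many bootstrap resamples of the data, count how often pairs of voxels co-cluster, and then cluster the resulting stability matrix. The illustrative snippet below uses scikit-learn k-means on synthetic data; it is a simplified stand-in, not the actual BASC implementation used in the tutorials:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "voxel x time" data: 60 voxels drawn from 3 underlying clusters
n_voxels, n_timepoints, n_clusters, n_bootstraps = 60, 100, 3, 50
signals = rng.normal(size=(n_clusters, n_timepoints))
data = np.repeat(signals, n_voxels // n_clusters, axis=0)
data += rng.normal(scale=0.5, size=data.shape)

# Stability matrix: fraction of bootstraps in which two voxels co-cluster
stability = np.zeros((n_voxels, n_voxels))
for _ in range(n_bootstraps):
    # Resample timepoints with replacement (real BASC uses a circular block bootstrap)
    idx = rng.integers(0, n_timepoints, n_timepoints)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(data[:, idx])
    stability += (labels[:, None] == labels[None, :])
stability /= n_bootstraps

# Stable clusters are then obtained by clustering the stability matrix itself
consensus = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stability)
print(consensus)
```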
Maximize Your Research With Cloud Workspaces is a talk aimed at researchers who are looking for innovative ways to set up and execute their life science data analyses in a collaborative, extensible, open-source cloud environment. This panel discussion is brought to you by MetaCell and scientists from leading universities who share their experiences of advanced analysis and collaborative learning through the Cloud.
In this lecture, you will learn about virtual research environments (VREs), i.e., a computing platform and the software stack behind it, their technical limitations, and the security measures that should be considered during their implementation.
This lecture gives a detailed description of how to process workflows in a virtual research environment (VRE), including approaches to standardization, metadata, containerization, and constructing and maintaining scientific pipelines.
This lesson provides an overview of how to conceptualize, design, implement, and maintain neuroscientific pipelines via the cloud-based computational reproducibility platform Code Ocean.
In this workshop talk, you will receive a tour of the Code Ocean ScienceOps Platform, a centralized cloud workspace for all teams.
This lecture covers a wide range of aspects regarding neuroinformatics and data governance, describing both their historical developments and current trajectories. Particular tools, platforms, and standards to make your research more FAIR are also discussed.
This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.
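As a small, concrete example of the kind of programmatic access such a cloud setup enables, here is a minimal sketch that uploads a results file to Amazon S3 using the boto3 library (the bucket name and file paths are hypothetical placeholders, not part of the lecture):

```python
import boto3

# Create an S3 client; credentials are typically picked up from
# environment variables or ~/.aws/credentials
s3 = boto3.client("s3")

# Hypothetical bucket and object names for illustration
bucket = "my-neuro-results-bucket"
s3.upload_file("results/connectivity_matrix.csv", bucket, "sub-01/connectivity_matrix.csv")

# List what is now stored under the subject prefix
response = s3.list_objects_v2(Bucket=bucket, Prefix="sub-01/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```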
This lecture discusses how FAIR practices affect personalized data models, including workflows, challenges, and how to improve these practices.
In this talk, you will learn how brainlife.io works, and how it can be applied to neuroscience data.
As part of NeuroHackademy 2020, this lecture delves into cloud computing, focusing on Amazon Web Services.
This talk presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance computing facilities across Canada and around the world.