This lesson is the first part of a three-part series on the development of neuroinformatic infrastructure to ensure compliance with European data privacy standards and laws.
This lesson gives a quick introduction to the rest of this course, Research Workflows for Collaborative Neuroscience.
In this workshop talk, you will receive a tour of the Code Ocean ScienceOps Platform, a centralized cloud workspace for all teams.
This talk describes approaches to maintaining integrated workflows and data management schemas, taking advantage of the many existing open-source, collaborative platforms.
In this third and final hands-on tutorial from the Research Workflows for Collaborative Neuroscience workshop, you will learn about workflow orchestration using open source tools like DataJoint and Flyte.
This lesson consists of a panel discussion, wrapping up the INCF Neuroinformatics Assembly 2023 workshop Research Workflows for Collaborative Neuroscience.
This lesson provides an introduction to DataLad, a free and open-source distributed data management system that keeps track of your data, creates structure, ensures reproducibility, supports collaboration, and integrates with widely used data infrastructure.
This lesson introduces several open science tools, such as Docker and Apptainer, which can be used to develop portable and reproducible software environments.
In this hands-on session, you will learn how to explore and work with DataLad datasets, containers, and structures using Jupyter notebooks.
This talk describes the challenges of sharing personal data, and health data in particular, including data anonymization and maintaining GDPR compliance.
This lecture provides a detailed description of how to incorporate HED annotation into your neuroimaging data pipeline.
In this lesson, you will learn about data management plans and why data sharing is important.
This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.
This quick visual walkthrough presents the steps required to upload data to a brainlife project using the graphical user interface (GUI).
This video documents the process of uploading data to a brainlife project using ezBIDS.
This short video walks you through the steps of publishing a dataset on brainlife, an open-source, free and secure reproducible neuroscience analysis platform.
This video documents how to visualize the provenance of each step performed to generate a data object on brainlife.
This video documents how to download and run the "reproduce.sh" script, which automatically re-runs all of the steps needed to regenerate a data object locally on a user's machine.
This video documents how to create a pipeline rule for batch processing on brainlife.
This lesson describes and demonstrates four different ways to upload data to brainlife.io.