
This is the first of two workshops on reproducibility in science, in which participants are introduced to the concepts of FAIR and open science. After discussing the definition of and need for FAIR science, participants are walked through tutorials on installing and using GitHub and Docker, two powerful open-source tools for versioning and publishing code and software, respectively.

Difficulty level: Intermediate
Duration: 1:20:58

This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).

 

This lesson corresponds to slides 65-90 of the PDF below. 

Difficulty level: Intermediate
Duration: 1:15:04
Speaker: Daniel Hauke
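For orientation, the following is a minimal sketch (not the tutorial's own code) of the precision-weighted prediction-error update that sits at the heart of HGF-style belief updating. All names and parameter values are illustrative, and the full HGF adds higher levels that track volatility so that the learning rate itself can adapt when the environment changes.

```python
import numpy as np

def update_belief(mu, pi, u, pi_u):
    """One precision-weighted belief update: the belief mean moves toward
    the new input in proportion to the input's relative precision."""
    pi_post = pi + pi_u                 # posterior precision
    learning_rate = pi_u / pi_post      # precision-weighted learning rate
    mu_post = mu + learning_rate * (u - mu)
    return mu_post, pi_post

rng = np.random.default_rng(0)
# Simulated inputs whose hidden mean jumps halfway through the session.
inputs = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])

mu, pi = 0.0, 1.0                       # prior belief: mean 0, precision 1
trajectory = []
for u in inputs:
    mu, pi = update_belief(mu, pi, u, pi_u=1.0)
    trajectory.append(mu)

# With this fixed model the belief converges toward the running average
# (about 1.5 here) and adapts slowly after the jump; the HGF's higher
# levels track volatility so the learning rate can rise again after changes.
print(trajectory[-1])
```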

Similarity Network Fusion (SNF) is a computational method for integrating data across different kinds of measurements, designed to exploit both the common and the complementary information in different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.

Difficulty level: Intermediate
Duration: 1:21:38
Speaker: Dan Felsky
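Since the workshop runs SNF in RStudio, the NumPy sketch below is only meant to convey the core cross-diffusion idea under simplified assumptions: each data type yields an affinity network, each network is iteratively updated through the average of the others via its k-nearest-neighbour kernel, and the results are averaged into a fused network. Kernel, normalization, and parameter choices are simplified relative to the published method.

```python
import numpy as np

def affinity(X, sigma=1.0):
    """Dense affinity matrix from Euclidean distances (simplified kernel)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)          # row-normalize

def knn_kernel(W, k=5):
    """Sparse local kernel: keep each sample's k strongest neighbours."""
    S = np.zeros_like(W)
    for i, row in enumerate(W):
        nn = np.argsort(row)[-k:]                     # indices of top-k entries
        S[i, nn] = row[nn]
    return S / S.sum(axis=1, keepdims=True)

def snf(views, k=5, iterations=20):
    """Cross-diffusion: each view's network is updated through the
    average of the other views' networks, then the results are fused."""
    P = [affinity(X) for X in views]
    S = [knn_kernel(W, k) for W in P]
    for _ in range(iterations):
        P_new = []
        for v in range(len(P)):
            others = sum(P[u] for u in range(len(P)) if u != v) / (len(P) - 1)
            W = S[v] @ others @ S[v].T
            P_new.append(W / W.sum(axis=1, keepdims=True))
        P = P_new
    return sum(P) / len(P)                            # fused similarity network

# Toy example: two "data types" measured on the same 30 samples.
rng = np.random.default_rng(1)
eeg_like = rng.normal(size=(30, 8))
genomic_like = rng.normal(size=(30, 100))
print(snf([eeg_like, genomic_like]).shape)            # (30, 30)
```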

This lecture describes in detail how to process workflows in the virtual research environment (VRE), including approaches to standardization, metadata, and containerization, and to constructing and maintaining scientific pipelines.

Difficulty level: Intermediate
Duration: 1:03:55
Speaker: Patrik Bey

This lesson is the first of three hands-on tutorials in the workshop Research Workflows for Collaborative Neuroscience. It demonstrates how to visualize data with Scanpy, a scalable toolkit for analyzing single-cell gene expression.

Difficulty level: Intermediate
Duration: 25:26
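As a companion to this tutorial, here is a hedged sketch of a typical Scanpy visualization path. The bundled PBMC dataset, preprocessing parameters, and plotting choices are illustrative and not necessarily those used in the lesson; Leiden clustering additionally requires the leidenalg package.

```python
import scanpy as sc

# A small public dataset bundled with Scanpy (illustrative only).
adata = sc.datasets.pbmc3k()
adata.var_names_make_unique()

# Standard preprocessing before visualization.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

# Dimensionality reduction and neighbourhood graph, then a UMAP plot.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.umap(adata)
sc.tl.leiden(adata)                 # clustering to colour the embedding
sc.pl.umap(adata, color="leiden")
```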

In this third and final hands-on tutorial from the Research Workflows for Collaborative Neuroscience workshop, you will learn about workflow orchestration using open-source tools such as DataJoint and Flyte.

Difficulty level: Intermediate
Duration: 22:36
Speaker: Daniel Xenes
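To make the orchestration idea concrete, here is a minimal, hypothetical Flyte example: tasks are typed Python functions, and a workflow wires them into a reproducible DAG that can be run locally during development and later registered on a Flyte cluster. The task names and logic are invented for illustration; a DataJoint pipeline sketch follows the next entry.

```python
from typing import List

from flytekit import task, workflow

@task
def preprocess(raw: List[float]) -> List[float]:
    """Toy preprocessing step: mean-centre the values."""
    mean = sum(raw) / len(raw)
    return [x - mean for x in raw]

@task
def summarize(values: List[float]) -> float:
    """Toy analysis step: return the largest absolute deviation."""
    return max(abs(x) for x in values)

@workflow
def analysis_pipeline(raw: List[float]) -> float:
    """The workflow composes the tasks into a typed DAG."""
    return summarize(values=preprocess(raw=raw))

if __name__ == "__main__":
    # Workflows can be executed locally before being registered on a cluster.
    print(analysis_pipeline(raw=[1.0, 2.0, 6.0]))
```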

This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.

Difficulty level: Intermediate
Duration: 47:00
Speaker: Dimitri Yatsenko
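As a rough illustration of what a DataJoint pipeline looks like (plain DataJoint rather than the full DataJoint Elements shown in the lecture), the sketch below defines a manually populated table and a computed table whose make() method derives results from it. The schema name, table definitions, and data are hypothetical, and running the code requires a database connection configured via dj.config.

```python
import datajoint as dj

# Connection settings are hypothetical; in practice they come from
# dj.config or environment variables.
schema = dj.Schema('tutorial_sessions')

@schema
class Session(dj.Manual):
    definition = """
    session_id : int            # unique session number
    ---
    session_note : varchar(255)
    """

@schema
class SessionSummary(dj.Computed):
    definition = """
    -> Session
    ---
    note_length : int           # derived quantity computed by make()
    """

    def make(self, key):
        note = (Session & key).fetch1('session_note')
        self.insert1(dict(key, note_length=len(note)))

# Typical usage: insert upstream data, then populate downstream tables.
# Session.insert1({'session_id': 1, 'session_note': 'first recording'})
# SessionSummary.populate()
```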

This lecture covers the rationale for developing DAQCORD, a framework for the design, documentation, and reporting of data curation methods, in order to advance the scientific rigour, reproducibility, and analysis of data.

Difficulty level: Intermediate
Duration: 17:08
Speaker: Ari Ercole

The Medical Informatics Platform (MIP) provides federated analytics for diagnosis and research in clinical neuroscience. The federated analytics is made possible by a distributed engine that executes computations and transfers information between the members of the federation (hospital nodes). In this talk the speaker describes the process of designing and implementing new analytical tools, i.e. statistical and machine-learning algorithms. Mr. Sakellariou further describes the environment in which these federated algorithms run, the challenges involved, the available tools, the principles that guide the design, and the general methodology followed for each new algorithm. One of the most important challenges is to design these tools in a way that does not compromise the privacy of the clinical data involved. The speaker shows how to address the main questions that arise when designing such algorithms: how to decompose and distribute the computations, and what kind of information to exchange between nodes, in order to comply with the privacy constraint mentioned above. Finally, the validation of these federated algorithms is briefly touched upon.

Difficulty level: Intermediate
Duration: 20:26
Speaker: Jason Sakellariou
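As a toy illustration of the decomposition principle described in this talk (not the MIP engine itself), the sketch below computes a federated mean and variance: each hypothetical hospital node shares only aggregate summaries, never patient-level records, and a central aggregator combines them.

```python
import numpy as np

def local_summary(values):
    """Each hospital node computes aggregate statistics locally;
    only these summaries leave the node, never patient-level records."""
    values = np.asarray(values, dtype=float)
    return {"n": values.size,
            "sum": values.sum(),
            "sum_sq": np.square(values).sum()}

def federated_mean_and_variance(summaries):
    """The central aggregator combines node summaries into global statistics."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    total_sq = sum(s["sum_sq"] for s in summaries)
    mean = total / n
    variance = total_sq / n - mean ** 2
    return mean, variance

# Toy data held at three hypothetical hospital nodes.
node_data = [np.random.default_rng(i).normal(70, 10, size=50) for i in range(3)]
summaries = [local_summary(v) for v in node_data]
print(federated_mean_and_variance(summaries))
```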