This lesson continues with the second workshop on reproducible science, focusing on additional open source tools for researchers and data scientists, such as the R programming language and associated tools like RStudio and R Markdown. Users are also introduced to Python and Jupyter (IPython) notebooks and Google Colab, and are given hands-on tutorials on creating a Binder environment and building containers with Docker and Singularity.
This lesson provides a brief overview of the Python programming language, with an emphasis on tools relevant to data scientists.
In this lesson, users can follow along as a spaghetti script written in MATLAB is turned into understandable, reusable code living happily in a well-organized GitHub repository.
This lesson gives a quick walkthrough of the Tidyverse, an "opinionated" collection of R packages designed for data science, including the use of readr, dplyr, tidyr, and ggplot2.
This lecture covers FAIR atlases, including their background and how they can be constructed in line with the FAIR principles.
This lecture covers post-war developments in the science of the mind, beginning with the cognitive revolution and concluding with living machines.
This lecture provides an overview of depression, covering its epidemiology and course, clinical presentation, somatic comorbidities, and treatment options.
In this lesson, you will hear about the current challenges in data management, as well as the policies and resources aimed at addressing them.
This lecture covers how the NIDM data format can be used within BIDS to make your datasets more searchable, and how to optimize your dataset searches.
This lecture covers positron emission tomography (PET) imaging and the Brain Imaging Data Structure (BIDS), and how they work together within the PET-BIDS standard to make neuroscience more open and FAIR.
This lecture contains an overview of electrophysiology data reuse within the EBRAINS ecosystem.
This lecture contains an overview of the Distributed Archives for Neurophysiology Data Integration (DANDI) archive, its ties to the FAIR principles and open-source software, its integrations with other programs, and its upcoming features.
This lecture discusses how to standardize the organization of electrophysiology data in order to make it more FAIR.
This session discusses the secret life of your dataset metadata: the ways in which, for many years to come, it will work non-stop to foster the visibility, reach, and impact of your work. It explores how metadata helps your dataset travel through the global research infrastructure, and how data repositories and discovery services can use this metadata to help launch your dataset into the world.
This lesson provides information on developing data management plans (DMPs), including an overview of how DMPs contribute to effective research efforts, as well as specific development resources and DMP examples.
In this session, participants will take an in-depth look at the newly launched DMP Assistant 2.0, including all of its enhanced key features for both end-users and institutional administrators, as well as a brief look at the future of the platform.
This lesson provides a short overview of the main features of the Canadian Open Neuroscience Platform (CONP) Portal, a web interface that facilitates open science for the neuroscience community by simplifying global access to and sharing of datasets and tools. The Portal internalizes the typical cycle of a research project, beginning with data acquisition, followed by data processing with published tools, and ultimately the publication of results with a link to the original dataset.
This lesson discusses the need for and approaches to integrating data across the various temporal and spatial scales in which brain activity can be measured.
This lesson consists of lecture and tutorial components, focusing on resources and tools which facilitate multi-scale brain modeling and simulation.
In this talk, challenges of handling complex neuroscientific data are discussed, as well as tools and services for the annotation, organization, storage, and sharing of these data.