Félix-Antoine Fortin from Calcul Québec gives an introduction to high-performance computing with the Compute Canada network, first providing an overview of HPC use cases and then a hands-on tutorial. Though some examples might seem specific to Calcul Québec, all computing clusters in the Compute Canada network share the same software modules and environments.
The lesson was given in the context of the BrainHack School 2020.
The Canadian Open Neuroscience Platform (CONP) Portal is a web interface that facilitates open science for the neuroscience community by simplifying global access to and sharing of datasets and tools. The Portal internalizes the typical cycle of a research project, beginning with data acquisition, followed by data processing with published tools, and ultimately the publication of results with a link to the original dataset.
In this video, Samir Das and Tristan Glatard give a short overview of the main features of the CONP Portal.
Shawn Brown presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance-computing facilities across Canada and around the world.
This talk was given in the context of a Ludmer Centre event in 2019.
In this presentation by the OHBM OpenScienceSIG, Tom Shaw and Steffen Bollmann cover how containers can be useful for running the same software on different platforms and sharing analysis pipelines with other researchers. They demonstrate how to build Docker containers from scratch using Neurodocker, and cover how to use containers on an HPC system with Singularity.
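As a rough, illustrative sketch of the workflow described in the talk, the snippet below writes a minimal Dockerfile from Python and notes (in comments) how the resulting image could be built and then pulled into Singularity on an HPC system. The base image, installed packages, and image names are assumptions made for this example; the presenters instead use Neurodocker to generate such files automatically.

```python
# Minimal, illustrative sketch: hand-writing a small neuroimaging Dockerfile.
# The lecture generates Dockerfiles like this with Neurodocker rather than by
# hand; the base image, packages, and tags here are assumptions.
dockerfile = """\
FROM ubuntu:22.04
RUN apt-get update && \\
    apt-get install -y --no-install-recommends python3 python3-pip && \\
    rm -rf /var/lib/apt/lists/*
RUN pip3 install nibabel nilearn
CMD ["python3"]
"""

with open("Dockerfile", "w") as f:
    f.write(dockerfile)

# Outside Python, the image could then be built locally and reused on an HPC
# system that provides Singularity/Apptainer (image names are hypothetical):
#   docker build -t my-analysis:latest .
#   docker push myregistry/my-analysis:latest
#   singularity pull docker://myregistry/my-analysis:latest
```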
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear perspectives on FAIR neuroscience from some of these stakeholders, who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: How is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?
This lecture covers FAIR atlases: their background, their construction, and how they can be created in line with the FAIR principles.
This module explains how neurons come together to create the networks that give rise to our thoughts. The totality of our neurons and their connections is called our connectome. Learn how this connectome changes as we learn and how it computes information. We will also learn about physiological phenomena of the brain, such as synchrony, which gives rise to brain waves.
This tutorial illustrates several ways to approach predictive modeling and machine learning with MATLAB.
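The tutorial itself works in MATLAB; purely as a language-agnostic illustration of the kind of predictive-modeling workflow it covers, the sketch below fits and cross-validates a simple classifier in Python with scikit-learn. The dataset and model are arbitrary choices for the example, not taken from the tutorial.

```python
# Minimal predictive-modeling sketch (Python/scikit-learn); the tutorial
# demonstrates the analogous steps in MATLAB. Dataset and estimator are
# arbitrary choices made for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Standardize features, then fit a regularized logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validated accuracy estimates out-of-sample performance.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```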
A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
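As a minimal taste of the kind of tooling such an overview surveys, the sketch below uses NumPy and pandas on a small made-up table; the specific libraries, column names, and values are assumptions for illustration, not the lecture's own examples.

```python
# A few lines illustrating the core scientific-Python stack often covered in
# such overviews (NumPy for arrays, pandas for tabular data); the data here
# are simulated purely for the example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Simulate a small tabular dataset of reaction times for two groups.
df = pd.DataFrame({
    "group": np.repeat(["control", "patient"], 50),
    "reaction_time": np.concatenate([
        rng.normal(0.45, 0.05, 50),   # control mean ~450 ms
        rng.normal(0.52, 0.07, 50),   # patient mean ~520 ms
    ]),
})

# Group-wise summary statistics, a typical first step in exploratory analysis.
print(df.groupby("group")["reaction_time"].agg(["mean", "std", "count"]))
```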
Introduction to the FAIR Principles and examples of applications of the FAIR Principles in neuroscience. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Next-generation science with Jupyter. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Introduction to reproducible research. The lecture provides an overview of the core skills and practical solutions required to practice reproducible research. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Much like neuroinformatics, data science uses techniques from computational science to derive meaningful results from large, complex datasets. In this session, we will explore the relationship between neuroinformatics and data science by highlighting a range of data science approaches and activities, from the development and application of statistical methods, through the establishment of communities and platforms, to the implementation of open-source software tools. Rather than being rigidly distinct, these activities and approaches intersect and interact in dynamic ways in the data science of neuroinformatics. Together with a panel of cutting-edge neuro-data-scientist speakers, we will explore these dynamics.
This lecture gives a description and brief history of data science and its use in neuroinformatics.
This lecture covers how brainlife.io works, and how it can be applied to neuroscience data.