The Neuroimaging Data Model (NIDM) is a collection of specification documents that define extensions of the W3C PROV standard for the domain of human brain mapping. NIDM uses provenance information as a means to link components from different stages of the scientific research process, from dataset descriptors and computational workflows to derived data and publications.
This lesson provides a brief introduction to the Neuroscience Information Exchange (NIX) Format data model, which allows storing fully annotated scientific datasets, i.e., data combined with rich metadata and their relations in a consistent, comprehensive format.
This lecture provides an overview of successful open-access projects aimed at describing complex neuroscientific models, and makes a case for expanded use of resources in support of reproducibility and validation of models against experimental data.
This lecture provides an introduction to the Brain Imaging Data Structure (BIDS), a standard for organizing human neuroimaging datasets.
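To give a sense of what BIDS organization looks like, here is a minimal sketch of a BIDS-style dataset layout; the subject and task labels are made up for illustration, but the naming pattern follows the BIDS convention:

```
dataset_description.json
participants.tsv
sub-01/
    anat/
        sub-01_T1w.nii.gz
    func/
        sub-01_task-rest_bold.nii.gz
        sub-01_task-rest_bold.json
```

Filenames encode entities (subject, task, modality) as key-value pairs, which is what allows BIDS-aware tools to discover and validate data automatically.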
This lesson provides an overview of Neurodata Without Borders (NWB), an ecosystem for neurophysiology data standardization. The lecture also introduces some NWB-enabled tools.
This lesson outlines Neurodata Without Borders (NWB), a data standard for neurophysiology which provides neuroscientists with a common standard to share, archive, use, and build analysis tools for neurophysiology data.
In February 2020, the Canadian government published its "Roadmap for Open Science" to provide overarching principles and recommendations to guide Open Science activities in federally funded scientific research. It outlines broad guidelines for making science in Canada open to all while respecting privacy, security, ethical considerations, and appropriate intellectual property protection.
This lecture covers the rationale for developing the DAQCORD, a framework for the design, documentation, and reporting of data curation methods in order to advance the scientific rigour, reproducibility, and analysis of data.
This tutorial demonstrates how to use PyNN, a simulator-independent language for building neuronal network models, in conjunction with the neuromorphic hardware system SpiNNaker.
This talk introduces Bayes' theorem, which describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
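As a quick illustration of the theorem the talk covers, the following sketch applies Bayes' rule to a hypothetical screening scenario (the disease prevalence and test accuracy figures below are invented for the example):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical disease-screening numbers (assumptions, not real data):
p_disease = 0.01             # prior P(A): prevalence in the population
p_pos_given_disease = 0.95   # likelihood P(B|A): test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate P(B|not A)

# Total probability of a positive test, P(B), by marginalizing over A
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior P(A|B): probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ≈ 0.161
```

Even with a fairly accurate test, the low prior keeps the posterior modest, which is exactly the kind of prior-knowledge effect Bayes' theorem captures.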
This lesson recaps why math, in a number of ways, is extremely useful in data science.
This lesson provides an introduction to the lessons in this course that deal with statistics and why they are useful for data science.
In this lesson, users will learn about the importance of exploratory analysis, as well as how statistics enables researchers to become familiar with and understand their data.
This lesson goes over graphical data exploration, including motivations for its use as well as practical examples of visualizing data distributions.
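As a minimal, library-free sketch of the kind of distribution inspection the lesson describes, the snippet below bins simulated values and prints a text histogram (the data here are randomly generated for illustration):

```python
import random

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # simulated sample

# Bin the data and render a crude text histogram to see the distribution's shape.
n_bins = 8
lo, hi = min(data), max(data)
width = (hi - lo) / n_bins
counts = [0] * n_bins
for x in data:
    i = min(int((x - lo) / width), n_bins - 1)  # clamp x == hi into last bin
    counts[i] += 1

for i, c in enumerate(counts):
    left = lo + i * width
    print(f"{left:6.2f} | {'#' * (c // 10)}")
```

In practice a plotting library would replace the print loop, but the workflow is the same: bin, count, and look at the shape before computing summary statistics.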
In this lesson, users learn about exploratory statistics, and are introduced to various methods for numerical data exploration.
This lesson provides an overview of some simple descriptive statistics for summarizing data.
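For a concrete flavor of such descriptive summaries, Python's standard-library `statistics` module computes the common ones directly (the sample values below are arbitrary):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # arbitrary example values

print(statistics.mean(data))    # 5.0  (arithmetic mean)
print(statistics.median(data))  # 4.5  (middle value)
print(statistics.pstdev(data))  # 2.0  (population standard deviation)
```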
This lesson covers the basics of hypothesis testing.
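As a small sketch of the mechanics behind a basic hypothesis test, the code below computes a one-sample t statistic by hand against a null hypothesis of zero mean (the sample values are invented; in practice a statistics library would also return a p-value):

```python
import math
import statistics

# Hypothetical sample; null hypothesis H0: population mean = 0.
sample = [0.3, -0.1, 0.8, 0.5, 0.2, 0.9, -0.2, 0.6]
n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)        # sample standard deviation (n - 1 denominator)

# One-sample t statistic with n - 1 degrees of freedom
t = mean / (sd / math.sqrt(n))
print(f"mean = {mean:.3f}, t = {t:.3f}")
```

A large |t| relative to the t distribution with n - 1 degrees of freedom would lead one to reject the null hypothesis at the chosen significance level.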
In this lecture, the speaker demonstrates Neurokernel's module interfacing feature by using it to integrate independently developed models of olfactory and vision local processing units (LPUs) based upon experimentally obtained connectivity information.
This lesson describes the Neuroscience Gateway, which facilitates neuroscientists' access to and use of National Science Foundation High Performance Computing resources.
This lesson gives an introduction to high-performance computing with the Compute Canada network, first providing an overview of use cases for HPC and then a hands-on tutorial. Though some examples might seem specific to Calcul Québec, all computing clusters in the Compute Canada network share the same software modules and environments.