This introductory lesson welcomes users to the virtual learning series, explaining some of the background behind open-source miniscopes, as well as outlining the rest of the lessons in this course.
This lesson provides an overview of the Miniscope project, explaining the motivation behind Miniscope development, why Miniscopes may be useful for researchers, and the differences between previous and current versions.
This lesson will go through the theory and practical techniques for implanting a GRIN lens for imaging in mice.
This lesson provides instruction on how to build a Miniscope and stream data, including an overview of the software involved.
An introduction to data management, manipulation, visualization, and analysis for neuroscience. Students will learn scientific programming in Python, and use this to work with example data from areas such as cognitive-behavioral research, single-cell recording, EEG, and structural and functional MRI. Basic signal processing techniques including filtering are covered. The course includes a Jupyter Notebook and video tutorials.
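To give a flavor of the filtering techniques covered, here is a minimal sketch of bandpass filtering with SciPy; the sampling rate, cutoff frequencies, and simulated signal are illustrative assumptions, not values taken from the course materials.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative parameters (assumptions, not from the course)
fs = 250.0                      # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)     # 2 seconds of samples

# Simulated signal: a 10 Hz oscillation plus 60 Hz line noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# 4th-order Butterworth bandpass (1-30 Hz), applied forward and
# backward with filtfilt to avoid phase distortion
b, a = butter(4, [1.0, 30.0], btype="bandpass", fs=fs)
x_filtered = filtfilt(b, a, x)
```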
Hierarchical Event Descriptors (HED) fill a major gap in the neuroinformatics standards toolkit, namely the specification of the nature(s) of events and time-limited conditions recorded as having occurred during time series recordings (EEG, MEG, iEEG, fMRI, etc.). Here, the HED Working Group presents an online INCF workshop on the need for, structure of, tools for, and use of HED annotation to prepare neuroimaging time series data for storing, sharing, and advanced analysis.
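As a purely hypothetical illustration of what HED annotation looks like in practice, the snippet below attaches HED tag strings to two invented event codes in the style of a BIDS events.json sidecar; the event codes and tag choices are assumptions for this example, not drawn from the workshop.

```python
# Hypothetical HED annotations for event codes, in the style of a
# BIDS events.json sidecar (codes and tags are illustrative only)
hed_sidecar = {
    "trial_type": {
        "HED": {
            "stimulus": "Sensory-event, Visual-presentation, Experimental-stimulus",
            "response": "Agent-action, Participant-response",
        }
    }
}
```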
This lesson introduces concepts and practices surrounding reference atlases for the mouse and rat brains. It also discusses examples of data systems used to organize neuroscience data collections in the context of reference atlases, as well as analytical workflows applied to those data.
This talk describes the NIH-funded SPARC Data Structure, and how this project navigates ontology development while keeping the FAIR data principles in mind.
This lecture covers structured data, databases, federating neuroscience-relevant databases, and ontologies.
This lecture covers FAIR atlases, including their background and how they can be constructed in line with the FAIR principles.
This lesson describes the BrainHealth Databank, a repository of many types of health-related data, whose aim is to accelerate research, improve care, better understand and diagnose mental illness, and develop new treatments and prevention strategies.
This lesson corresponds to slides 46-78 of the PDF below.
This talk goes over Neurobagel, an open-source platform developed for improved dataset sharing and searching.
This lesson describes the current state of brain-computer interface (BCI) standards, including the obstacles currently hindering BCI standardization, as well as future steps aimed at solving this problem.
This lightning talk outlines the DataLad ecosystem for large-scale collaborations, and how DataLad addresses the challenges that may arise in such research efforts.
In this lightning talk, you will learn about BrainGlobe, an initiative which exists to facilitate the development of interoperable Python-based tools for computational neuroanatomy.
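As a hedged sketch of what BrainGlobe's interoperable Python tooling looks like, the snippet below queries an atlas through the BrainGlobe Atlas API; the package name, atlas name, and voxel coordinates are assumptions for illustration, not details taken from the talk.

```python
from brainglobe_atlasapi import BrainGlobeAtlas

# Load the 25 um Allen mouse brain atlas (downloaded on first use)
atlas = BrainGlobeAtlas("allen_mouse_25um")

# Look up the brain structure at an arbitrary voxel coordinate
# (illustrative indices, not from the talk)
acronym = atlas.structure_from_coords((150, 100, 200), as_acronym=True)
print(acronym)
```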
This is the second of three lectures on current challenges and opportunities facing neuroinformatics infrastructure for handling sensitive data.
This lesson provides an overview of how to conceptualize, design, implement, and maintain neuroscientific pipelines via Code Ocean, a cloud-based computational reproducibility platform.
This lesson provides an overview of how to construct computational pipelines for neurophysiological data using DataJoint.
This hands-on tutorial walks you through the DataJoint platform, highlighting features and schemas that can be used to build robust neuroscientific pipelines.
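To give a sense of what such a pipeline looks like, here is a minimal sketch of a DataJoint schema with a manual table and a computed table; the schema name, table names, attributes, and placeholder computation are invented for illustration and are not from the tutorial.

```python
import datajoint as dj

# Assumes database credentials are already set in dj.config;
# the schema name is hypothetical
schema = dj.schema("tutorial_pipeline")

@schema
class Session(dj.Manual):
    definition = """
    # A recording session (illustrative table)
    session_id   : int     # unique session identifier
    ---
    session_date : date    # date of recording
    """

@schema
class ActivityStats(dj.Computed):
    definition = """
    # Summary statistics derived from a session
    -> Session
    ---
    mean_rate : float      # placeholder summary value
    """

    def make(self, key):
        # A real pipeline would fetch raw data and compute here;
        # a constant keeps this sketch self-contained
        self.insert1(dict(key, mean_rate=0.0))
```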
This lesson provides an introduction to DataLad, a free and open-source distributed data management system that keeps track of your data, creates structure, ensures reproducibility, supports collaboration, and integrates with widely used data infrastructure.
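As a minimal sketch of the DataLad workflow using its Python API, the snippet below creates a dataset, adds a file, and saves the change; the paths and commit message are illustrative assumptions.

```python
import datalad.api as dl

# Create a new DataLad dataset (path is illustrative)
ds = dl.create(path="my-dataset")

# Add a file to the dataset's working tree
with open("my-dataset/notes.txt", "w", encoding="utf-8") as f:
    f.write("first entry\n")

# Record the change; DataLad keeps the full version history
dl.save(dataset="my-dataset", message="Add notes file")
```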