
Tutorial on collaborating with Git and GitHub. This tutorial was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 2:15:50
Speaker: Elizabeth DuPre

This lecture and tutorial focus on measuring human functional brain networks. The lecture and tutorial were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 50:44
Speaker: Caterina Gratton

Next generation science with Jupyter. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 50:28
Speaker: Elizabeth DuPre

This tutorial was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 1:26:02
Speaker: Ariel Rokem

Learn how to create a standard extracellular electrophysiology dataset in NWB using Python (see the sketch after this entry)

Difficulty level: Intermediate
Duration: 23:10
Speaker: Ryan Ly
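
For orientation, here is a minimal PyNWB sketch of the workflow this tutorial walks through. The file name, electrode count, and random data are placeholders, and exact arguments may vary across PyNWB versions:

    from datetime import datetime
    from dateutil import tz
    import numpy as np
    from pynwb import NWBFile, NWBHDF5IO
    from pynwb.ecephys import ElectricalSeries

    # Create the NWB file container with required session metadata
    nwbfile = NWBFile(
        session_description="extracellular recording demo",
        identifier="demo-ecephys-001",
        session_start_time=datetime(2021, 1, 1, tzinfo=tz.gettz("US/Pacific")),
    )

    # Register the recording device and an electrode group
    device = nwbfile.create_device(name="array", description="hypothetical silicon probe")
    group = nwbfile.create_electrode_group(
        name="shank0", description="demo shank", location="CA1", device=device
    )

    # Populate the electrodes table and build a region covering all channels
    for _ in range(4):
        nwbfile.add_electrode(x=0.0, y=0.0, z=0.0, imp=np.nan,
                              location="CA1", filtering="none", group=group)
    all_electrodes = nwbfile.create_electrode_table_region(
        region=list(range(4)), description="all four electrodes"
    )

    # Store raw voltage traces as an ElectricalSeries in acquisition
    raw = ElectricalSeries(
        name="ElectricalSeries",
        data=np.random.randn(1000, 4),
        electrodes=all_electrodes,
        rate=30000.0,
    )
    nwbfile.add_acquisition(raw)

    # Write the file to disk
    with NWBHDF5IO("ecephys_demo.nwb", mode="w") as io:
        io.write(nwbfile)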

Learn how to create a standard calcium imaging dataset in NWB using Python (see the sketch after this entry)

Difficulty level: Intermediate
Duration: 31:04
Speaker: Ryan Ly
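
A companion sketch for the calcium imaging case, again with placeholder names, wavelengths, and data, using the NWB ophys types OpticalChannel and TwoPhotonSeries:

    from datetime import datetime
    from dateutil import tz
    import numpy as np
    from pynwb import NWBFile, NWBHDF5IO
    from pynwb.ophys import OpticalChannel, TwoPhotonSeries

    nwbfile = NWBFile(
        session_description="calcium imaging demo",
        identifier="demo-ophys-001",
        session_start_time=datetime(2021, 1, 1, tzinfo=tz.gettz("US/Pacific")),
    )

    # Describe the microscope and the imaging plane being recorded
    device = nwbfile.create_device(name="microscope", description="hypothetical two-photon scope")
    channel = OpticalChannel(name="channel0", description="demo channel", emission_lambda=520.0)
    plane = nwbfile.create_imaging_plane(
        name="plane0",
        optical_channel=channel,
        description="demo imaging plane",
        device=device,
        excitation_lambda=920.0,
        imaging_rate=30.0,
        indicator="GCaMP6f",
        location="V1",
    )

    # Store the raw movie as a TwoPhotonSeries (placeholder frames)
    movie = TwoPhotonSeries(
        name="TwoPhotonSeries",
        data=np.random.rand(100, 64, 64),
        imaging_plane=plane,
        rate=30.0,
        unit="normalized amplitude",
    )
    nwbfile.add_acquisition(movie)

    with NWBHDF5IO("ophys_demo.nwb", mode="w") as io:
        io.write(nwbfile)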

Learn how to create a standard intracellular electrophysiology dataset in NWB (see the sketch after this entry)

Difficulty level: Intermediate
Duration: 20:23
Speaker: Pamela Baker
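
One possible route through this material, sketched with a recent PyNWB (2.x); the device, electrode, and data values below are placeholders, not the tutorial's own example:

    from datetime import datetime
    from dateutil import tz
    import numpy as np
    from pynwb import NWBFile, NWBHDF5IO
    from pynwb.icephys import CurrentClampSeries, CurrentClampStimulusSeries

    nwbfile = NWBFile(
        session_description="intracellular recording demo",
        identifier="demo-icephys-001",
        session_start_time=datetime(2021, 1, 1, tzinfo=tz.gettz("US/Pacific")),
    )

    # Register the amplifier and the intracellular (patch) electrode
    device = nwbfile.create_device(name="amplifier", description="hypothetical patch-clamp rig")
    electrode = nwbfile.create_icephys_electrode(
        name="elec0", description="demo patch electrode", device=device
    )

    # Pair an injected-current stimulus with the recorded membrane potential
    stimulus = CurrentClampStimulusSeries(
        name="stimulus", data=np.zeros(1000), electrode=electrode,
        starting_time=0.0, rate=20000.0, gain=1.0,
    )
    response = CurrentClampSeries(
        name="response", data=np.random.randn(1000), electrode=electrode,
        starting_time=0.0, rate=20000.0, gain=1.0,
    )
    nwbfile.add_stimulus(stimulus)
    nwbfile.add_acquisition(response)

    with NWBHDF5IO("icephys_demo.nwb", mode="w") as io:
        io.write(nwbfile)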

Learn how to use the icephys-metadata extension to enter metadata detailing your experimental paradigm

Difficulty level: Intermediate
Duration: 27:18
Speaker: Oliver Ruebel

Learn how to create a standard extracellular electrophysiology dataset in NWB using MATLAB

Difficulty level: Intermediate
Duration: 45:46
Speaker: Ben Dichter

Learn how to create a standard calcium imaging dataset in NWB using MATLAB

Difficulty level: Intermediate
Duration: 39:10
Speaker: Ben Dichter

Learn how to create a standard intracellular electrophysiology dataset in NWB

Difficulty level: Intermediate
Duration: 20:22
Speaker: Pamela Baker

Overview of the Brainstorm package for analyzing extracellular electrophysiology, including preprocessing, spike sorting, trial alignment, and spectrotemporal decomposition

Difficulty level: Intermediate
Duration: 47:47

Overview of the CaImAn package, and a demonstration of its use with NWB (see the sketch after this entry)

Difficulty level: Intermediate
Duration: 44:37
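
By way of illustration, a heavily condensed CaImAn sketch. The input file name and parameter values are assumptions, and the full pipeline shown in the video also includes motion correction, memory mapping, and component evaluation:

    import caiman as cm
    from caiman.source_extraction.cnmf import cnmf, params

    # Load a small two-photon movie (hypothetical file)
    movie = cm.load("two_photon_movie.tif")

    # Set a few core CNMF parameters (values here are illustrative only)
    opts = params.CNMFParams(params_dict={
        "fr": 30,          # imaging rate in Hz
        "p": 1,            # order of the autoregressive model for calcium dynamics
        "gSig": [4, 4],    # expected half-size of neurons in pixels
    })

    # Run constrained non-negative matrix factorization for source extraction
    cnm = cnmf.CNMF(n_processes=1, params=opts)
    cnm = cnm.fit(movie)

    # Denoised temporal traces, one row per extracted component
    print(cnm.estimates.C.shape)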

Overview of the SpikeInterface package, including demonstration of data loading, preprocessing, spike sorting, and comparison of spike sorters (see the sketch after this entry)

Difficulty level: Intermediate
Duration: 1:10:28
Speaker: Alessio Buccino
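
A compact sketch of that workflow with the current spikeinterface package; the file path, filter settings, and sorter names are assumptions, and each sorter must be installed separately:

    import spikeinterface.extractors as se
    import spikeinterface.preprocessing as spre
    import spikeinterface.sorters as ss
    import spikeinterface.comparison as sc

    # Load an extracellular recording from an NWB file (hypothetical path)
    recording = se.NwbRecordingExtractor("ecephys_demo.nwb")

    # Preprocess: band-pass filter, then common median reference
    recording = spre.bandpass_filter(recording, freq_min=300, freq_max=6000)
    recording = spre.common_reference(recording, operator="median")

    # Run two sorters on the same preprocessed recording
    sorting_a = ss.run_sorter("tridesclous", recording)
    sorting_b = ss.run_sorter("spykingcircus", recording)

    # Compare the two outputs unit by unit; the comparison object
    # exposes matched units and agreement scores between the sorters
    comparison = sc.compare_two_sorters(sorting_a, sorting_b)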

Overview of the NWBWidgets package, including coverage of different data types and guidance on building custom widgets within this framework (see the sketch after this entry)

Difficulty level: Intermediate
Duration: 47:15
Speaker: Ben Dichter
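
The core entry point is a single function; a minimal usage sketch (file path assumed), run inside a Jupyter notebook:

    from pynwb import NWBHDF5IO
    from nwbwidgets import nwb2widget

    # Open an NWB file (hypothetical path) and read its contents lazily
    io = NWBHDF5IO("ecephys_demo.nwb", mode="r")
    nwbfile = io.read()

    # Render an interactive, hierarchical widget view of every group in the file
    nwb2widget(nwbfile)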

DAQCORD is a framework for the design, documentation, and reporting of data curation methods, intended to advance the scientific rigour, reproducibility, and analysis of the data. This lecture covers the rationale for developing the framework and the process by which it was developed, and ends with a presentation of the framework itself. While the driving use case for DAQCORD was clinical traumatic brain injury research, the framework is applicable to clinical studies in other domains of clinical neuroscience research.

Difficulty level: Intermediate
Duration: 17:08
Speaker: Ari Ercole

This book was written to introduce researchers and students in a variety of research fields to the intersection of data science and neuroimaging. It reflects our own experience of doing research at this intersection and of working with students and collaborators who come from a variety of backgrounds and have a variety of reasons for wanting to use data science approaches in their work. The tools and ideas we chose to write about are all ones we have used in some way in our own research, many of them on a daily basis.

This was important to us for a few reasons. The first is that we want to teach people things that we ourselves find useful. Second, it allowed us to write the book with a focus on solving specific analysis tasks: in many of the chapters you will see that we walk you through ideas while implementing them in code and with data. We believe this is a good way to learn about data analysis, because it provides a connecting thread from scientific questions, through the data and its representation, to implementing specific answers to those questions. Finally, we find these ideas compelling and fruitful; that is why we were drawn to them in the first place. We hope that our enthusiasm for the ideas and tools described in this book will be infectious enough to convince readers of their value.

Difficulty level: Intermediate