
The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. This lesson introduces brain simulation in general and The Virtual Brain in particular. Prof. Ritter presents the newest approaches for clinical applications of The Virtual Brain - that is, for stroke, epilepsy, brain tumors, and Alzheimer's disease - and shows how brain simulation can improve diagnostics, therapy, and understanding of neurological disease.

Difficulty level: Beginner
Duration: 1:35:08
Speaker: Petra Ritter

The concept of neural masses, an application of mean-field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration into a coupled network are explained. Bifurcation analysis is presented as an important technique for understanding non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated in the exploration of brain stimulation regimes.
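
As a point of reference, a generic coupled neural-mass network (an illustrative form, not necessarily the one used in the lecture) can be written as

    \dot{x}_i(t) = f\bigl(x_i(t)\bigr) + k \sum_{j=1}^{N} C_{ij}\, g\bigl(x_j(t - \tau_{ij})\bigr), \qquad i = 1, \dots, N

where f describes the local neural-mass dynamics, C_ij and tau_ij are the connectome weights and delays, and the global coupling strength k is a typical control parameter in the bifurcation analyses mentioned above.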

Difficulty level: Beginner
Duration: 1:49:24
Speaker: Andreas Spiegler

The simulation of the virtual epileptic patient is presented as an example of how advanced brain simulation can serve as a translational approach that delivers improved results in the clinic. The fundamentals of epilepsy are explained, and on this basis the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. By the end, you will be able to perform simple epilepsy simulations on your own.
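
For orientation, a minimal region-level epilepsy simulation in The Virtual Brain might look like the sketch below. It follows the pattern of TVB's public demos; the model, coupling, and monitor choices are illustrative assumptions, not the lesson notebook's exact configuration.

    # Minimal sketch of an Epileptor-based simulation in The Virtual Brain (TVB).
    # All parameter values are assumptions; the lesson notebook may differ.
    import numpy as np
    from tvb.simulator import coupling, integrators, models, monitors, simulator
    from tvb.datatypes.connectivity import Connectivity

    conn = Connectivity.from_file()  # TVB's bundled default connectome

    sim = simulator.Simulator(
        model=models.Epileptor(),                         # phenomenological seizure model
        connectivity=conn,
        coupling=coupling.Difference(a=np.array([1.0])),  # difference coupling, as in TVB epilepsy demos
        integrator=integrators.HeunDeterministic(dt=0.05),
        monitors=(monitors.TemporalAverage(period=1.0),),
        simulation_length=1000.0,                         # milliseconds
    )
    sim.configure()

    (time, data), = sim.run()  # one (time, data) pair per monitor
    print(time.shape, data.shape)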

Difficulty level: Beginner
Duration: 1:28:53
Speaker: Julie Courtiol

A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Beginner
Duration: 1:16:36
Speaker: Tal Yarkoni

This book was written to introduce researchers and students in a variety of research fields to the intersection of data science and neuroimaging. It reflects our own experience of doing research at this intersection, and of working with students and collaborators who come from a variety of backgrounds and have a variety of reasons for wanting to use data science approaches in their work. The tools and ideas we chose to write about are all ones we have used in some way in our own research; many of them we use on a daily basis. This was important to us for a few reasons: first, we want to teach people things that we ourselves find useful. Second, it allowed us to write the book with a focus on solving specific analysis tasks: in many of the chapters, we walk you through ideas while implementing them in code and with data. We believe this is a good way to learn about data analysis, because it provides a connecting thread from scientific questions, through the data and its representation, to implementing specific answers to those questions. Finally, we find these ideas compelling and fruitful; that is why we were drawn to them in the first place. We hope that our enthusiasm for the ideas and tools described in this book will be infectious enough to convince readers of their value.


Difficulty level: Intermediate

This lecture and tutorial focus on measuring human functional brain networks. Both were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
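
As a taste of what measuring functional brain networks involves in practice, a common baseline (a generic sketch, not necessarily the pipeline used in this tutorial) is to correlate parcellated fMRI time series and threshold the result into a network:

    # Toy functional-connectivity estimate: correlate ROI time series,
    # then threshold into a binary network. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_rois = 200, 10
    ts = rng.standard_normal((n_timepoints, n_rois))  # stand-in for parcellated fMRI data

    fc = np.corrcoef(ts.T)                                # n_rois x n_rois correlation matrix
    adjacency = (fc > 0.3) & ~np.eye(n_rois, dtype=bool)  # crude threshold, no self-loops
    print(int(adjacency.sum()) // 2, "edges")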

Difficulty level: Intermediate
Duration: 50:44
Speaker: Caterina Gratton

A lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
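
The core BASC idea can be sketched in a few lines: cluster many bootstrap replicates of the data, record how often each pair of regions lands in the same cluster, and then cluster that stability matrix. The sketch below uses plain resampling of timepoints and k-means as simplified stand-ins for the tutorial's actual choices.

    # Toy bootstrap aggregation of stable clusters (BASC-style).
    # Timepoint resampling and k-means are illustrative stand-ins.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_timepoints, n_regions, k, n_boot = 100, 30, 3, 50
    data = rng.standard_normal((n_timepoints, n_regions))  # stand-in for fMRI data

    stability = np.zeros((n_regions, n_regions))
    for _ in range(n_boot):
        sample = data[rng.integers(0, n_timepoints, n_timepoints)]  # bootstrap timepoints
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(sample.T)
        stability += labels[:, None] == labels[None, :]             # co-clustering indicator
    stability /= n_boot

    # Consensus partition: cluster the stability matrix itself.
    consensus = KMeans(n_clusters=k, n_init=10).fit_predict(stability)
    print(consensus)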

Difficulty level: Advanced
Duration: 50:28
Speaker: Pierre Bellec

In this presentation by the OHBM OpenScienceSIG, Tom Shaw and Steffen Bollmann cover how containers can be useful for running the same software on different platforms and sharing analysis pipelines with other researchers. They demonstrate how to build Docker containers from scratch using Neurodocker, and cover how to use containers on an HPC with Singularity.


Difficulty level: Beginner
Duration: 01:21:59
Speakers: Tom Shaw and Steffen Bollmann

Serving as a good refresher, Shawn Grooms explains the math and logic concepts that are important for programmers to understand, including sets, propositional logic, conditional statements, and more.
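
For a flavor of the material, the same concepts map directly onto everyday Python (an illustrative pairing, not code from the video):

    # Sets, propositional logic, and conditionals in Python.
    A = {1, 2, 3}
    B = {3, 4}

    print(A | B)  # union -> {1, 2, 3, 4}
    print(A & B)  # intersection -> {3}

    p, q = True, False
    print((not (p and q)) == ((not p) or (not q)))  # De Morgan's law -> True

    if 7 in A | B:  # conditional on set membership
        print("7 is a member")
    else:
        print("7 is not a member")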


This compilation is courtesy of freeCodeCamp.

Difficulty level: Beginner
Duration: 01:00:07

Linear algebra is the branch of mathematics concerning linear equations and linear functions, and their representations through matrices and vector spaces. As such, it underlies a huge variety of analyses in the neurosciences. This lesson provides a useful refresher that will facilitate the use of Matlab, Octave, and various matrix-manipulation and machine-learning software.
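
Although the lesson targets Matlab and Octave, the same operations translate directly to Python's NumPy, as in this small illustrative aside:

    # Solving a small linear system A x = b.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    x = np.linalg.solve(A, b)     # exact solve; preferable to inverting A
    print(x)                      # -> [0.8 1.4]
    print(np.allclose(A @ x, b))  # verify the solution -> True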


This lesson was created by RootMath.

Difficulty level: Beginner
Duration: 01:21:30

An introduction to data management, manipulation, visualization, and analysis for neuroscience. Students will learn scientific programming in Python and use this to work with example data from areas such as cognitive-behavioral research, single-cell recording, EEG, and structural and functional MRI. Basic signal-processing techniques, including filtering, are covered. The course includes a Jupyter Notebook and video tutorials.
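
As a flavor of the signal-processing content, a basic filtering step might look like the following generic SciPy sketch (not the course's own notebook code; the sampling rate and cutoff are assumptions):

    # Low-pass filtering a noisy signal with a Butterworth filter.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250.0  # sampling rate in Hz, typical for EEG
    t = np.arange(0, 2, 1 / fs)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

    b, a = butter(4, 30.0, btype="low", fs=fs)  # 4th-order low-pass at 30 Hz
    clean = filtfilt(b, a, noisy)               # zero-phase filtering
    print(noisy.shape, clean.shape)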


Difficulty level: Beginner
Duration: 1:09:16
Speaker: Aaron J. Newman

This lecture covers an introduction to neuron anatomy and signaling, and different types of models, including the Hodgkin-Huxley model.
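
For reference, the Hodgkin-Huxley model mentioned here takes the standard form of a membrane equation plus first-order kinetics for the gating variables m, h, and n:

    C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar{g}_{L}\,(V - E_{L})

    \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}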

Difficulty level: Beginner
Duration: 1:23:01
Speaker: Gaute Einevoll

Computational models provide a framework for integrating data across spatial scales and for exploring hypotheses about the biological mechanisms underlying neuronal and network dynamics. However, as models increase in complexity, additional barriers emerge to the creation, exchange, and re-use of models. Successful projects have created standards for describing complex models in neuroscience and provide open-source tools to address these issues. This lecture provides an overview of these projects and makes a case for expanded use of these resources in support of reproducibility and validation of models against experimental data.

Difficulty level: Beginner
Duration: 1:00:39
Speaker: Sharon Crook

Next-generation science with Jupyter. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 50:28
Speaker: Elizabeth DuPre

Introduction to reproducible research. The lecture provides an overview of the core skills and practical solutions required to practice reproducible research. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Beginner
Duration: 1:25:17
Speaker: Fernando Perez

Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go? This lecture covers the biomedical researcher's perspective on FAIR data sharing and the importance of finding better ways to manage large datasets.

Difficulty level: Beginner
Duration: 10:51
Speaker: Adam Ferguson

Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go? This lecture covers multiple aspects of FAIR neuroscience data: what makes it unique, the challenges to making it FAIR, the importance of overcoming these challenges, and how data governance comes into play.

Difficulty level: Beginner
Duration: 14:56
Speaker: Damian Eke

Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and the challenges faced by the community in enacting these principles. This lecture covers the processes, benefits, and challenges involved in designing, collecting, and sharing FAIR neuroscience datasets.

Difficulty level: Beginner
Duration: 11:35

Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and the challenges faced by the community in enacting these principles. This lecture covers the benefits and difficulties involved in re-using open datasets, and how metadata is important to the process.

Difficulty level: Beginner
Duration: 11:20
Speaker: Elizabeth DuPre

Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?


This lecture provides an overview of Addgene, a tool that embraces the FAIR principles developed by members of the INCF community, covering Addgene's mission and available resources.

Difficulty level: Beginner
Duration: 12:05
Speaker: Joanne Kamens