
This lesson introduces the practical use of The Virtual Brain (TVB), both through its graphical user interface and via Python scripts. In the graphical user interface, you are guided through the data repository, simulator, phase plane exploration tool, connectivity editor, stimulus generator, and the provided analyses. The IPython notebooks implemented for TVB are also presented; since they are public, they can be used for further exploration of The Virtual Brain.
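
For readers who want to try the scripting route mentioned above, here is a minimal sketch of a region-level simulation using TVB's Python API, assuming the tvb-library package is installed; the specific model, coupling, integrator, and monitor choices are illustrative, not necessarily those used in the lesson.

```python
# A minimal sketch, assuming the tvb-library package is installed; model,
# coupling, integrator, and monitor choices here are illustrative.
from tvb.simulator.lab import *  # standard TVB scripting import (models, coupling, ...)

sim = simulator.Simulator(
    model=models.Generic2dOscillator(),                 # neural mass model per region
    connectivity=connectivity.Connectivity.from_file(), # TVB's bundled default connectome
    coupling=coupling.Linear(),                         # linear long-range coupling
    integrator=integrators.HeunDeterministic(dt=0.1),   # integration step (ms)
    monitors=(monitors.TemporalAverage(period=1.0),),   # temporally averaged output
)
sim.configure()

# Run 1000 ms of simulated time; each monitor yields (time, data) arrays.
(time, data), = sim.run(simulation_length=1000.0)
print(data.shape)  # (time points, state variables, regions, modes)
```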

Difficulty level: Beginner
Duration: 1:12:24
Speaker: Paul Triebkorn

Tutorial on collaborating with Git and GitHub. This tutorial was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 2:15:50
Speaker: Elizabeth DuPre

Colt Steele provides a comprehensive introduction to the command line and 50 popular Linux commands. This is a long course (nearly 5 hours) but well worth it if you are going to spend a good part of your career working from a terminal, which is likely if you are interested in flexibility, power, and reproducibility in neuroscience research.

This lesson is courtesy of freeCodeCamp.

Difficulty level: Beginner
Duration: 05:00:16
Speaker: Colt Steele

This lecture and tutorial focuses on measuring human functional brain networks. The lecture and tutorial were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 50:44
Speaker: Caterina Gratton

Neuronify is an educational tool meant to build intuition for how neurons and neural networks behave. You can use it to combine neurons with different connections, just like the ones we have in our brains, and explore how changes in single cells lead to behavioral changes in important networks. Neuronify is based on an integrate-and-fire model of neurons, one of the simplest neuron models that exists. It focuses on the spike timing of a neuron and ignores the details of the action potential dynamics. These neurons are modeled as simple RC circuits. When the membrane potential rises above a certain threshold, a spike is generated and the voltage is reset to its resting potential. This spike then signals other neurons through its synapses.

Neuronify aims to provide a low entry point to simulation-based neuroscience.
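
As a rough illustration of the integrate-and-fire idea described above (and not Neuronify's actual implementation), the following Python sketch integrates an RC-circuit membrane and resets the voltage whenever it crosses threshold; all parameter values are illustrative.

```python
# A minimal leaky integrate-and-fire sketch: RC-circuit membrane dynamics with
# a threshold-and-reset rule. Parameter values are illustrative only.
import numpy as np

dt, t_max = 0.1e-3, 0.5            # time step and total duration (s)
tau_m, R = 20e-3, 1e7              # membrane time constant (s) and resistance (ohm)
v_rest, v_thresh = -70e-3, -54e-3  # resting and threshold potentials (V)
i_in = 2e-9                        # constant input current (A)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # RC-circuit dynamics: dv/dt = (-(v - v_rest) + R * i_in) / tau_m
    v += dt * (-(v - v_rest) + R * i_in) / tau_m
    if v >= v_thresh:              # threshold crossing: emit a spike, reset the voltage
        spike_times.append(step * dt)
        v = v_rest

print(f"{len(spike_times)} spikes in {t_max:.1f} s")
```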

Difficulty level: Beginner
Duration: 01:25
Speaker: Neuronify

This lecture covers the concept of neural nets (rotation and squashing) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:01:53
Speaker: Alfredo Canziani

This lecture covers the concept of neural net training (tools, classification with neural nets, and PyTorch implementation) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:05:47
Speaker: Alfredo Canziani

This lecture covers the properties of natural signals and convolutional nets in practice, and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:09:12
Speaker: Alfredo Canziani

This lecture covers the concept of recurrent neural networks, vanilla and gated (LSTM), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:05:36
Speaker: Alfredo Canziani

This lecture covers the concept of inference in latent-variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter Sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:01:04
Speaker: Alfredo Canziani

This tutorial covers the concept of training latent-variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter Sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:04:48
Speaker: Alfredo Canziani

This lecture covers advanced concepts of energy-based models. The lecture is a part of the Advanced Energy-Based Models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Energy-Based Models I, Energy-Based Models II, Energy-Based Models III, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Beginner
Duration: 56:41
Speaker: Alfredo Canziani

As a part of NeuroHackademy 2021, Noah Benson gives an introduction to PyTorch, one of the two most common software packages for deep learning applications in the neurosciences.

Difficulty level: Beginner
Duration: 00:50:40
Speaker: Noah Benson

In this hands-on tutorial, Dr. Robert Guangyu Yang works through a number of coding exercises showing how RNNs can be easily used to study cognitive neuroscience questions, with a quick demonstration of how to train and analyze RNNs on various cognitive neuroscience tasks. Familiarity with Python and basic knowledge of PyTorch are assumed.
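
As a loose sketch of the kind of exercise described above (not Dr. Yang's tutorial code), the following PyTorch snippet trains a small RNN on a toy delayed-response task; the task design, network size, and hyperparameters are hypothetical.

```python
# A hypothetical delayed-response task: report the sign of an early stimulus
# after a delay. Everything here is an illustrative sketch, not tutorial code.
import torch
import torch.nn as nn

class CognitiveRNN(nn.Module):
    """Vanilla RNN with a linear readout at the final time step."""
    def __init__(self, n_input=1, n_hidden=64, n_output=2):
        super().__init__()
        self.rnn = nn.RNN(n_input, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_output)

    def forward(self, x):
        h, _ = self.rnn(x)              # hidden states for every time step
        return self.readout(h[:, -1])   # decision read out at the last step

def make_batch(batch_size=64, t_stim=5, t_total=30):
    """Toy trials: a +1 or -1 stimulus for the first t_stim steps, then silence."""
    signs = torch.randint(0, 2, (batch_size,))           # 0 = negative, 1 = positive
    x = torch.zeros(batch_size, t_total, 1)
    x[:, :t_stim, 0] = (signs.float() * 2 - 1).unsqueeze(1)
    return x, signs

model = CognitiveRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    x, y = make_batch()
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the RNN's hidden states can be analyzed much like neural recordings.
```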

Difficulty level: Beginner
Duration: 00:26:38
Speaker: Robert Guangyu Yang

DAQCORD is a framework for the design, documentation, and reporting of data curation methods, intended to advance the scientific rigour, reproducibility, and analysis of the data. This lecture covers the rationale for developing the framework and the process by which it was developed, and ends with a presentation of the framework itself. While the driving use case for DAQCORD was clinical traumatic brain injury research, the framework is applicable to clinical studies in other domains of clinical neuroscience research.

Difficulty level: Intermediate
Duration: 17:08
Speaker: Ari Ercole

Difficulty level: Beginner
Duration: 6:10
Speaker: MATLAB®

Difficulty level: Beginner
Duration: 15:10
Speaker: MATLAB®

Difficulty level: Beginner
Duration: 2:49
Speaker: MATLAB®

Difficulty level: Beginner
Duration: 6:10
Speaker: MATLAB®

This tutorial illustrates several ways to approach predictive modeling and machine learning with MATLAB.

Difficulty level: Beginner
Duration: 6:27
Speaker: MATLAB®