
Lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. A toy sketch of the bootstrap-aggregation idea follows this entry.

Difficulty level: Advanced
Duration: 50:28
Speaker: Pierre Bellec
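BASC has its own tooling, covered in the tutorials above, but the core bootstrap-aggregation idea fits in a few lines. The Python sketch below, with made-up data and arbitrary region, window, and cluster counts, resamples time points, clusters each bootstrap replicate with k-means, accumulates a region-by-region stability matrix, and clusters that matrix once more for a consensus parcellation. It illustrates the idea only; it is not the BASC implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: 200 brain regions x 500 time points (not real fMRI).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))

k, n_boot = 10, 50
stability = np.zeros((X.shape[0], X.shape[0]))

for _ in range(n_boot):
    # Bootstrap the time dimension, then cluster regions on the replicate.
    t = rng.integers(0, X.shape[1], X.shape[1])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X[:, t])
    # Co-assignment: 1 where two regions land in the same cluster.
    stability += (labels[:, None] == labels[None, :])

stability /= n_boot
# Consensus parcellation: cluster regions by their stability profiles.
parcels = KMeans(n_clusters=k, n_init=10).fit_predict(stability)
print(parcels[:20])
```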

Neuronify is an educational tool meant to create intuition for how neurons and neural networks behave. You can use it to combine neurons with different connections, just like the ones we have in our brain, and explore how changes in single cells lead to behavioral changes in important networks. Neuronify is based on an integrate-and-fire model of neurons, one of the simplest neuron models in existence. It focuses on the spike timing of a neuron and ignores the details of the action-potential dynamics. These neurons are modeled as simple RC circuits: when the membrane potential exceeds a certain threshold, a spike is generated and the voltage is reset to its resting potential. The spike then signals other neurons through synapses. A minimal integrate-and-fire sketch in Python follows this entry.

Neuronify aims to provide a low entry point to simulation-based neuroscience.

Difficulty level: Beginner
Duration: 01:25
Speaker: Neuronify
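Neuronify itself is a graphical application, but the integrate-and-fire model described above is compact enough to sketch directly. Below is a minimal leaky integrate-and-fire simulation with illustrative parameter values of our choosing, not Neuronify's actual source code.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative parameters).
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms), the RC of the circuit
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
I = 20.0          # constant input drive (arbitrary units)

v = v_rest
spikes = []
for step in range(int(100 / dt)):          # simulate 100 ms
    # RC-circuit dynamics: leak back toward rest, plus input drive.
    v += dt / tau * (-(v - v_rest) + I)
    if v >= v_thresh:                      # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_rest                         # reset to resting potential
print(f"{len(spikes)} spikes, first at (ms): {spikes[:5]}")
```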

This lecture covers linking neuronal activity to behavior using AI-based online detection.

Difficulty level: Beginner
Duration: 30:39

Much like neuroinformatics, data science uses techniques from computational science to derive meaningful results from large, complex datasets. In this session, we explore the relationship between neuroinformatics and data science by highlighting a range of data science approaches and activities, from the development and application of statistical methods, through the establishment of communities and platforms, to the implementation of open-source software tools. Rather than forming rigid distinctions, in the data science of neuroinformatics these activities and approaches intersect and interact in dynamic ways. Together with a panel of cutting-edge neuro-data-scientist speakers, we will explore these dynamics.


This lecture covers self-supervision as it relates to neural data tasks and the Mine Your Own vieW (MYOW) approach.

Difficulty level: Beginner
Duration: 25:50
Speaker: Eva Dyer

Estefany Suárez provides a conceptual overview of the rudiments of machine learning, including its bases in traditional statistics and the types of questions it might be applied to.


The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 01:22:18
Speaker: Estefany Suárez

Jake Vogel gives a hands-on, Jupyter-notebook-based tutorial to apply machine learning in Python to brain-imaging data.


The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 02:13:53
Speaker: Jake Vogel

Gael Varoquaux presents some advanced machine learning algorithms for neuroimaging, while addressing some real-world considerations related to data size and type.


The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 01:17:14
Speaker: Gael Varoquaux

This lesson from freeCodeCamp introduces scikit-learn, the most widely used machine learning library in Python; a minimal usage example follows this entry.

Difficulty level: Beginner
Duration: 02:09:22
Speaker: freeCodeCamp
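As a taste of the API the lesson teaches, here is the standard scikit-learn fit/predict round trip on one of the library's bundled toy datasets. The particular estimator and dataset are our choice for illustration, not necessarily the ones used in the video.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a bundled toy dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Every scikit-learn estimator follows the same fit/predict pattern.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```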

Dr. Guangyu Robert Yang describes how recurrent neural networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists. Compared with traditional computational models in neuroscience, RNNs can offer substantial advantages in explaining complex behavior and neural activity patterns, and they allow rapid generation of mechanistic hypotheses for cognitive computations. RNNs also provide a natural way to flexibly combine bottom-up biological knowledge with top-down computational goals in network models. However, early work in this approach faced fundamental challenges. In this talk, Dr. Yang discusses some of these challenges and several recent steps his group took to partly address them and to build next-generation RNN models for cognitive neuroscience. A toy sketch of the train-an-RNN-on-a-task recipe follows this entry.

Difficulty level: Beginner
Duration: 00:51:12
Speaker: Guangyu Robert Yang
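The talk concerns full-scale models, but the basic recipe, training an RNN on a cognitive task and reading out its behavior, fits in a short PyTorch sketch. The toy "report the sign of a remembered cue" task below is our invention for illustration and is not from the talk.

```python
import torch
import torch.nn as nn

# Toy working-memory task: after a delay, report whether the cue
# presented at t=0 was positive. Purely illustrative.
torch.manual_seed(0)
T, batch, hidden = 20, 64, 32

rnn = nn.RNN(input_size=1, hidden_size=hidden)
readout = nn.Linear(hidden, 1)
opt = torch.optim.Adam(
    list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)

for step in range(500):
    cue = torch.randn(1, batch, 1)                      # cue at t=0
    x = torch.cat([cue, torch.zeros(T - 1, batch, 1)])  # then silence
    target = (cue[0] > 0).float()                       # sign of the cue
    h, _ = rnn(x)
    pred = torch.sigmoid(readout(h[-1]))                # readout after delay
    loss = nn.functional.binary_cross_entropy(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.3f}")
```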

This lecture covers advanced concepts of energy-based models. It is a part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:54:22
Speaker: Yann LeCun

This lecture covers advanced concepts of energy-based models. It is a part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Beginner
Duration: 56:41
Speaker: Alfredo Canziani

This lecture covers advanced concepts of energy-based models. It is a part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:54:43
Speaker: Yann LeCun

This tutorial covers the progression from latent-variable energy-based models (LV-EBM) through target propagation to (vanilla, denoising, contractive, and variational) autoencoders. It is a part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:00:34
Speaker: Alfredo Canziani

This lecture covers advanced concepts of energy-based models. It is a part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 2:00:28
Speaker: Yann LeCun

This tutorial covers the concepts of autoencoders, denoising autoencoders, and variational autoencoders (VAE) with PyTorch, as well as generative adversarial networks and the accompanying code; a minimal denoising-autoencoder skeleton follows this entry. It is a part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, Energy based models V, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:07:50
Speaker: Alfredo Canziani
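For orientation before watching, this is roughly the skeleton a denoising autoencoder takes in PyTorch; the layer sizes, corruption scheme, and random stand-in data below are placeholders, not the tutorial's actual notebook.

```python
import torch
import torch.nn as nn

# Minimal denoising autoencoder (placeholder dimensions).
enc = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
dec = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(
    list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(64, 784)                      # stand-in for a batch of images
for _ in range(100):
    noisy = x + 0.3 * torch.randn_like(x)    # corrupt the input...
    recon = dec(enc(noisy))                  # ...encode and decode...
    loss = nn.functional.mse_loss(recon, x)  # ...reconstruct the clean x
    opt.zero_grad(); loss.backward(); opt.step()
print(f"reconstruction MSE: {loss.item():.4f}")
```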

This lecture covers advanced concepts of energy-based models. It is a part of the Associative memories module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, Energy based models V, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 2:00:28
Speaker: Yann LeCun

This tutorial covers advanced concepts of energy-based models. It is a part of the Associative memories module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.

Difficulty level: Advanced
Duration: 1:12:00
Speaker: Alfredo Canziani

This lecture provides an introduction to the problem of speech recognition using neural models, emphasizing the CTC loss for training and inference when input and output sequences are of different lengths; a shape-level example of the CTC loss follows this entry. It also covers the concept of beam search for use during inference, and how that procedure may be modeled at training time using a Graph Transformer Network. It is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:55:03
Speaker: Awni Hannun
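The lecture's central object, the CTC loss, is available directly in PyTorch as nn.CTCLoss. The fragment below only shows the shape bookkeeping when input and target sequences differ in length; the sizes are arbitrary toy values and the "network outputs" are random.

```python
import torch
import torch.nn as nn

# CTC loss with mismatched sequence lengths (arbitrary toy sizes).
T, N, C, S = 50, 4, 20, 10   # input steps, batch, classes (0 = blank), max target len

log_probs = torch.randn(T, N, C).log_softmax(dim=2)      # stand-in network outputs
targets = torch.randint(1, C, (N, S))                    # label sequences, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)    # each input is T steps long
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

# CTC marginalizes over all alignments of the short target to the long input.
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```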

This lecture covers the architecture and convolution operation of traditional convolutional neural networks, the characteristics of graphs and graph convolution, and spectral graph convolutional networks and how to perform spectral convolution. It also covers the complete spectrum of Graph Convolutional Networks (GCNs), starting with the implementation of spectral convolution through spectral networks, and then offers insights into the applicability of the other convolutional definition, template matching, to graphs, leading to spatial networks. A one-layer graph-convolution sketch follows this entry. This lecture is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 2:00:22
Speaker: Xavier Bresson
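As a concrete anchor for the spatial view discussed in the lecture, a single graph-convolution layer in the style of Kipf and Welling's GCN reduces to a normalized-adjacency matrix product; the random graph and dimensions below are purely illustrative.

```python
import numpy as np

# One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), on a random toy graph.
rng = np.random.default_rng(0)
n, d_in, d_out = 6, 8, 4

A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                 # symmetrize the adjacency
A_hat = A + np.eye(n)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))

H = rng.standard_normal((n, d_in))     # node features
W = rng.standard_normal((d_in, d_out)) # layer weights
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H_next.shape)                    # (6, 4): one new feature vector per node
```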

This tutorial covers the concept of graph convolutional networks and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 57:33
Speaker: Alfredo Canziani