
Lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Advanced
Duration: 50:28
Speaker: Pierre Bellec
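BASC builds a parcellation by clustering many bootstrap replicates of the fMRI time series and keeping the cluster structure that is stable across replicates. The following is a minimal Python sketch of that idea on synthetic data; it is not the tutorial code, and the array sizes, the simple resampling scheme, and the use of scikit-learn's KMeans are assumptions made purely for illustration.

```python
# Minimal sketch of the BASC idea: cluster bootstrap replicates of the data, then
# measure how often each pair of voxels ends up in the same cluster (stability),
# and derive a consensus parcellation from that stability matrix.
# Synthetic data and scikit-learn's KMeans stand in for the real fMRI pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_clusters, n_boot = 200, 100, 5, 50
data = rng.standard_normal((n_voxels, n_timepoints))   # placeholder "fMRI" time series

stability = np.zeros((n_voxels, n_voxels))
for _ in range(n_boot):
    # resample time points with replacement (real BASC uses a block bootstrap)
    sample = data[:, rng.integers(0, n_timepoints, n_timepoints)]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(sample)
    stability += labels[:, None] == labels[None, :]
stability /= n_boot

# consensus parcellation: cluster the stability matrix itself
consensus = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stability)
print(consensus[:20])
```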

This tutorial covers the path from latent-variable energy-based models (LV-EBMs) to target propagation to (vanilla, denoising, contractive, and variational) autoencoders. It is a part of the Advanced energy based models modules of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:00:34
Speaker: Alfredo Canziani

This tutorial covers the concepts of autoencoders, denoising autoencoders, and variational autoencoders (VAEs) with PyTorch, as well as generative adversarial networks and accompanying code. It is a part of the Advanced energy based models modules of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, Energy based models V, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:07:50
Speaker: Alfredo Canziani
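As a companion to the entry above, here is a minimal PyTorch sketch of a (denoising) autoencoder: an encoder compresses the input, a decoder reconstructs it, and the loss compares the reconstruction of a corrupted input against the clean original. This is an illustrative sketch, not the course notebook; the layer sizes, noise level, and random stand-in batch are assumptions.

```python
# Minimal (denoising) autoencoder in PyTorch: encode, decode, and train the
# reconstruction of a corrupted input to match the clean input.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, dim_in=784, dim_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(dim_hidden, dim_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.rand(64, 784)                    # stand-in batch (e.g. flattened images)
x_noisy = x + 0.1 * torch.randn_like(x)    # corrupted input for the denoising variant

for _ in range(5):                         # a few gradient steps
    optimizer.zero_grad()
    loss = criterion(model(x_noisy), x)    # reconstruct the clean input
    loss.backward()
    optimizer.step()
print(loss.item())
```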

This tutorial covers advanced concepts of energy-based models. The lecture is a part of the Associative memories modules of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.

Difficulty level: Advanced
Duration: 1:12:00
Speaker: Alfredo Canziani

This tutorial covers the concept of graph convolutional networks (GCNs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 57:33
Speaker: Alfredo Canziani
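For context on the entry above, a single graph-convolution layer in the commonly used normalized-adjacency form computes H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W). The PyTorch sketch below implements that one layer on a toy three-node graph; it is illustrative only and not taken from the course materials.

```python
# One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
# applied to a toy three-node graph. Illustrative only.
import torch
from torch import nn

class GraphConv(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out, bias=False)   # the weight matrix W

    def forward(self, adj, features):
        a_hat = adj + torch.eye(adj.size(0))                   # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)              # D^{-1/2}
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(a_norm @ features))

adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])         # path graph on 3 nodes
features = torch.rand(3, 4)                # 4 input features per node
layer = GraphConv(4, 2)
print(layer(adj, features))                # 2 output features per node
```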

This lecture covers the concept of model predictive control and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:10:22
Speaker: Alfredo Canziani

This lecture covers the concepts of emulating kinematics from observations and training a policy. It is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:01:21
Speaker: Alfredo Canziani

This lecture covers the concept of predictive policy learning under uncertainty and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:14:44
Speaker: Alfredo Canziani

This lecture covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-7 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:51:32
Speaker: Alfredo Canziani
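The momentum update that the lecture above refers to can be written in two lines: a velocity accumulates an exponentially weighted sum of past gradients, and the weights move along that velocity. Below is a minimal NumPy sketch on a toy quadratic loss; the loss function, step size, and momentum coefficient are illustrative choices, not the lecture's.

```python
# SGD with classical momentum on a toy quadratic loss f(w) = 0.5 * ||w||^2.
# Update rule: v <- beta * v + grad(w);  w <- w - lr * v.
import numpy as np

def grad(w):                      # gradient of 0.5 * ||w||^2 is w itself
    return w

w = np.array([5.0, -3.0])
v = np.zeros_like(w)
lr, beta = 0.1, 0.9

for _ in range(200):
    v = beta * v + grad(w)        # accumulate a running sum of past gradients
    w = w - lr * v                # step along the accumulated direction
print(w)                          # approaches the minimum at the origin
```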

The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.

Difficulty level: Intermediate
Duration: 5:17
Speaker: Mike X. Cohen
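The modules in this series use MATLAB; the Python sketch below illustrates the same core steps on synthetic data: generating per-trial spike counts and averaging them by stimulus orientation to form a tuning curve. The orientations, firing rates, and Poisson counts are made up for illustration and are not the course data.

```python
# Synthetic spike-count example: Poisson spike counts per trial, averaged by
# stimulus orientation to form a tuning curve (the course does this in MATLAB).
import numpy as np

rng = np.random.default_rng(1)
orientations = np.repeat(np.arange(0, 180, 30), 20)     # 6 orientations x 20 trials
preferred = 90
rates = 5 + 15 * np.exp(-((orientations - preferred) / 30.0) ** 2)  # made-up tuning
spike_counts = rng.poisson(rates)                        # one spike count per trial

# orientation tuning curve: mean spike count at each orientation
for theta in np.unique(orientations):
    mean_count = spike_counts[orientations == theta].mean()
    print(f"{int(theta):3d} deg: {mean_count:.1f} spikes")
```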

The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.

Difficulty level: Intermediate
Duration: 5:31
Speaker: Mike X. Cohen

The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.

Difficulty level: Intermediate
Duration: 13:48
Speaker: Mike X. Cohen

The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorizations, indexing, and data visualization.

Difficulty level: Intermediate
Duration: 12:16
Speaker: Mike X. Cohen

In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.

Difficulty level: Intermediate
Duration: 12:16
Speaker: Mike X. Cohen
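The SSVEP modules use MATLAB; here is a Python sketch of the basic spectral-analysis step on a synthetic single channel: take the FFT, form a power spectrum, and read off alpha-band power and the dominant (flicker) frequency. The sampling rate, flicker frequency, and noise level are illustrative assumptions, not the course recordings.

```python
# Synthetic SSVEP example: one EEG channel with a 12 Hz flicker response plus noise;
# compute a power spectrum with the FFT and read off alpha-band and peak power.
import numpy as np

fs, duration, flicker_hz = 250, 10, 12          # sampling rate (Hz), seconds, stimulus
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(2)
eeg = 2 * np.sin(2 * np.pi * flicker_hz * t) + rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(eeg)) ** 2 / t.size  # one-sided power spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

alpha_power = power[(freqs >= 8) & (freqs <= 12)].mean()   # alpha-band power
peak_freq = freqs[np.argmax(power[1:]) + 1]                # dominant frequency (skip DC)
print(f"alpha power: {alpha_power:.1f}, peak: {peak_freq:.1f} Hz")
```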

In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.

Difficulty level: Intermediate
Duration: 13:39
Speaker: Mike X. Cohen

In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.

Difficulty level: Intermediate
Duration: 12:34
Speaker: Mike X. Cohen

In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.

Difficulty level: Intermediate
Duration: 9:10
Speaker: Mike X. Cohen

In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.

Difficulty level: Intermediate
Duration: 13:23
Speaker: Mike X. Cohen

In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.

Difficulty level: Intermediate
Duration: 12:36
Speaker: Mike X. Cohen

This module introduces computational neuroscience by simulating neurons according to the AdEx model. You will learn about generative modeling, dynamical systems, and FI curves. The MATLAB code introduces Live Scripts and functions.

Difficulty level: Intermediate
Duration: 8:21
Speaker: Mike X. Cohen
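The module's code is in MATLAB; the Python sketch below integrates the AdEx equations with forward Euler for one second and counts spikes at a few input currents, i.e. a few points of an F-I curve. The parameter values are generic illustrative defaults, not the ones used in the module.

```python
# Forward-Euler integration of the AdEx (adaptive exponential integrate-and-fire)
# model, counting spikes over one second at several input currents (an F-I curve).
import numpy as np

def adex_spike_count(I, T=1.0, dt=1e-4):
    C, gL, EL, VT, DT = 200e-12, 10e-9, -70e-3, -50e-3, 2e-3   # membrane parameters
    a, b, tau_w, V_reset, V_peak = 2e-9, 60e-12, 0.2, -58e-3, 0.0
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V > V_peak:                     # spike: reset voltage, bump adaptation
            V, w, spikes = V_reset, w + b, spikes + 1
    return spikes

for I in (0.2e-9, 0.4e-9, 0.6e-9):         # input currents in amperes
    print(f"I = {I * 1e9:.1f} nA -> {adex_spike_count(I)} spikes/s")
```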