Introduction to the Brain Imaging Data Structure (BIDS): a standard for organizing human neuroimaging datasets. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 56:49
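
As a rough illustration of what the standard prescribes, the sketch below lists the files of a hypothetical minimal BIDS dataset; the subject ID, task name, and file names are invented for the example and are not taken from the lecture.

# Hypothetical minimal BIDS layout (illustrative only): one folder per subject,
# one subfolder per modality, and a JSON sidecar carrying metadata for each image.
minimal_bids_layout = [
    "dataset_description.json",                  # required dataset-level metadata
    "participants.tsv",                          # one row of demographics per subject
    "sub-01/anat/sub-01_T1w.nii.gz",             # anatomical (T1-weighted) image
    "sub-01/anat/sub-01_T1w.json",               # sidecar describing the T1w scan
    "sub-01/func/sub-01_task-rest_bold.nii.gz",  # functional (BOLD) run
    "sub-01/func/sub-01_task-rest_bold.json",    # sidecar with TR, task name, etc.
]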

This lecture and tutorial focus on measuring human functional brain networks. The lecture and tutorial were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate
Duration: 50:44
Speaker: Caterina Gratton

A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Beginner
Duration: 1:16:36
Speaker: Tal Yarkoni

This is the Introductory Module to the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 50:17

This module covers the concepts of gradient descent and the backpropagation algorithm and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:51:03
Speaker: Yann LeCun
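
As a minimal, hedged illustration of the gradient descent update discussed in this module (not code from the lecture; the data and variable names are invented, and NumPy is assumed), the sketch below fits a least-squares model by repeatedly stepping against the gradient.

# Gradient descent on mean squared error: w <- w - lr * dL/dw
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # toy design matrix
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)                            # parameters to learn
lr = 0.1                                   # learning rate (step size)
for step in range(200):
    grad = 2 / len(y) * X.T @ (X @ w - y)  # gradient of the mean squared error
    w -= lr * grad                         # descend along the negative gradient
print(w)                                   # should land close to true_w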

This lecture covers the concept of parameter sharing in recurrent and convolutional nets and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:59:47
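
As a small, hedged illustration of the parameter-sharing idea (not code from the lecture; the signal and kernel values are invented, and NumPy is assumed), the sketch below applies one shared three-weight kernel at every position of a toy 1-D signal.

# Parameter sharing in a 1-D convolution: the same kernel w is reused everywhere,
# so the number of parameters does not grow with the length of the input.
import numpy as np

x = np.array([0., 1., 2., 3., 4., 5.])   # toy 1-D input signal
w = np.array([0.25, 0.5, 0.25])          # a single shared kernel (3 parameters)

out = np.array([w @ x[i:i + len(w)] for i in range(len(x) - len(w) + 1)])
print(out)                               # [1. 2. 3. 4.]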

This lecture covers the concept of convolutional nets in practice and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 51:40
Speaker: Yann LeCun

This lecture covers the properties of natural signals and convolutional nets in practice and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:09:12
Speaker: Alfredo Canziani

This lecture covers the concept of recurrent neural networks, both vanilla and gated (LSTM), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:05:36
Speaker: Alfredo Canziani
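
As a minimal, hedged illustration of the vanilla recurrence covered here (not code from the lecture; the sizes and weights are invented, and NumPy is assumed), the sketch below runs one tanh RNN cell over a toy sequence, reusing the same weights at every time step.

# Vanilla RNN step: h_t = tanh(W_h @ h_{t-1} + W_x @ x_t + b)
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3
W_h = rng.normal(scale=0.5, size=(hidden_size, hidden_size))
W_x = rng.normal(scale=0.5, size=(hidden_size, input_size))
b = np.zeros(hidden_size)

h = np.zeros(hidden_size)                     # initial hidden state
for x_t in rng.normal(size=(5, input_size)):  # a toy sequence of 5 input vectors
    h = np.tanh(W_h @ h + W_x @ x_t + b)      # the same weights are shared across steps
print(h)                                      # final hidden state summarizing the sequence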

This foundational lecture covers energy-based models, with a particular focus on the joint embedding method and latent variable energy-based models (LV-EBMs), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:51:30
Speaker: Yann LeCun

This lecture covers the concept of inference in latent variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:01:04
Speaker: Alfredo Canziani
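
As a rough, hedged sketch of the inference procedure this lecture describes (minimizing the energy over the latent variable and the output), the toy example below grid-searches a latent rotation angle z and a candidate output y; the energy function and all names are invented for illustration, and NumPy is assumed.

# Toy LV-EBM inference: F(x, y) = min_z E(x, y, z), prediction = argmin_y F(x, y)
import numpy as np

def energy(x, y, z):
    # invented toy energy: y should match x rotated by the latent angle z
    pred = np.array([np.cos(z) * x[0] - np.sin(z) * x[1],
                     np.sin(z) * x[0] + np.cos(z) * x[1]])
    return np.sum((y - pred) ** 2)

x = np.array([1.0, 0.0])
z_grid = np.linspace(-np.pi, np.pi, 181)
y_grid = [np.array([np.cos(a), np.sin(a)]) for a in np.linspace(-np.pi, np.pi, 181)]

free_energy = [min(energy(x, y, z) for z in z_grid) for y in y_grid]  # F(x, y)
y_hat = y_grid[int(np.argmin(free_energy))]   # one of many low-energy outputs:
print(y_hat)                                  # the latent z lets a whole set of y's fit x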

This foundational lecture covers energy-based models, with a particular focus on the joint embedding method and latent variable energy-based models (LV-EBMs), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:48:53
Speaker: Yann LeCun

This tutorial covers the concept of training latent variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:04:48
Speaker: Alfredo Canziani

This lecture covers advanced concepts of energy-based models. The lecture is a part of the Advanced energy-based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy-based models I, Energy-based models II, Energy-based models III, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Beginner
Duration: 56:41
Speaker: Alfredo Canziani

Blake Richards gives an introduction to deep learning, with a perspective via inductive biases and an emphasis on correctly matching deep learning to the right research questions. The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 01:35:12
Speaker: Blake Richards

Introduction to neurons, synaptic transmission, and ion channels.

Difficulty level: Beginner
Duration: 46:07

Introduction to the types of glial cells, homeostasis (influence of cerebral blood flow and influence on neurons), insulation and protection of axons (myelin sheath; nodes of Ranvier), microglia and reactions of the CNS to injury.

Difficulty level: Beginner
Duration: 40:32

This lecture covers: integrating information within a network, modulating and controlling networks, functions and dysfunctions of hippocampal networks, and the integrative network controlling sleep and arousal.

Difficulty level: Beginner
Duration: 47:05

This lecture focuses on the comprehension of nociception and pain sensation. It highlights how the somatosensory system and different molecular partners are involved in nociception, how nociception and pain sensation are studied in rodents and humans, and how pain therapies are developed.

Difficulty level: Beginner
Duration: 28:09
Speaker: Serena Quarta

How genetics can contribute to our understanding of psychiatric phenotypes.

Difficulty level: Beginner
Duration: 55:15
Speaker: Sven Cichon