Félix-Antoine Fortin from Calcul Québec gives an introduction to high-performance computing with the Compute Canada network, first providing an overview of use cases for HPC and then a hands-on tutorial. Though some examples might seem specific to Calcul Québec, all computing clusters in the Compute Canada network share the same software modules and environments.


The lesson was given in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 02:49:34
Speaker: Félix-Antoine Fortin

Shawn Brown presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance-computing facilities across Canada and around the world.


This talk was given in the context of a Ludmer Centre event in 2019.


Difficulty level: Beginner
Duration: 56:07
Speaker: Shawn Brown

This course will teach you AWS basics right through to advanced cloud computing concepts. There are lots of hands-on exercises using an AWS free tier account to give you practical experience with Amazon Web Services. Visual slides and animations will help you gain a deep understanding of cloud computing.


This lesson is courtesy of freeCodeCamp.

Difficulty level: Beginner
Duration: 05:27:20
Speaker:

As a part of NeuroHackademy 2020, Tara Madhyastha (University of Washington), Andrew Crabb (AWS), and Ariel Rokem (University of Washington) give a lecture on cloud computing, focusing on Amazon Web Services.


This video is provided by the University of Washington eScience Institute.


Difficulty level: Beginner
Duration: 01:43:59
Speaker: Tara Madhyastha, Andrew Crabb, Ariel Rokem

This lecture covers advanced concepts of energy-based models. The lecture is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, and Introduction to Data Science or a graduate-level machine learning course. A brief illustrative sketch of the energy-minimization idea appears below.

Difficulty level: Beginner
Duration: 56:41
Speaker: Alfredo Canziani
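
For readers unfamiliar with the topic, the central idea behind energy-based models can be shown in a few lines: the model scores input/output pairs with an energy function, and inference means searching for the output that minimizes that energy. The toy energy function and PyTorch minimization loop below are illustrative assumptions, not material from the lecture.

import torch

def energy(x, y):
    # Toy quadratic energy: lowest when y is close to 2 * x (purely illustrative).
    return ((y - 2 * x) ** 2).sum()

x = torch.tensor([1.0, -0.5])             # observed input
y = torch.zeros(2, requires_grad=True)    # candidate output, refined by gradient descent
optimizer = torch.optim.SGD([y], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    e = energy(x, y)                      # inference = energy minimization over y
    e.backward()
    optimizer.step()

print(y.detach())                         # approaches [2.0, -1.0], the minimum-energy output for this x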

This tutorial covers the path from LV-EBMs to target propagation to (vanilla, denoising, contractive, and variational) autoencoders, and is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:00:34
Speaker: Alfredo Canziani

This tutorial covers the concepts of autoencoders, denoising autoencoders, and variational autoencoders (VAEs) with PyTorch, as well as generative adversarial networks, with accompanying code. It is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, Energy based models V, and Introduction to Data Science or a graduate-level machine learning course. A minimal PyTorch autoencoder sketch appears below.

Difficulty level: Advanced
Duration: 1:07:50
Speaker: Alfredo Canziani
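
As a companion to the description above, here is a minimal sketch of a vanilla autoencoder in PyTorch: an encoder compresses the input to a small code and a decoder reconstructs the input from it. The architecture, layer sizes, and random stand-in data are assumptions for illustration, not the tutorial's own code.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)              # stand-in batch; real data would come from a DataLoader
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)      # reconstruction loss against the input itself
    loss.backward()
    optimizer.step()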

This tutorial covers advanced concepts of energy-based models. The lecture is part of the Associative memories module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.

Difficulty level: Advanced
Duration: 1:12:00
Speaker: Alfredo Canziani

This tutorial covers the concept of graph convolutional networks and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course. A sketch of a single graph convolutional layer appears below.

Difficulty level: Advanced
Duration: 57:33
Speaker: Alfredo Canziani
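
To make the topic concrete, the sketch below implements one graph convolutional layer in the commonly used Kipf-and-Welling style: node features are mixed with their neighbours' through a normalized adjacency matrix with self-loops, then linearly projected. The toy three-node graph and layer sizes are assumptions, not course material.

import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        # adj: (N, N) adjacency matrix; h: (N, in_dim) node features
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(self.linear(a_norm @ h))

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy 3-node path graph
h = torch.randn(3, 4)
layer = GraphConvLayer(4, 2)
print(layer(adj, h).shape)                          # torch.Size([3, 2])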

This lecture covers the concept of model predictive control and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:10:22
Speaker: Alfredo Canziani

This lecture covers the concepts of emulation of kinematics from observations and training a policy. It is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:01:21
Speaker: Alfredo Canziani

This lecture covers the concept of predictive policy learning under uncertainty and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:14:44
Speaker: Alfredo Canziani

This lecture covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include: Modules 1-7 of this course and Introduction to Data Science or a graduate-level machine learning course. A short sketch of these update rules appears below.

Difficulty level: Advanced
Duration: 1:51:32
Speaker: Alfredo Canziani
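
The update rules themselves fit in a few lines. The sketch below contrasts plain gradient descent with classical momentum on an assumed toy quadratic loss; it illustrates the concepts named above and is not the lecture's own code.

import torch

def loss_fn(w):
    return ((w - 3.0) ** 2).mean()          # toy convex loss with its minimum at w = 3

w = torch.zeros(1, requires_grad=True)
velocity = torch.zeros(1)
lr, beta = 0.1, 0.9

for _ in range(100):
    loss = loss_fn(w)
    loss.backward()
    with torch.no_grad():
        velocity = beta * velocity + w.grad  # momentum accumulates past gradients
        w -= lr * velocity                   # plain gradient descent would use w -= lr * w.grad
    w.grad.zero_()

print(w.item())                              # converges toward 3.0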

As a part of NeuroHackademy 2021, Noah Benson gives an introduction to PyTorch, one of the two most common software packages for deep learning applications to the neurosciences. A short illustrative snippet appears below.

Difficulty level: Beginner
Duration: 00:50:40
Speaker: Noah Benson
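
For orientation, the snippet below shows the kind of PyTorch fundamentals such an introduction typically covers: creating tensors, letting autograd compute gradients, and taking a manual gradient step. The specific example is assumed, not drawn from the lecture.

import torch

x = torch.randn(3, 2)                    # a tensor, like a NumPy array but GPU-capable
w = torch.randn(2, 1, requires_grad=True)

y = x @ w                                # matrix multiplication builds a computation graph
loss = (y ** 2).mean()
loss.backward()                          # autograd fills in w.grad = d(loss)/d(w)

with torch.no_grad():
    w -= 0.01 * w.grad                   # one manual gradient-descent step
print(w.grad.shape)                      # torch.Size([2, 1])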

In this hands-on tutorial, Dr. Robert Guangyu Yang works through a number of coding exercises showing how RNNs can readily be used to study cognitive neuroscience questions, with a quick demonstration of how to train and analyze RNNs on various cognitive neuroscience tasks. Familiarity with Python and basic knowledge of PyTorch are assumed. A sketch of this pattern appears below.

Difficulty level: Beginner
Duration: 00:26:38
Speaker: Robert Guangyu Yang
Course:
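
The general pattern the tutorial demonstrates can be sketched as follows: define a small recurrent network in PyTorch and train it on a toy cognitive-style task, here reporting the sign of a stimulus presented at the first time step after a delay. The task, architecture, and hyperparameters are illustrative assumptions, not Dr. Yang's exercise code.

import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    def __init__(self, n_in=1, n_hidden=32, n_out=2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)                # h: (batch, time, hidden)
        return self.readout(h[:, -1])     # decision read out at the last time step

def make_batch(batch=64, steps=20):
    x = torch.zeros(batch, steps, 1)
    cue = torch.randn(batch)
    x[:, 0, 0] = cue                      # stimulus only at the first time step
    y = (cue > 0).long()                  # report its sign after the delay
    return x, y

model = SimpleRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    x, y = make_batch()
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()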

EyeWire is a game to map the brain. Players are challenged to map branches of a neuron from one side of a cube to the other in a 3D puzzle. Players scroll through the cube and reconstruct neurons with the help of an artificial intelligence algorithm developed at the Seung Lab at Princeton University. EyeWire gameplay advances neuroscience by helping researchers discover how neurons connect to process visual information.

Difficulty level: Beginner
Duration: 03:56
Speaker: EyeWire
Course:

Mozak is a scientific discovery game about neuroscience for citizen scientists and neuroscientists alike. Players help neuroscientists build models of brain cells and learn more about the brain through their efforts.

Difficulty level: Beginner
Duration: 00:43
Speaker: Mozak

This module explains how neurons come together to create the networks that give rise to our thoughts. The totality of our neurons and their connections is called our connectome. Learn how this connectome changes as we learn and how it computes information. We will also learn about physiological phenomena of the brain, such as synchronicity, which gives rise to brain waves.

Difficulty level: Beginner
Duration: 7:13
Speaker: Harrison Canning

Tutorial describing the basic search and navigation features of the Allen Mouse Brain Atlas.

Difficulty level: Beginner
Duration: 6:40
Speaker: Unknown

Tutorial describing the basic search and navigation features of the Allen Developing Mouse Brain Atlas.

Difficulty level: Beginner
Duration: 6:35
Speaker: Unknown