
Machine learning

Level: Beginner

These courses begin with the conceptual basics of machine learning and then move on to Python-based applications of popular supervised learning algorithms to neuroscience data. This is followed by a series of lectures exploring the history and applications of deep learning, ending with a presentation on the potential uses and misuses of deep learning in neuroscience.

Lessons in this Course
Lesson 1
Duration: 01:22:18

Estefany Suárez provides a conceptual overview of the rudiments of machine learning, including its bases in traditional statistics and the types of questions it might be applied to.

The lesson was presented in the context of the BrainHack School 2020.

Lesson 2
Duration: 02:13:53

Jake Vogel gives a hands-on, Jupyter-notebook-based tutorial on applying machine learning in Python to brain-imaging data (a sketch of such a workflow follows this entry).

The lesson was presented in the context of the BrainHack School 2020.
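
As a taste of the workflow this tutorial walks through, here is a minimal, self-contained sketch (not code from the notebook itself) of supervised learning on imaging-derived features with scikit-learn; the data are synthetic stand-ins for real neuroimaging features.

```python
# Minimal supervised-learning workflow: predict a label from
# brain-imaging-style features. Data here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))   # 100 subjects x 500 imaging-derived features
y = rng.integers(0, 2, size=100)  # binary label (e.g., patient vs. control)

# Standardize features, fit a linear classifier, score with 5-fold CV.
model = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```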

Lesson 3
Duration: 01:17:14

Gael Varoquaux presents advanced machine learning algorithms for neuroimaging, while addressing real-world considerations related to data size and type (one such consideration is illustrated after this entry).

The lesson was presented in the context of the BrainHack School 2020.
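
One real-world consideration of the kind this lecture addresses is the instability of accuracy estimates at neuroimaging-scale sample sizes. The sketch below is a generic illustration (not code from the lecture), assuming scikit-learn; it repeats cross-validation many times to expose the spread of scores rather than relying on a single split.

```python
# With small samples, single cross-validation scores are noisy:
# report the spread across many random splits, not one number.
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))   # small-n, wide-p: common in neuroimaging
y = rng.integers(0, 2, size=40)

cv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)
scores = cross_val_score(LinearSVC(), X, y, cv=cv)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```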

Lesson 4
Duration: 00:50:17

This is the introductory module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for the course include Introduction to Data Science or a graduate-level machine learning course.

Lesson 5
Duration: 01:51:03

This module covers gradient descent and the backpropagation algorithm and is part of the Deep Learning Course at CDS (see Lesson 4 for the course description and prerequisites).
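
To make the two ideas concrete, here is a minimal sketch, assuming PyTorch (the module's own examples may differ): autograd performs backpropagation to compute gradients of a loss, and a hand-written loop performs gradient descent on the parameters.

```python
import torch

# Toy linear regression: fit y = 3x + 1 from noisy samples.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn_like(x)

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lr = 0.1  # learning rate (step size)

for step in range(200):
    loss = ((x * w + b - y) ** 2).mean()  # mean squared error
    loss.backward()                       # backpropagation: fills w.grad, b.grad
    with torch.no_grad():
        w -= lr * w.grad                  # gradient descent step
        b -= lr * b.grad
        w.grad.zero_()                    # clear gradients for the next step
        b.grad.zero_()

print(w.item(), b.item())  # should approach 3 and 1
```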

Lesson 6
Duration: 01:01:53

This lecture covers the view of neural nets as alternating rotation (linear transformation) and squashing (pointwise nonlinearity) steps and is part of the Deep Learning Course at CDS (see Lesson 4 for the course description and prerequisites).
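
A minimal sketch of this framing, assuming PyTorch: each layer applies a linear map (a "rotation") followed by a pointwise nonlinearity such as tanh (a "squashing" into (-1, 1)). The architecture below is illustrative, not taken from the lecture.

```python
import torch
from torch import nn

net = nn.Sequential(
    nn.Linear(2, 16),  # rotate/stretch the 2-D input into 16 dimensions
    nn.Tanh(),         # squash each coordinate into (-1, 1)
    nn.Linear(16, 2),  # rotate/stretch again
    nn.Tanh(),         # squash again
)

x = torch.randn(5, 2)  # batch of five 2-D points
print(net(x))          # all outputs lie in (-1, 1)
```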

Lesson 7
Duration: 01:42:26

This lecture on modules and architectures is part of the Deep Learning Course at CDS (see Lesson 4 for the course description and prerequisites).

Lesson 8
Duration: 01:05:47

This lecture covers neural net training (tools, classification with neural nets, and a PyTorch implementation) and is part of the Deep Learning Course at CDS (see Lesson 4 for the course description and prerequisites).
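
For orientation, here is a minimal PyTorch classification training loop in the spirit of this module; the data, model, and hyperparameters are illustrative stand-ins rather than the lecture's own.

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 20)             # toy inputs
y = (X[:, 0] > 0).long()             # toy binary labels

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()            # clear old gradients
    loss = criterion(model(X), y)    # forward pass + loss
    loss.backward()                  # backpropagate
    optimizer.step()                 # gradient descent update

accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```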

Lesson 9
Duration: 01:35:12

Blake Richards gives an introduction to deep learning framed through the lens of inductive biases, emphasizing how to match deep learning methods to appropriate research questions.

The lesson was presented in the context of the BrainHack School 2020.

Lesson 10
Duration: 00:51:12

Dr. Guangyu Robert Yang describes how recurrent neural networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists. Compared with traditional computational models in neuroscience, RNNs offer substantial advantages in explaining complex behavior and neural activity patterns, and they allow rapid generation of mechanistic hypotheses for cognitive computations. RNNs also provide a natural way to flexibly combine bottom-up biological knowledge with top-down computational goals in network models. Early work with this approach, however, faced fundamental challenges. In this talk, Dr. Yang discusses some of these challenges and several recent steps his group has taken to partly address them and to build next-generation RNN models for cognitive neuroscience.
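
To make the approach concrete, here is a minimal sketch, assuming PyTorch, of a vanilla RNN trained on an invented toy "cognitive" task: report, at the end of a delay, whether a brief stimulus at the first time step was positive or negative. The talk's actual models and tasks are more sophisticated.

```python
import torch
from torch import nn

torch.manual_seed(0)
T, batch = 20, 128
stim = torch.randn(batch, 1)              # stimulus presented at t=0
inputs = torch.zeros(T, batch, 1)
inputs[0] = stim                          # silence during the delay
targets = (stim.squeeze(1) > 0).long()    # decision required after the delay

rnn = nn.RNN(input_size=1, hidden_size=32)  # vanilla recurrent network
readout = nn.Linear(32, 2)                  # maps final state to a choice
optimizer = torch.optim.Adam(
    list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    hidden_states, _ = rnn(inputs)        # run the full sequence
    logits = readout(hidden_states[-1])   # decode at the last time step
    loss = criterion(logits, targets)
    loss.backward()
    optimizer.step()

# Accuracy approaches 1.0 as training proceeds on this easy task.
print((logits.argmax(dim=1) == targets).float().mean())
```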