This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health. 

Difficulty level: Intermediate
Duration: 1:47:22

This lecture provides an introduction to the Brain Imaging Data Structure (BIDS), a standard for organizing human neuroimaging datasets.

Difficulty level: Intermediate
Duration: 56:49
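
As a rough illustration of the kind of layout BIDS prescribes, here is a minimal sketch that builds a skeleton dataset for one subject with an anatomical and a functional scan. The subject label, task name, and version string are placeholder assumptions; consult the BIDS specification for the full naming rules and required sidecar metadata.

```python
# Minimal sketch of a BIDS-style directory skeleton (placeholders only).
from pathlib import Path
import json

root = Path("my_bids_dataset")
root.mkdir(parents=True, exist_ok=True)

# Required top-level metadata describing the dataset as a whole.
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "Example dataset", "BIDSVersion": "1.8.0"}, indent=2)
)

# Anatomical (T1-weighted) and functional (task fMRI) files, following the
# pattern sub-<label>/<datatype>/sub-<label>[_task-<label>]_<suffix>.<ext>.
for datatype, fname in [
    ("anat", "sub-01_T1w.nii.gz"),
    ("func", "sub-01_task-rest_bold.nii.gz"),
]:
    (root / "sub-01" / datatype).mkdir(parents=True, exist_ok=True)
    (root / "sub-01" / datatype / fname).touch()  # empty stand-ins for real NIfTI images
```
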

In this panel discussion, leading scientists, engineers, and philosophers discuss what brain-computer interfaces are and the unique scientific and ethical challenges they pose. The panel is hosted by Lynne Malcolm from ABC Radio National's All in the Mind program and features:

  • Dr Hannah Maslen, Deputy Director, Oxford Uehiro Centre for Practical Ethics, University of Oxford
  • Prof. Eric Racine, Director, Pragmatic Health Ethics Research Unit, Montreal Institute of Clinical Research
  • Prof Jeffrey Rosenfeld, Director, Monash Institute of Medical Engineering, Monash University
  • Dr Isabell Kiral-Kornek, AI and Life Sciences Researcher, IBM Research
  • A/Prof Adrian Carter, Neuroethics Program Coordinator, ARC Centre of Excellence for Integrative Brain Function

Difficulty level: Intermediate
Duration: 1:14:34

A panel of experts discusses the virtues and risks of our digital health data being captured and used by others in the age of Facebook, metadata retention laws, Cambridge Analytica, and rapidly evolving neuroscience. The discussion was moderated by Jon Faine, ABC Radio presenter. The panelists were:

  • Mr Sven Bluemmel, Victorian Information Commissioner
  • Prof Judy Illes, Neuroethics Canada, University of British Columbia, Order of Canada
  • Prof Mark Andrejevic, Professor of Media Studies, Monash University
  • Ms Vrinda Edan, Chief Operating Officer, Victorian Mental Illness Awareness Council

Difficulty level: Intermediate
Duration: 1:10:30

This is the Introductory Module to the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 50:17

This module covers the concepts of gradient descent and the backpropagation algorithm and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:51:03
Speaker: Yann LeCun
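
To make the two ideas concrete, here is a small NumPy sketch (an illustration of my own, not taken from the course materials): a one-hidden-layer network is fit to a toy regression problem, gradients are computed by hand with the chain rule (backpropagation), and the parameters are updated by gradient descent.

```python
import numpy as np

# Toy regression data: y = sin(x) at random points (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(x)

# One hidden layer with a tanh non-linearity.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(2000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)            # hidden activations
    y_hat = h @ W2 + b2                 # predictions
    loss = np.mean((y_hat - y) ** 2)    # mean squared error

    # Backward pass: apply the chain rule layer by layer (backpropagation).
    d_yhat = 2 * (y_hat - y) / len(x)
    dW2 = h.T @ d_yhat;  db2 = d_yhat.sum(axis=0)
    d_h = (d_yhat @ W2.T) * (1 - h ** 2)       # derivative through tanh
    dW1 = x.T @ d_h;     db1 = d_h.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1;  b1 -= lr * db1
    W2 -= lr * dW2;  b2 -= lr * db2

print(f"final training loss: {loss:.4f}")
```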

This lecture covers the concept of parameter sharing in recurrent and convolutional nets and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:59:47
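
As a back-of-the-envelope illustration of parameter sharing (the layer sizes below are arbitrary choices, not from the lecture), a PyTorch 1-D convolution reuses one small kernel at every position of the input, while a fully-connected layer producing a comparably sized output needs a separate weight for every input-output pair:

```python
import torch
import torch.nn as nn

# Same 16-channel output over a length-1000 signal, two different layers.
conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=5, padding=2)
fc = nn.Linear(in_features=1000, out_features=16 * 1000)

n_conv = sum(p.numel() for p in conv.parameters())  # 16*1*5 + 16 = 96 weights
n_fc = sum(p.numel() for p in fc.parameters())      # 1000*16000 + 16000 weights

x = torch.randn(1, 1, 1000)          # (batch, channels, length)
print(conv(x).shape, n_conv, n_fc)   # shared kernel: about five orders of magnitude fewer parameters
```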

This lecture covers convolutional nets in practice and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 51:40
Speaker: Yann LeCun

This lecture covers the properties of natural signals and convolutional nets in practice and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:09:12
Speaker: Alfredo Canziani

This lecture covers recurrent neural networks, both vanilla and gated (LSTM), and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:05:36
Speaker: Alfredo Canziani
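
As a quick, hedged sketch of the two architectures in PyTorch (toy shapes of my own choosing), note that both reuse the same cell at every time step; the LSTM adds gates and a cell state that help gradients survive over long sequences:

```python
import torch
import torch.nn as nn

# A toy batch of 8 sequences, each 20 steps long with 10 features per step.
x = torch.randn(8, 20, 10)

# Vanilla (Elman) RNN: one tanh cell applied at every time step.
rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)
out_rnn, h_n = rnn(x)                 # out_rnn: (8, 20, 32), h_n: (1, 8, 32)

# Gated recurrent net (LSTM): input/forget/output gates plus a cell state.
lstm = nn.LSTM(input_size=10, hidden_size=32, batch_first=True)
out_lstm, (h_last, c_last) = lstm(x)  # same output shape, extra cell state

print(out_rnn.shape, out_lstm.shape)
```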

This is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:51:30
Speaker: Yann LeCun
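
As a toy sketch of the joint embedding idea (architectures and sizes are arbitrary assumptions, not the lecture's models), two encoders map x and y into a shared space and the energy is simply the distance between the two embeddings, so compatible pairs get low energy:

```python
import torch
import torch.nn as nn

# Two small encoders into a shared 4-dimensional embedding space.
f_x = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))  # encodes x
g_y = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 4))  # encodes y

def energy(x, y):
    # Joint-embedding energy: squared distance between the two embeddings.
    return ((f_x(x) - g_y(y)) ** 2).sum(dim=-1)

x = torch.randn(32, 8)
y = torch.randn(32, 5)
print(energy(x, y).shape)  # one energy per (x, y) pair in the batch
```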

This lecture covers inference in latent-variable energy-based models (LV-EBMs) and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:01:04
Speaker: Alfredo Canziani
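
Loosely speaking, inference in a latent-variable EBM means searching for the latent configuration that minimizes the energy, z* = argmin_z E(x, y, z). The sketch below uses a made-up quadratic energy (purely illustrative, not the lecture's examples) and performs that minimization with gradient descent on z via autograd:

```python
import torch

# Toy latent-variable energy: E(x, y, z) = ||y - W z||^2 + ||x - V z||^2,
# with W, V and the observations drawn at random for illustration.
torch.manual_seed(0)
W, V = torch.randn(3, 2), torch.randn(4, 2)
x, y = torch.randn(4), torch.randn(3)

def energy(z):
    return ((y - W @ z) ** 2).sum() + ((x - V @ z) ** 2).sum()

# Inference: z* = argmin_z E(x, y, z), found here by gradient descent on z.
z = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.01)
for _ in range(500):
    opt.zero_grad()
    energy(z).backward()
    opt.step()

print("z* =", z.detach(), " E at z* =", energy(z).item())
```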

This panel discussion covers how energy-based models are used and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 10:42

This is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:48:53
Speaker: Yann LeCun

This tutorial covers training latent-variable energy-based models (LV-EBMs) and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:04:48
Speaker: Alfredo Canziani
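
As a hedged illustration of what training can look like for an LV-EBM (a toy scheme of my own, not the tutorial's recipe): the sketch below defines E(y, z) = ||y - Wz||^2 with a one-dimensional latent, solves the inner inference problem in closed form, and fits W by pushing down the free energy of observed data; the low-dimensional latent is what keeps the energy from collapsing to zero everywhere.

```python
import torch

# Toy latent-variable EBM with free energy F(y) = min_z ||y - W z||^2.
torch.manual_seed(0)
true_dir = torch.tensor([2.0, 1.0, 0.0])
data = torch.randn(512, 1) * true_dir + 0.05 * torch.randn(512, 3)  # points near one line

W = torch.randn(3, 1, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.05)

def free_energy(y):                        # y: (batch, 3)
    w = W.squeeze(-1)                      # (3,)
    z_star = (y @ w) / (w @ w)             # inference: argmin_z ||y - w z||^2, in closed form
    recon = z_star.unsqueeze(-1) * w       # reconstruction from the optimal latent
    return ((y - recon) ** 2).sum(dim=1)   # energy evaluated at z*

for step in range(300):
    batch = data[torch.randint(len(data), (64,))]
    loss = free_energy(batch).mean()       # push down the free energy of the data
    opt.zero_grad()
    loss.backward()
    opt.step()

print(W.detach().squeeze())  # should roughly align (up to sign and scale) with true_dir
```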