The tutorial is intended primarily for beginners, but it will also be beneficial to experimentalists who understand electroencephalography and event-related techniques but need additional knowledge of the annotation, standardization, long-term storage, and publication of data.

Difficulty level: Beginner
Duration: 35:30

This lecture provides an overview of the Australian Electrophysiology Data Analytics Platform (AEDAPT): how it works, how to scale it, and how it fits into the FAIR ecosystem.

Difficulty level: Beginner
Duration: 18:56
Speaker: Tom Johnstone

This module covers many types of non-invasive neurotechnology and neuroimaging devices, including Electroencephalography (EEG), Electromyography (EMG), Electroneurography (ENG), Magnetoencephalography (MEG), functional Near-Infrared Spectroscopy (fNIRS), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT).

Difficulty level: Beginner
Duration: 13:36
Speaker: Harrison Canning

Lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Advanced
Duration: 50:28
Speaker: Pierre Bellec
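
For readers new to the method, the core idea behind bootstrap aggregation of stable clusters can be illustrated with a short, self-contained Python sketch. This is a rough approximation under simplifying assumptions (synthetic time series stand in for preprocessed fMRI, and plain k-means replaces the actual BASC pipeline presented in the tutorials); all names and parameters below are illustrative.

# Minimal sketch of the bootstrap-aggregated cluster-stability idea (BASC-like).
# Synthetic data stand in for preprocessed fMRI time series (time x voxels);
# parameter choices are illustrative assumptions, not the published pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_time, n_voxels, k, n_boot = 200, 60, 3, 50

# Three "true" networks, each driven by its own latent signal plus noise.
latents = rng.standard_normal((n_time, k))
labels_true = np.repeat(np.arange(k), n_voxels // k)
data = latents[:, labels_true] + 0.5 * rng.standard_normal((n_time, n_voxels))

# Bootstrap the time axis, cluster voxels on each replicate, and accumulate
# how often each pair of voxels lands in the same cluster.
stability = np.zeros((n_voxels, n_voxels))
for _ in range(n_boot):
    idx = rng.integers(0, n_time, n_time)                 # resample time points
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(data[idx].T)
    stability += (labels[:, None] == labels[None, :])
stability /= n_boot                                       # co-clustering frequency in [0, 1]

# Consensus parcellation: cluster the stability matrix itself.
consensus = KMeans(n_clusters=k, n_init=10).fit_predict(stability)
print(consensus)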

This lecture covers linking neuronal activity to behavior using AI-based online detection.

Difficulty level: Beginner
Duration: 30:39

This lesson gives an in-depth introduction to ethics in the field of artificial intelligence, particularly in the context of its impact on humans and the public interest. As the healthcare sector is increasingly affected by ever more powerful AI algorithms, this lecture covers key interests that must be protected going forward, including privacy, consent, human autonomy, inclusiveness, and equity.

Difficulty level: Beginner
Duration: 1:22:06
Speaker: Daniel Buchman

This lesson describes a definitional framework for fairness and health equity in the age of the algorithm. While acknowledging the impressive capability of machine learning to positively affect health equity, this talk outlines potential (and actual) pitfalls which come with such powerful tools, ultimately making the case for collaborative, interdisciplinary, and transparent science as a way to operationalize fairness in health equity. 

Difficulty level: Beginner
Duration: 1:06:35
Speaker: Laura Sikstrom

Estefany Suárez provides a conceptual overview of the rudiments of machine learning, including its bases in traditional statistics and the types of questions it might be applied to.

 

The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 01:22:18
Speaker: Estefany Suárez

Jake Vogel gives a hands-on, Jupyter-notebook-based tutorial to apply machine learning in Python to brain-imaging data.

 

The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 02:13:53
Speaker: Jake Vogel
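
As a hedged illustration of the kind of decoding workflow such a tutorial typically covers, the sketch below trains a cross-validated classifier on synthetic voxel-wise features with scikit-learn. It is a minimal stand-in rather than the notebook from the lesson: in practice the feature matrix would be extracted from real imaging files (for example with a masking tool such as nilearn), and all names and parameters here are illustrative assumptions.

# Minimal decoding sketch: predict an experimental condition from voxel-wise features.
# Synthetic arrays replace masked fMRI images (samples x voxels).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_voxels = 100, 500
X = rng.standard_normal((n_samples, n_voxels))
y = rng.integers(0, 2, n_samples)                 # two experimental conditions
X[y == 1, :20] += 0.8                             # inject a weak signal into 20 voxels

# Standardize features, fit a linear SVM, and evaluate with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")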

Gael Varoquaux presents some advanced machine learning algorithms for neuroimaging, while addressing some real-world considerations related to data size and type.

 

The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 01:17:14
Speaker: Gael Varoquaux

Dr. Guangyu Robert Yang describes how Recurrent Neural Networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists. Compared with traditional computational models in neuroscience, RNNs can offer substantial advantages in explaining complex behavior and neural activity patterns, and they allow rapid generation of mechanistic hypotheses for cognitive computations. RNNs also provide a natural way to flexibly combine bottom-up biological knowledge with top-down computational goals in network models. However, early work with this approach faced fundamental challenges. In this talk, Dr. Yang discusses some of these challenges and several recent steps taken to partly address them and to build next-generation RNN models for cognitive neuroscience.

Difficulty level: Beginner
Duration: 00:51:12
Speaker: Guangyu Robert Yang
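
To make the approach concrete, here is a toy sketch (not one of Dr. Yang's models) of a small recurrent network trained on a simple cognitive task: integrate a noisy scalar stimulus over time and report whether its underlying mean was positive or negative. The task, architecture, and hyperparameters are illustrative assumptions, written in PyTorch.

# Toy "cognitive task" for an RNN: integrate noisy evidence and report its sign.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_batch(batch_size=64, t_steps=30):
    sign = torch.randint(0, 2, (batch_size,))             # 0 = negative drift, 1 = positive
    mean = (sign.float() * 2 - 1) * 0.2                   # +/- 0.2 drift
    x = mean[None, :, None] + 0.5 * torch.randn(t_steps, batch_size, 1)
    return x, sign

rnn = nn.RNN(input_size=1, hidden_size=32)                # vanilla recurrent network
readout = nn.Linear(32, 2)                                # two-choice decision
optim = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    x, target = make_batch()
    _, h_last = rnn(x)                                    # hidden state after the stimulus stream
    loss = loss_fn(readout(h_last.squeeze(0)), target)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Quick held-out check of decision accuracy.
with torch.no_grad():
    x, target = make_batch(256)
    _, h_last = rnn(x)
    acc = (readout(h_last.squeeze(0)).argmax(1) == target).float().mean()
print(f"held-out accuracy: {acc:.2f}")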

This lecture covers advanced concepts of energy-based models. It is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Energy based models I, Energy based models II, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:54:22
Speaker: Yann LeCun
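
As a very loose sketch of the basic energy-based view (the lecture itself covers far more advanced material), the snippet below defines a small energy function E(x, y), trains it with a simple contrastive objective that pushes down the energy of observed pairs and pushes up the energy of sampled negatives, and then performs inference by minimizing the energy over y with gradient descent. Everything here is an illustrative assumption, not the course's formulation.

# Toy energy-based model: E(x, y) scores the compatibility of an input x
# and a candidate output y; lower energy means a better match.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Energy(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(), nn.Linear(64, 1))
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

dim = 8
E = Energy(dim)
opt = torch.optim.Adam(E.parameters(), lr=1e-3)
W = torch.randn(dim, dim)                           # synthetic structure to learn: y = x @ W

for step in range(500):
    x = torch.randn(128, dim)
    y_pos = x @ W                                   # observed (compatible) pairs
    y_neg = torch.randn(128, dim)                   # sampled negatives
    # Contrastive objective: low energy for observed pairs, margin for negatives.
    loss = E(x, y_pos).mean() + torch.relu(1.0 - E(x, y_neg)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: find y that minimizes E(x, y) by gradient descent on y.
x = torch.randn(1, dim)
y = torch.zeros(1, dim, requires_grad=True)
infer = torch.optim.SGD([y], lr=0.1)
for _ in range(200):
    infer.zero_grad()
    E(x, y).sum().backward()
    infer.step()
print(E(x, y).item())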

This lecture covers advanced concepts of energy-based models. It is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Energy based models I, Energy based models II, Energy based models III, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:54:43
Speaker: Yann LeCun

This lecture covers advanced concepts of energy-based models. It is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Energy based models I, Energy based models II, Energy based models III, Energy based models IV, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 2:00:28
Speaker: Yann LeCun

This lecture covers advanced concepts of energy-based models. It is part of the Associative memories module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Energy based models I, Energy based models II, Energy based models III, Energy based models IV, Energy based models V, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 2:00:28
Speaker: Yann LeCun