Course:

This lecture and tutorial focus on measuring human functional brain networks. Both were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate

Duration: 50:44

Speaker: Caterina Gratton

Course:

Lecture on functional brain parcellations and a set of tutorials on bootstrap analysis of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Advanced

Duration: 50:28

Speaker: Pierre Bellec
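
For a feel for the method before watching, the core of BASC can be sketched in a few lines: cluster bootstrap replicates of the data, accumulate a stability matrix counting how often pairs of regions co-cluster, then cluster that matrix. The sketch below uses synthetic data and scikit-learn's KMeans; the real BASC pipeline involves many more steps (e.g., a circular block bootstrap and multi-level consensus), so this is illustrative only.

```python
# Minimal sketch of the core BASC idea on toy data: cluster bootstrap
# replicates, accumulate a stability matrix, then cluster that matrix.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))      # 100 "regions" x 50 "time points" (synthetic)
n_regions, n_clusters, n_boot = X.shape[0], 5, 100

stability = np.zeros((n_regions, n_regions))
for _ in range(n_boot):
    # Bootstrap-resample time points (real BASC uses a circular block bootstrap)
    idx = rng.integers(0, X.shape[1], X.shape[1])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[:, idx])
    # Count how often each pair of regions lands in the same cluster
    stability += (labels[:, None] == labels[None, :])

stability /= n_boot
# Consensus parcellation: cluster the rows of the stability matrix
consensus = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stability)
print(consensus)
```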

Course:

Introduction to the central concepts of machine learning and how they can be applied in Python using the scikit-learn package. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate

Duration: 2:22:28

Speaker: Jake Vanderplas
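
The central abstraction in scikit-learn is its uniform estimator API: instantiate with hyperparameters, fit on training data, score or predict on held-out data. A minimal sketch on a built-in dataset (the dataset and model are stand-ins, not taken from the lecture):

```python
# The scikit-learn estimator API: instantiate, fit, evaluate.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # hyperparameters at construction
model.fit(X_train, y_train)                # learn parameters from data
print(model.score(X_test, y_test))         # mean accuracy on held-out data
```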

Much like neuroinformatics, data science uses techniques from computational science to derive meaningful results from large, complex datasets. In this session, we will explore the relationship between neuroinformatics and data science through a range of data science approaches and activities: the development and application of statistical methods, the establishment of communities and platforms, and the implementation of open-source software tools. Rather than remaining rigidly distinct, in the data science of neuroinformatics these activities and approaches intersect and interact in dynamic ways. Together with a panel of cutting-edge neuro-data-scientists, we will explore these dynamics.

This lecture covers self-supervision as it relates to neural data tasks and the Mine Your Own vieW (MYOW) approach.

Difficulty level: Beginner

Duration: 25:50

Speaker: Eva Dyer

Course:

Estefany Suárez provides a conceptual overview of the rudiments of machine learning, including its bases in traditional statistics and the types of questions it might be applied to.

The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner

Duration: 01:22:18

Speaker:

Course:

Jake Vogel gives a hands-on, Jupyter-notebook-based tutorial to apply machine learning in Python to brain-imaging data.

The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner

Duration: 02:13:53

Speaker:
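
The tutorial works through real brain-imaging data in a Jupyter notebook; as a rough, hypothetical sketch of the same workflow, here is a subjects-by-features matrix fed to a cross-validated classifier (the data and labels are synthetic stand-ins, so the score will be at chance):

```python
# Hypothetical stand-in for the tutorial's workflow: a subjects-x-features
# matrix (e.g., voxel values) classified with cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((80, 1000))   # 80 "subjects" x 1000 "voxels" (synthetic)
y = rng.integers(0, 2, 80)            # binary labels, e.g., patient vs control

scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(scores.mean())                  # ~0.5 here, since the data are pure noise
```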

Course:

Gael Varoquaux presents some advanced machine learning algorithms for neuroimaging, while addressing some real-world considerations related to data size and type.

The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner

Duration: 01:17:14

Speaker:

Course:

This lesson from freeCodeCamp introduces Scikit-learn, the most widely used machine learning Python library.

Difficulty level: Beginner

Duration: 02:09:22

Speaker:
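
A pattern any scikit-learn introduction is likely to reach is chaining preprocessing and a model into a Pipeline, so that preprocessing is re-fit inside each cross-validation fold rather than leaking information across folds. A minimal sketch on built-in data (not from the lesson itself):

```python
# Chaining a scaler and a classifier with Pipeline; scaling is re-fit
# inside each cross-validation fold automatically.
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(pipe, X, y, cv=5).mean())
```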

This is the Introductory Module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 50:17

Speaker: Yann LeCun and Alfredo Canziani

This module covers the concepts of gradient descent and the backpropagation algorithm and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:51:03

Speaker: Yann LeCun
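
The module's two ideas compose naturally in code: backpropagation supplies gradients, and gradient descent uses them to update parameters. A minimal sketch in PyTorch (the framework used in the course), fitting a line to toy data:

```python
# Gradient descent with gradients from reverse-mode autodiff (backprop),
# fitting y = w*x + b to a toy line.
import torch

x = torch.linspace(-1, 1, 50)
y = 3.0 * x - 0.5                         # ground-truth line

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lr = 0.1
for step in range(200):
    loss = ((w * x + b - y) ** 2).mean()  # mean squared error
    loss.backward()                       # backprop: populate w.grad, b.grad
    with torch.no_grad():                 # gradient-descent update
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_(); b.grad.zero_()
print(w.item(), b.item())                 # approaches 3.0 and -0.5
```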

This lecture covers the concept of neural nets (rotation and squashing) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:01:53

Speaker: Alfredo Canziani
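
The "rotation and squashing" picture maps directly onto code: each layer applies an affine transform (which can rotate, scale, shear, and translate its input) followed by a pointwise nonlinearity that squashes coordinates into a bounded range. A minimal sketch, assuming PyTorch:

```python
# One neural-net layer as "rotation" (affine map) plus "squashing" (tanh).
import torch
import torch.nn as nn

layer = nn.Sequential(
    nn.Linear(2, 2),   # affine transform: rotate/scale/shear, then translate
    nn.Tanh(),         # squash each coordinate into (-1, 1)
)
points = torch.randn(5, 2)
print(layer(points))   # every output coordinate lies in (-1, 1)
```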

This lecture covers the concept of neural net training (tools, classification with neural nets, and PyTorch implementation) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:05:47

Speaker: Alfredo Canziani
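
The skeleton such a tutorial builds up (zero-grad, forward, loss, backward, step) looks like this in PyTorch; synthetic data stands in for whatever dataset the lecture actually uses:

```python
# Skeleton of a classification training loop in PyTorch.
import torch
import torch.nn as nn

X = torch.randn(256, 20)                 # synthetic inputs
y = torch.randint(0, 3, (256,))          # synthetic labels for 3 classes

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimizer.zero_grad()                # clear old gradients
    loss = criterion(model(X), y)        # forward pass + loss
    loss.backward()                      # backward pass
    optimizer.step()                     # parameter update
print(loss.item())
```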

This lecture covers the concept of parameter sharing in recurrent and convolutional nets and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:59:47

Speaker: Yann LeCun and Alfredo Canziani
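
Parameter sharing is easy to see by counting parameters: a convolution slides one small kernel across every position, so its parameter count is independent of input size, unlike a dense layer over the same signal. A minimal sketch, assuming PyTorch:

```python
# Parameter sharing in one picture: dense layer vs. convolution over a
# length-1000 signal.
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

fc = nn.Linear(1000, 1000)               # dense map over the whole signal
conv = nn.Conv1d(1, 1, kernel_size=5)    # one kernel slid along the signal

print(n_params(fc))    # 1_001_000 weights and biases
print(n_params(conv))  # 6: five kernel weights + one bias, shared everywhere
```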

This lecture covers the concept of convolutional nets in practice and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 51:40

Speaker: Yann LeCun

This lecture covers the properties of natural signals and convolutional nets in practice, and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:09:12

Speaker: Alfredo Canziani

This lecture covers the concept of recurrent neural networks, both vanilla and gated (LSTM), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:05:36

Speaker: Alfredo Canziani
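
In PyTorch the vanilla and gated variants share an almost identical interface; the LSTM simply carries an extra cell state alongside the hidden state. A minimal sketch of the shapes involved:

```python
# Vanilla RNN vs. LSTM side by side. Shapes: (sequence, batch, features).
import torch
import torch.nn as nn

x = torch.randn(7, 4, 10)                      # 7 steps, batch of 4, 10 features

rnn = nn.RNN(input_size=10, hidden_size=20)    # h_t = tanh(W x_t + U h_{t-1} + b)
out, h = rnn(x)

lstm = nn.LSTM(input_size=10, hidden_size=20)  # gates + cell state ease vanishing gradients
out, (h, c) = lstm(x)
print(out.shape, h.shape, c.shape)             # (7,4,20), (1,4,20), (1,4,20)
```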

This lecture is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:51:30

Speaker: Yann LeCun

This lecture covers the concept of inference in latent-variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:01:04

Speaker: Alfredo Canziani
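
Inference in a latent-variable EBM means holding the observation fixed and searching over the latent for the lowest-energy configuration. The toy energy function below is purely illustrative (the lecture's models differ); it shows the generic gradient-based search:

```python
# Inference in a latent-variable EBM: minimize a toy energy over the latent z
# for a fixed observation y.
import torch

def energy(y, z):
    # Hypothetical energy: how badly a fixed map of z reconstructs y
    return ((torch.sin(z) - y) ** 2).sum()

y = torch.tensor([0.3, -0.7])
z = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.5)

for _ in range(100):                      # gradient-based minimization over z
    opt.zero_grad()
    E = energy(y, z)
    E.backward()
    opt.step()
print(z.detach(), energy(y, z).item())    # a low-energy latent for this y
```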

This lecture is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:48:53

Speaker: Yann LeCun

This tutorial covers the concept of training latent-variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:04:48

Speaker: Alfredo Canziani
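
One classical recipe for training an EBM is a contrastive hinge loss: push energy down on observed (compatible) pairs and up on mismatched ones, up to a margin. The sketch below illustrates that idea on toy data and is not the tutorial's exact method:

```python
# Contrastive hinge-loss training of a toy energy network:
# loss = max(0, margin + E(x, y_pos) - E(x, y_neg)).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

def energy(x, y):
    return net(torch.cat([x, y], dim=-1)).squeeze(-1)

x = torch.randn(32, 2)
y_pos = x * 2.0                       # observed (compatible) answers
y_neg = torch.randn(32, 2)            # contrastive (incompatible) answers

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
margin = 1.0
for _ in range(200):
    opt.zero_grad()
    loss = torch.clamp(margin + energy(x, y_pos) - energy(x, y_neg), min=0).mean()
    loss.backward()
    opt.step()
print(loss.item())                    # observed pairs now have lower energy
```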
