Course:

This tutorial demonstrates how to work with neuronal data using MATLAB, including action potentials and spike counts, orientation tuning curves in visual cortex, and spatial maps of firing rates.

Difficulty level: Intermediate

Duration: 5:17

Speaker: Mike X. Cohen

Course:

This lesson instructs users on how to import electrophysiological neural data into MATLAB, as well as how to convert spikes to a data matrix.

Difficulty level: Intermediate

Duration: 11:37

Speaker: Mike X. Cohen
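
As a rough illustration of the spike-to-matrix conversion covered in this lesson (not the lesson's own code), the sketch below assumes spike times in milliseconds are stored in a cell array and builds a trials-by-time binary matrix; all variable names and values are invented for the example.

```matlab
% Assumed setup: spikeTimes{k} holds the spike times (ms) on trial k.
nTrials  = 10;                         % assumed number of trials
trialDur = 1000;                       % assumed trial length in ms
spikeTimes = cell(nTrials,1);
for k = 1:nTrials                      % fake data so the sketch runs standalone
    spikeTimes{k} = sort(randi(trialDur, 1, 20));
end

% One row per trial, one column per millisecond; 1 marks a spike.
spikeMat = zeros(nTrials, trialDur);
for k = 1:nTrials
    spikeMat(k, spikeTimes{k}) = 1;
end
```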

Course:

In this lesson, users will learn how to appropriately sort and bin neural spikes, allowing for the generation of a common and powerful visualization tool in neuroscience, the histogram.

Difficulty level: Intermediate

Duration: 5:31

Speaker: Mike X. Cohen
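
A minimal sketch of the binning idea behind this lesson (illustrative only): spikes in a trials-by-time matrix, here called spikeMat as in the previous sketch, are summed within fixed-width bins to produce a peri-stimulus time histogram.

```matlab
binWidth = 50;                                  % bin width in ms (assumed)
nBins    = floor(size(spikeMat,2) / binWidth);
psth     = zeros(1, nBins);
for b = 1:nBins
    cols    = (b-1)*binWidth + (1:binWidth);    % time points falling in this bin
    psth(b) = sum(sum(spikeMat(:, cols)));      % spikes summed over trials
end
bar((0:nBins-1)*binWidth + binWidth/2, psth)
xlabel('Time (ms)'), ylabel('Spike count')
```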

Course:

In this lesson, users will learn how to compute, visualize, and quantify the tuning curves of individual neurons.

Difficulty level: Intermediate

Duration: 13:48

Speaker: Mike X. Cohen
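
To illustrate the core computation (a hypothetical sketch, not the lesson's data or code): average a neuron's responses over all trials that share the same stimulus orientation.

```matlab
% Fake design: 6 orientations, 20 repeats each, with noisy orientation-tuned counts.
orientations = repmat(0:30:150, 1, 20);
spikeCounts  = 5 + 10*cosd(2*(orientations - 60)).^2 + randn(size(orientations));

uniqueOris  = unique(orientations);
tuningCurve = zeros(size(uniqueOris));
for oi = 1:numel(uniqueOris)
    tuningCurve(oi) = mean(spikeCounts(orientations == uniqueOris(oi)));
end
plot(uniqueOris, tuningCurve, 'o-')
xlabel('Orientation (deg)'), ylabel('Mean spike count')
```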

Course:

This lesson demonstrates how to programmatically generate a spatial map of neuronal spike counts using MATLAB.

Difficulty level: Intermediate

Duration: 12:16

Speaker: Mike X. Cohen
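
The gist of the spatial-map computation can be sketched as follows (all variables are invented for the example): spike counts are accumulated on a position grid and divided by the number of visits to each location.

```matlab
gridSize = 10;                            % assumed grid dimensions
xPos   = randi(gridSize, 1, 500);         % fake x position per trial
yPos   = randi(gridSize, 1, 500);         % fake y position per trial
counts = randi(20,       1, 500);         % fake spike count per trial

spikeMap = zeros(gridSize);  visits = zeros(gridSize);
for t = 1:numel(counts)
    spikeMap(yPos(t), xPos(t)) = spikeMap(yPos(t), xPos(t)) + counts(t);
    visits(yPos(t), xPos(t))   = visits(yPos(t), xPos(t)) + 1;
end
spatialMap = spikeMap ./ max(visits, 1);  % mean spike count per grid location

imagesc(spatialMap), axis xy, colorbar
xlabel('X position'), ylabel('Y position')
```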

Course:

In this lesson, users are shown how to create a spatial map of neuronal orientation tuning.

Difficulty level: Intermediate

Duration: 13:11

Speaker: Mike X. Cohen

Course:

This lesson provides an introduction to biologically detailed computational modelling of neural dynamics, including neuron membrane potential simulation and F-I curves.

Difficulty level: Intermediate

Duration: 8:21

Speaker: Mike X. Cohen

Course:

In this lesson, users learn how to use MATLAB to build an adaptive exponential integrate-and-fire (AdEx) neuron model.

Difficulty level: Intermediate

Duration: 22:01

Speaker: Mike X. Cohen
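
For orientation, here is a minimal AdEx integration loop using the standard Brette and Gerstner (2005) parameter values; this is a sketch of the general model, not necessarily the parameters or code used in the lesson.

```matlab
% AdEx: C dV/dt = -gL(V-EL) + gL*DeltaT*exp((V-VT)/DeltaT) - w + I
%       tauw dw/dt = a(V-EL) - w,  with reset V->Vreset, w->w+b at a spike.
C = 281; gL = 30; EL = -70.6; VT = -50.4; DeltaT = 2;   % pF, nS, mV, mV, mV
a = 4; tauw = 144; b = 80.5; Vreset = -70.6;            % nS, ms, pA, mV
dt = 0.1; T = 1000; nSteps = round(T/dt);               % time step and duration (ms)
I  = 800;                                               % injected current (pA, assumed)

V = EL*ones(1, nSteps); w = zeros(1, nSteps);
for t = 1:nSteps-1
    dV = ( -gL*(V(t)-EL) + gL*DeltaT*exp((V(t)-VT)/DeltaT) - w(t) + I ) / C;
    dw = ( a*(V(t)-EL) - w(t) ) / tauw;
    V(t+1) = V(t) + dt*dV;
    w(t+1) = w(t) + dt*dw;
    if V(t+1) > 0                       % spike: reset voltage, increment adaptation
        V(t)   = 30;                    % draw a spike for plotting
        V(t+1) = Vreset;
        w(t+1) = w(t+1) + b;
    end
end
plot((0:nSteps-1)*dt, V), xlabel('Time (ms)'), ylabel('Vm (mV)')
```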

Course:

In this lesson, users learn about the practical differences between MATLAB scripts and functions, as well as how to embed their neuronal simulation into a callable function.

Difficulty level: Intermediate

Duration: 11:20

Speaker: Mike X. Cohen
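
The difference in practice: a script runs on whatever happens to be in the workspace, while a function takes inputs and returns outputs. A hypothetical example of wrapping a (very simplified) simulation in a function, saved in its own file simulateNeuron.m, might look like this:

```matlab
function [V, t] = simulateNeuron(I, T, dt)
% SIMULATENEURON  Toy leaky-integrator simulation (illustrative only).
%   [V, t] = simulateNeuron(I, T, dt) integrates a simple membrane equation
%   with constant input I for T ms at step dt and returns voltage and time.
t = 0:dt:T;
V = zeros(size(t));
V(1) = -70;                               % assumed resting potential (mV)
for k = 1:numel(t)-1
    V(k+1) = V(k) + dt*( -(V(k) + 70)/10 + I );
end
end
```

It can then be called with different inputs, e.g. [V, t] = simulateNeuron(2, 500, 0.1), without depending on the workspace variables a script would rely on.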

Course:

This lesson teaches users how to generate a frequency-current (F-I) curve, which describes the function that relates the net synaptic current (I) flowing into a neuron to its firing rate (F).

Difficulty level: Intermediate

Duration: 20:39

Speaker: Mike X. Cohen
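
The general recipe can be sketched as follows (assumed setup, not the lesson's code): sweep the injected current, simulate the neuron at each level through some wrapper function, here a hypothetical runAdEx, count the spikes, and plot firing rate against current.

```matlab
T     = 1000;                                 % simulated time per current level (ms)
Ivals = 0:50:1000;                            % injected currents to test (pA)
rate  = zeros(size(Ivals));
for ii = 1:numel(Ivals)
    V = runAdEx(Ivals(ii), T);                % hypothetical wrapper around the AdEx loop above
    nSpikes  = sum(diff(double(V > 0)) == 1); % count upward threshold crossings
    rate(ii) = nSpikes / (T/1000);            % convert to spikes per second
end
plot(Ivals, rate, 'o-')
xlabel('Input current (pA)'), ylabel('Firing rate (Hz)')
```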

This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions such as learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.

This lesson corresponds to slides 1-64 in the PDF below.

Difficulty level: Intermediate

Duration: 1:28:14

Speaker: Andreea Diaconescu
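
As a toy numerical illustration of the underlying idea (not an example from the lecture), a single Bayesian belief update combines a prior with a likelihood and renormalizes:

```matlab
% Hypothetical discrete example: two hidden states, one observation.
states     = [0 1];                       % possible hidden states
prior      = [0.5 0.5];                   % prior belief (assumed)
likelihood = [0.2 0.8];                   % P(observation | state), assumed values

posterior = prior .* likelihood;          % Bayes' rule, unnormalized
posterior = posterior / sum(posterior);   % normalize to get P(state | observation)
disp(posterior)                           % -> [0.2 0.8]
```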

This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).

This lesson corresponds to slides 65-90 of the PDF below.

Difficulty level: Intermediate

Duration: 1:15:04

Speaker: Daniel Hauke

This is a tutorial on simulating the brains of brain tumor patients with TVB (reproducing the publication Marinazzo et al. 2020, NeuroImage). It comprises a didactic video, Jupyter notebooks, and a full data set for the construction of virtual brains from patients and healthy controls. Authors: Hannelore Aerts, Michael Schirner, Ben Jeurissen, Dirk Van Roost, Eric Achten, Petra Ritter, Daniele Marinazzo

Difficulty level: Intermediate

Duration: 10:01

Speaker:

Introduction to the Brain Imaging Data Structure (BIDS): a standard for organizing human neuroimaging datasets. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Intermediate

Duration: 56:49

Speaker: Chris Gorgolewski
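
For reference, a minimal BIDS-style directory layout looks roughly like this (dataset and file names are illustrative):

```
my_dataset/
  dataset_description.json
  participants.tsv
  sub-01/
    anat/
      sub-01_T1w.nii.gz
    func/
      sub-01_task-rest_bold.nii.gz
      sub-01_task-rest_bold.json
  sub-02/
    ...
```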

This is the Introductory Module to the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 50:17

Speaker: Yann LeCun and Alfredo Canziani

This module covers the concepts of gradient descent and the backpropagation algorithm and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:51:03

Speaker: Yann LeCun
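
As a toy illustration of the gradient descent part (not the lecture's code), here is the idea of minimizing a one-dimensional quadratic loss by repeatedly stepping against its gradient:

```matlab
% Minimize L(w) = (w - 3)^2; the gradient is dL/dw = 2*(w - 3).
w  = 0;                    % initial parameter value (assumed)
lr = 0.1;                  % learning rate (assumed)
for iter = 1:100
    grad = 2*(w - 3);      % gradient of the loss at the current w
    w    = w - lr*grad;    % gradient descent update
end
disp(w)                    % approaches the minimizer, w = 3
```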

This lecture covers the concept of parameter sharing in recurrent and convolutional nets and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:59:47

Speaker: Yann LeCun and Alfredo Canziani

This lecture covers the concept of convolutional nets in practice and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 51:40

Speaker: Yann LeCun

This lecture covers the properties of natural signals and convolutional nets in practice, and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:09:12

Speaker: Alfredo Canziani

This lecture covers the concept of recurrent neural networks, both vanilla and gated (LSTM), and is a part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate

Duration: 1:05:36

Speaker: Alfredo Canziani
