This lecture further develops the concepts introduced in Machine Learning I. It is part of the Neuromatch Academy (NMA), an interactive online computational neuroscience summer school held in 2020.

Difficulty level: Beginner

Duration: 29:30

Speaker: I. Memming Park

This lesson provides an overview of the process of developing the TVB-NEST co-simulation on the EBRAINS infrastructure, and its use cases.

Difficulty level: Beginner

Duration: 25:14

Speaker: Denis Perdikis


This lecture introduces the core concepts of dimensionality reduction.

Difficulty level: Beginner

Duration: 31:43

Speaker: Byron Yu


This lecture covers dimensionality reduction applied to multi-dimensional neural recordings, using brain-computer interfaces with simultaneous spike recordings.

Difficulty level: Beginner

Duration: 30:15

Speaker: Byron Yu


This is the first of a series of tutorials on fitting models to data. In this tutorial, we start with simple linear regression, using least squares optimization.

Difficulty level: Beginner

Duration: 6:18

Speaker: Anqi Wu

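As a rough illustration of what this tutorial covers, here is least-squares estimation of a slope on toy data (the data and numbers are made up for this sketch, not taken from the tutorial):

```python
import numpy as np

# Hypothetical data: noisy samples around y = 1.2 * x
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.2 * x + rng.normal(0, 0.5, size=x.shape)

# Least-squares estimate for a no-intercept model y = theta * x:
# minimizing sum((y - theta * x)^2) gives theta_hat = (x . y) / (x . x)
theta_hat = np.dot(x, y) / np.dot(x, x)
```

The closed-form solution follows from setting the derivative of the squared error to zero, which is the core idea the tutorial builds on.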

In this tutorial, we will use a different approach to fit linear models that incorporates the random 'noise' in our data.

Difficulty level: Beginner

Duration: 8:00

Speaker: Anqi Wu

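The noise-aware approach can be sketched as maximum likelihood estimation under a Gaussian noise model; on toy data, the likelihood-maximizing slope coincides with the least-squares one (all names and values here are illustrative assumptions):

```python
import numpy as np

# Hypothetical data, assuming y = theta * x + Gaussian noise
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 1.2 * x + rng.normal(0, 0.5, size=x.shape)

sigma = 0.5  # assumed-known noise standard deviation

def neg_log_likelihood(theta):
    # Gaussian noise model: y_i ~ N(theta * x_i, sigma^2)
    residuals = y - theta * x
    return 0.5 * np.sum(residuals**2) / sigma**2

# Grid search over candidate slopes; the MLE matches least squares
thetas = np.linspace(0.5, 2.0, 1501)
theta_mle = thetas[np.argmin([neg_log_likelihood(t) for t in thetas])]
theta_ls = np.dot(x, y) / np.dot(x, x)
```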

This tutorial discusses how to gauge how good our estimated model parameters are.

Difficulty level: Beginner

Duration: 5:00

Speaker: Anqi Wu


In this tutorial, we will generalize the regression model to incorporate multiple features.

Difficulty level: Beginner

Duration: 7:50

Speaker: Anqi Wu

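With multiple features, the least-squares fit generalizes to the normal equations over a design matrix. A minimal sketch on made-up data (not the tutorial's own code):

```python
import numpy as np

# Hypothetical data with two features plus an intercept column
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])  # design matrix
true_w = np.array([0.5, 2.0, -1.0])
y = X @ true_w + rng.normal(0, 0.1, size=100)

# Ordinary least squares via the normal equations: w = (X^T X)^{-1} X^T y
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
```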

This tutorial teaches users about the bias-variance tradeoff and shows it in action using polynomial regression models.

Difficulty level: Beginner

Duration: 6:38

Speaker: Anqi Wu

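The tradeoff can be seen by fitting polynomials of different degrees to data from a known quadratic: too low a degree underfits (high bias), too high a degree overfits (high variance). A toy sketch (illustrative data, not the tutorial's code):

```python
import numpy as np

# Hypothetical data from a quadratic function with noise
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 30)
y = 1 - 2 * x + 3 * x**2 + rng.normal(0, 0.2, size=x.shape)
x_test = np.linspace(-1, 1, 100)
y_test_true = 1 - 2 * x_test + 3 * x_test**2

# Compare test error across polynomial degrees
test_mse = {}
for degree in [1, 2, 10]:
    coeffs = np.polyfit(x, y, degree)
    y_pred = np.polyval(coeffs, x_test)
    test_mse[degree] = np.mean((y_pred - y_test_true) ** 2)
```

Degree 1 cannot capture the curvature, so its test error stays high no matter how much data we add; that is the bias half of the tradeoff.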

This tutorial covers how to select an appropriate model based on cross-validation methods.

Difficulty level: Beginner

Duration: 5:28

Speaker: Anqi Wu
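
A minimal k-fold cross-validation sketch, here used to score polynomial degrees on toy data (all names and numbers are illustrative assumptions, not the tutorial's code):

```python
import numpy as np

# Hypothetical quadratic data
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 60)
y = 1 - 2 * x + 3 * x**2 + rng.normal(0, 0.2, size=x.shape)

def kfold_mse(x, y, degree, k=5):
    # Split indices into k folds; each fold serves once as validation set
    idx = np.arange(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)
        errors.append(np.mean((np.polyval(coeffs, x[fold]) - y[fold]) ** 2))
    return np.mean(errors)

cv_scores = {d: kfold_mse(x, y, d) for d in [1, 2, 5]}
best_degree = min(cv_scores, key=cv_scores.get)
```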

This is a tutorial covering Generalized Linear Models (GLMs), which are a fundamental framework for supervised learning. In this tutorial, the objective is to model a retinal ganglion cell spike train by fitting a temporal receptive field: first with a Linear-Gaussian GLM (also known as the ordinary least-squares regression model) and then with a Poisson GLM (also known as the "Linear-Nonlinear-Poisson" model). The data you will be using were published by Uzzell & Chichilnisky (2004).

Difficulty level: Beginner

Duration: 8:09

Speaker: Anqi Wu
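
The Linear-Nonlinear-Poisson idea can be sketched on simulated data (not the Uzzell & Chichilnisky dataset): spike counts are drawn as Poisson with rate exp(X @ w), and the filter w is recovered by gradient ascent on the Poisson log-likelihood:

```python
import numpy as np

# Simulated LNP data: y ~ Poisson(exp(X @ w_true))
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 3))          # toy stimulus design matrix
w_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ w_true))

# Gradient ascent on the Poisson log-likelihood:
# gradient = X^T (y - exp(X @ w))
w = np.zeros(3)
for _ in range(500):
    rate = np.exp(X @ w)
    w += 1e-4 * X.T @ (y - rate)
```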

This tutorial covers the implementation of logistic regression, a special case of GLMs used to model binary outcomes. In this tutorial, we will decode a mouse's left/right decisions from spike train data.

Difficulty level: Beginner

Duration: 6:42

Speaker: Anqi Wu

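A minimal logistic-regression decoder on simulated spike counts, as a stand-in for the mouse left/right decoding task (the two "neurons" and their tuning are invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
labels = rng.integers(0, 2, n)  # 0 = left, 1 = right
# two simulated neurons whose mean activity differs by choice
X = rng.normal(loc=labels[:, None] * np.array([1.0, -1.0]), scale=1.0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Fit weights by gradient descent on the cross-entropy loss
w = np.zeros(2)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - labels) / n

accuracy = np.mean((sigmoid(X @ w) > 0.5) == labels)
```

With overlapping response distributions the decoder cannot be perfect; what logistic regression gives us is the best linear decision boundary in log-odds.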

This tutorial covers how multivariate data can be represented in different orthonormal bases.

Difficulty level: Beginner

Duration: 4:48

Speaker: Alex Cayco Gajic

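The key property is that for an orthonormal basis W, changing coordinates is just a projection, and W's transpose undoes it. A tiny sketch with a 2-D rotation (illustrative example, not the tutorial's code):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2))  # toy 2-D data

theta = np.pi / 6
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthonormal columns

Y = X @ W          # coordinates of each sample in the new basis
X_back = Y @ W.T   # orthonormality means W^{-1} = W^T, so this recovers X
```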

This tutorial covers how to perform principal component analysis (PCA) by projecting the data onto the eigenvectors of its covariance matrix.

To quickly refresh your knowledge of eigenvalues and eigenvectors, you can watch this short video (4 minutes) for a geometrical explanation. For a deeper understanding, this in-depth video (17 minutes) provides an excellent basis and is beautifully illustrated.

Difficulty level: Beginner

Duration: 6:33

Speaker: Alex Cayco Gajic

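The procedure the tutorial describes, in a minimal NumPy sketch (toy correlated data, not the tutorial's own code): center the data, eigendecompose its covariance matrix, and project onto the eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(8)
# correlated 2-D data (hypothetical)
X = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=1000)

Xc = X - X.mean(axis=0)                 # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]         # sort components by variance
evals, evecs = evals[order], evecs[:, order]

scores = Xc @ evecs                     # data in the principal-component basis
```

In the new basis the components are uncorrelated, and each eigenvalue is the variance captured by its component.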

This tutorial covers how to apply principal component analysis (PCA) for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.

You can learn more about the MNIST dataset here.

Difficulty level: Beginner

Duration: 5:35

Speaker: Alex Cayco Gajic

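Reconstruction and denoising both come from keeping only the top components and mapping back to the original space. A low-rank sketch on synthetic data standing in for MNIST (all dimensions and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(9)
# data that truly lives on 2 latent dimensions, embedded in 10-D, plus noise
latents = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latents @ mixing + 0.05 * rng.normal(size=(500, 10))

mean = X.mean(axis=0)
Xc = X - mean
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (len(Xc) - 1))
top = evecs[:, np.argsort(evals)[::-1][:2]]   # top-2 eigenvectors

X_recon = (Xc @ top) @ top.T + mean           # rank-2 reconstruction
recon_error = np.mean((X - X_recon) ** 2)
```

Because the discarded components carry mostly noise, the reconstruction error is close to the noise floor, which is exactly why PCA can denoise.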

This tutorial covers how dimensionality reduction can be useful for visualizing and inferring structure in your data. To do this, we will compare principal component analysis (PCA) with t-SNE, a nonlinear dimensionality reduction method.

Difficulty level: Beginner

Duration: 4:17

Speaker: Alex Cayco Gajic


This lecture introduces the concept of Bayesian statistics and explains why Bayesian statistics are relevant to studying the brain.

Difficulty level: Beginner

Duration: 31:38

Speaker: Paul Schrater


This tutorial provides an introduction to Bayesian statistics and covers developing a Bayesian model for localizing sounds based on audio and visual cues. This model will combine **prior** information about where sounds generally originate with sensory information about the **likelihood** that a specific sound came from a particular location. The resulting **posterior distribution** not only allows us to make optimal decisions about the sound's origin, but also lets us quantify how uncertain those decisions are. Bayesian techniques are therefore useful **normative models**: the behavior of human or animal subjects can be compared against these models to determine how efficiently they make use of information.

Difficulty level: Beginner

Duration: 5:13

Speaker: Konrad Paul Kording

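The prior-times-likelihood combination can be sketched with conjugate Gaussians (the locations and uncertainties below are made-up numbers for a sound-localization toy example): the posterior mean sits between the prior and the sensory cue, weighted by their precisions, and the posterior is narrower than either input.

```python
import numpy as np

prior_mu, prior_sigma = 0.0, 2.0   # sounds tend to come from straight ahead
like_mu, like_sigma = 3.0, 1.0     # noisy auditory measurement

# Conjugate-Gaussian update (precision = 1 / variance)
prior_prec = 1 / prior_sigma**2
like_prec = 1 / like_sigma**2
post_prec = prior_prec + like_prec
post_mu = (prior_prec * prior_mu + like_prec * like_mu) / post_prec
post_sigma = np.sqrt(1 / post_prec)
```

The posterior standard deviation quantifies exactly the decision uncertainty the tutorial discusses.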

In this tutorial, we will use the concepts introduced in Tutorial 1 as building blocks to explore more complicated sensory integration and ventriloquism!

Difficulty level: Beginner

Duration: 4:22

Speaker: Konrad Paul Kording


This tutorial covers all the steps needed to perform model inversion (estimating the model parameters, such as p_common, that generated data similar to that of a participant). We will describe all the steps of the generative model first, and in the last exercise we will use all these steps to estimate the parameter p_common of a single participant using simulated data. The generative model will be the same Bayesian model we have been using throughout Tutorial 2: a mixture-of-Gaussians prior (common + independent priors) and a Gaussian likelihood.

Difficulty level: Beginner

Duration: 2:40

Speaker: Konrad Paul Kording
