This lesson provides a brief overview of the Python programming language, with an emphasis on tools relevant to data scientists.
This lecture presents an overview of functional brain parcellations, as well as a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation.
The lecture provides an overview of the core skills and practical solutions required to practice reproducible research.
This lecture on model types introduces the advantages of modeling, provides examples of different model types, and explains what modeling is all about.
This lecture focuses on how to get from a scientific question to a model using concrete examples. We will present a 10-step practical guide on how to succeed in modeling. This lecture contains links to 2 tutorials, lecture/tutorial slides, a suggested reading list, and 3 recorded Q&A sessions.
This lecture formalizes modeling as a decision process that is constrained by a precise problem statement and specific model goals. We provide real-life examples of how model building is usually less linear than presented in Modeling Practice I.
This lecture focuses on the purpose of model fitting, approaches to model fitting, model fitting for linear models, and how to assess the quality of model fits and compare them. We will present a 10-step practical guide on how to succeed in modeling.
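As a taste of what the lecture covers, here is a minimal sketch of fitting a linear model by ordinary least squares and scoring the fit with mean squared error; the data and numbers are invented for illustration and are not the lecture's own example.

```python
# Minimal sketch: fit a linear model by least squares and score the fit.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 1.5 * x + 3.0 + rng.normal(scale=1.0, size=50)   # noisy line

slope, intercept = np.polyfit(x, y, deg=1)           # ordinary least squares
mse = np.mean((y - (slope * x + intercept)) ** 2)    # quality of the fit
print(f"slope={slope:.2f}, intercept={intercept:.2f}, MSE={mse:.2f}")
```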
This lecture summarizes the concepts introduced in Model Fitting I and adds two additional concepts: 1) MLE is a frequentist way of looking at the data and the model, with its own limitations; 2) a side-by-side comparison of bootstrapping and cross-validation.
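For a concrete, purely illustrative side-by-side, the sketch below bootstraps the slope of a simple linear regression and then cross-validates its predictive quality; the data and settings are made up for the example and do not come from the lecture.

```python
# Minimal sketch: bootstrapping vs. cross-validation on the same regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * x[:, 0] + rng.normal(scale=2.0, size=100)

# Bootstrap: resample the data to get a distribution over the fitted slope.
slopes = [LinearRegression().fit(*resample(x, y, random_state=i)).coef_[0]
          for i in range(200)]
print("bootstrap slope: %.2f +/- %.2f" % (np.mean(slopes), np.std(slopes)))

# Cross-validation: held-out R^2 estimates out-of-sample predictive quality.
scores = cross_val_score(LinearRegression(), x, y, cv=5)
print("5-fold CV R^2: %.2f" % scores.mean())
```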
This lecture provides an overview of the generalized linear models (GLM) course, originally a part of the Neuromatch Academy (NMA), an interactive online summer school held in 2020. NMA provided participants with experiences spanning from hands-on modeling experience to meta-science interpretation skills across just about everything that could reasonably be included in the label "computational neuroscience".
This lecture further develops the concepts introduced in Machine Learning I. This lecture is part of the Neuromatch Academy (NMA), an interactive online computational neuroscience summer school held in 2020.
This lecture introduces the core concepts of dimensionality reduction.
This lecture covers the application of dimensionality reduction to multi-dimensional neural recordings, using brain-computer interfaces with simultaneous spike recordings as an example.
This is a tutorial covering Generalized Linear Models (GLMs), which are a fundamental framework for supervised learning. In this tutorial, the objective is to model a retinal ganglion cell spike train by fitting a temporal receptive field: first with a Linear-Gaussian GLM (also known as the ordinary least-squares regression model) and then with a Poisson GLM (aka "Linear-Nonlinear-Poisson" model). The data you will be using was published by Uzzell & Chichilnisky (2004).
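A minimal sketch of the second step, fitting a Poisson ("Linear-Nonlinear-Poisson") GLM, is shown below. It uses synthetic white-noise data in place of the published recordings and scikit-learn's PoissonRegressor rather than the tutorial's own fitting code; all names and numbers are illustrative.

```python
# Minimal sketch: fit a temporal receptive field with a Poisson GLM.
# Synthetic data stands in for real recordings; values are illustrative.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_t, lag = 10_000, 25                    # time bins, filter length
stim = rng.normal(size=n_t)              # white-noise stimulus

# Design matrix of lagged stimulus values (one column per time lag).
X = np.stack([np.roll(stim, k) for k in range(lag)], axis=1)
X[:lag] = 0                              # zero out wrapped-around lags

true_filter = np.exp(-np.arange(lag) / 5.0)   # assumed ground-truth filter
rate = np.exp(X @ true_filter - 3.0)          # LNP: exponential nonlinearity
spikes = rng.poisson(rate)                    # Poisson spike counts

glm = PoissonRegressor(alpha=1e-4).fit(X, spikes)
print("estimated filter (first 5 lags):", glm.coef_[:5])
```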
This tutorial covers how multivariate data can be represented in different orthonormal bases.
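A minimal sketch of the idea, assuming a simple 2-D rotation as the new orthonormal basis; the data here are random and purely illustrative.

```python
# Minimal sketch: re-express 2-D data in a different orthonormal basis.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [1.0, 1.0]])

theta = np.pi / 4
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns: new basis vectors

Y = X @ B                      # coordinates of the same points in the new basis
assert np.allclose(B.T @ B, np.eye(2))            # orthonormality check
assert np.allclose(Y @ B.T, X)                    # change of basis is lossless
```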
This tutorial covers how to perform principal component analysis (PCA) by projecting the data onto the eigenvectors of its covariance matrix.
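A minimal NumPy sketch of this procedure, with random data standing in for whatever dataset the tutorial uses:

```python
# Minimal sketch: PCA by eigendecomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

Xc = X - X.mean(axis=0)                       # center the data
cov = Xc.T @ Xc / (len(X) - 1)                # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)            # eigh: for symmetric matrices

order = np.argsort(evals)[::-1]               # sort by variance explained
evals, evecs = evals[order], evecs[:, order]

scores = Xc @ evecs                           # project onto principal axes
print("variance explained:", evals / evals.sum())
```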
To quickly refresh your knowledge of eigenvalues and eigenvectors, you can watch this short video (4 minutes) for a geometrical explanation. For a deeper understanding, this in-depth video (17 minutes) provides an excellent basis and is beautifully illustrated.
This tutorial covers how to apply principal component analysis (PCA) for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.
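A minimal sketch of PCA-based denoising, using scikit-learn's small 8x8 digits dataset as a stand-in for MNIST; the noise level and number of components are arbitrary choices for illustration.

```python
# Minimal sketch: PCA reconstruction/denoising with scikit-learn.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data                         # (1797, 64) digit images
noisy = X + np.random.default_rng(0).normal(scale=4.0, size=X.shape)

pca = PCA(n_components=16).fit(noisy)          # keep the top 16 components
denoised = pca.inverse_transform(pca.transform(noisy))
print("kept variance: %.2f" % pca.explained_variance_ratio_.sum())
```

Because the discarded components capture mostly noise, projecting onto the top components and reconstructing removes much of the corruption.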
You can learn more about the MNIST dataset here.
This tutorial covers how dimensionality reduction can be useful for visualizing and inferring structure in your data. To do this, we will compare principal component analysis (PCA) with t-SNE, a nonlinear dimensionality reduction method.
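A minimal sketch of such a comparison, using scikit-learn and its small digits dataset rather than the tutorial's own data:

```python
# Minimal sketch: compare PCA and t-SNE embeddings of the digits dataset.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
xy_pca = PCA(n_components=2).fit_transform(X)
xy_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
# Plot xy_pca and xy_tsne side by side, colored by digit label y:
# t-SNE typically separates the ten digit clusters far more cleanly.
```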
This lecture introduces the concept of Bayesian statistics and explains why Bayesian statistics are relevant to studying the brain.
This tutorial provides an introduction to Bayesian statistics and covers developing a Bayesian model for localizing sounds based on audio and visual cues. This model will combine prior information about where sounds generally originate with sensory information about the likelihood that a specific sound came from a particular location. The resulting posterior distribution not only allows us to make an optimal decision about the sound's origin, but also lets us quantify how uncertain that decision is. Bayesian techniques are therefore useful normative models: the behavior of human or animal subjects can be compared against these models to determine how efficiently they make use of information.
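A minimal sketch of the prior-times-likelihood computation for Gaussian cues; the locations and widths below are invented for illustration, not taken from the tutorial.

```python
# Minimal sketch: combine a Gaussian prior over sound location with a
# Gaussian likelihood from a noisy auditory cue. Numbers are illustrative.
mu_prior, sigma_prior = 0.0, 10.0   # sounds usually come from straight ahead
mu_like, sigma_like = 5.0, 2.0      # noisy measurement of this sound

# The product of two Gaussians is Gaussian: a precision-weighted combination.
w = sigma_like**-2 / (sigma_prior**-2 + sigma_like**-2)
mu_post = w * mu_like + (1 - w) * mu_prior
sigma_post = (sigma_prior**-2 + sigma_like**-2) ** -0.5

print(f"posterior: mean={mu_post:.2f} deg, sd={sigma_post:.2f} deg")
```

Note that the posterior mean is pulled toward whichever source of information is more reliable (has smaller variance), and the posterior standard deviation quantifies the remaining uncertainty in the decision.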
In this tutorial, we will use the concepts introduced in Tutorial 1 as building blocks to explore more complicated sensory integration and ventriloquism!