The Mouse Phenome Database (MPD) provides access to primary experimental trait data, genotypic variation, protocols, and analysis tools for mouse genetic studies. Data are contributed by investigators worldwide and represent a broad scope of phenotyping endpoints and disease-related traits in naïve mice and in mice exposed to drugs, environmental agents, or other treatments. MPD ensures rigorous curation of phenotype data and supporting documentation using relevant ontologies and controlled vocabularies. As a repository of curated and integrated data, MPD provides a means to access and re-use baseline data, and it allows users to identify sensitized backgrounds for making new mouse models with genome-editing technologies, analyze trait co-inheritance, benchmark assays in their own laboratories, and pursue many other research applications. MPD's primary source of funding is NIDA; for this reason, a majority of MPD data are neuro- and behavior-related.

Difficulty level: Beginner
Duration: 55:36
Speaker: Elissa Chesler

This lesson provides a demonstration of GeneWeaver, a system for the integration and analysis of heterogeneous functional genomics data.

Difficulty level: Beginner
Duration: 25:53
Speaker:

This lecture on model types introduces the advantages of modeling, provides examples of different model types, and explains what modeling is all about.

Difficulty level: Beginner
Duration: 27:48
Speaker: Gunnar Blohm

This lecture focuses on how to get from a scientific question to a model using concrete examples. We will present a 10-step practical guide on how to succeed in modeling. This lecture contains links to 2 tutorials, lecture/tutorial slides, suggested reading list, and 3 recorded Q&A sessions.

Difficulty level: Beginner
Duration: 29:52
Speaker: Megan Peters

This lecture focuses on the purpose of model fitting, approaches to model fitting, model fitting for linear models, and how to assess the quality and compare model fits. We will present a 10-step practical guide on how to succeed in modeling. 

Difficulty level: Beginner
Duration: 26:46
Speaker: Jan Drugowitsch

This lecture provides an overview of the generalized linear models (GLM) course, originally a part of the Neuromatch Academy (NMA), an interactive online summer school held in 2020. NMA provided participants with experiences spanning from hands-on modeling experience to meta-science interpretation skills across just about everything that could reasonably be included in the label "computational neuroscience". 

Difficulty level: Beginner
Duration: 33:58
Speaker: Cristina Savin

This lecture introduces the core concepts of dimensionality reduction.

Difficulty level: Beginner
Duration: 31:43
Speaker: Byron Yu

This is the first of a series of tutorials on fitting models to data. In this tutorial, we start with simple linear regression, using least squares optimization.
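To give a flavor of what least-squares fitting looks like in code, here is a minimal sketch; the data are synthetic and the slope value of 1.2 is an invented example, not the tutorial's actual exercise:

```python
import numpy as np

# Synthetic data for illustration: y = 1.2 * x + Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.2 * x + rng.normal(0, 1, size=50)

# Least-squares slope for the model y = theta * x (no intercept):
# minimizing sum((y - theta * x)^2) gives theta_hat = (x . y) / (x . x)
theta_hat = (x @ y) / (x @ x)
```

The appeal of least squares for linear models is that the optimum has this closed form, so no iterative optimization is needed.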

Difficulty level: Beginner
Duration: 6:18
Speaker: Anqi Wu

In this tutorial, we will use a different approach to fit linear models that incorporates the random 'noise' in our data.
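As a sketch of the idea, assuming a linear model with additive Gaussian noise (the numbers below are invented for illustration), maximizing the likelihood over the slope recovers the least-squares answer, and maximizing over the noise level gives the mean squared residual:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + rng.normal(0, 1.5, size=100)

# Under y = theta * x + Gaussian noise, the log-likelihood is
# sum(log N(y_i; theta * x_i, sigma^2)). The maximum over theta is the
# least-squares solution; the maximum over sigma^2 is the mean squared residual.
theta_mle = (x @ y) / (x @ x)
sigma2_mle = np.mean((y - theta_mle * x) ** 2)
```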

Difficulty level: Beginner
Duration: 8:00
Speaker: Anqi Wu

This tutorial discusses how to gauge how good our estimated model parameters are.
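One common way to gauge the quality of an estimate is the bootstrap: refit the model on resampled data and look at the spread of the resulting estimates. A minimal sketch with invented data (the tutorial may instead use analytic confidence intervals):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=80)
y = 1.5 * x + rng.normal(0, 1, size=80)

def fit_slope(x, y):
    # least-squares slope for y = theta * x
    return (x @ y) / (x @ x)

# Bootstrap: resample (x, y) pairs with replacement, refit each time,
# and take percentiles of the refit slopes as a confidence interval
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(x), size=len(x))
    boot.append(fit_slope(x[idx], y[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

A narrow interval indicates the data pin down the parameter well; a wide one signals uncertainty.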

Difficulty level: Beginner
Duration: 5:00
Speaker: Anqi Wu

In this tutorial, we will generalize the regression model to incorporate multiple features.
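With multiple features, the inputs are stacked into a design matrix and the fit is still ordinary least squares. A minimal sketch with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
# Design matrix: an intercept column of ones plus two features
X = np.column_stack([np.ones(60), rng.normal(size=60), rng.normal(size=60)])
true_theta = np.array([0.5, 2.0, -1.0])   # assumed ground truth, for illustration
y = X @ true_theta + rng.normal(0, 0.5, size=60)

# Ordinary least squares, solved stably via lstsq rather than
# explicitly inverting X^T X
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```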

Difficulty level: Beginner
Duration: 7:50
Speaker: Anqi Wu

This tutorial teaches users about the bias-variance tradeoff and demonstrates it in action using polynomial regression models.
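The tradeoff can be seen by fitting polynomials of different degrees to the same noisy data and measuring error against the noiseless truth; a minimal sketch, with an invented sine-wave ground truth:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 40)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, size=40)
x_test = np.linspace(-1, 1, 200)
y_test_true = np.sin(np.pi * x_test)

# Low degree underfits (high bias); very high degree chases the
# noise (high variance); an intermediate degree does best here
test_mse = {}
for degree in (1, 5, 10):
    coefs = np.polyfit(x, y, degree)
    pred = np.polyval(coefs, x_test)
    test_mse[degree] = np.mean((pred - y_test_true) ** 2)
```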

Difficulty level: Beginner
Duration: 6:38
Speaker: Anqi Wu

This tutorial covers how to select an appropriate model based on cross-validation methods. 
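The core of k-fold cross-validation is to hold out each fold in turn, fit on the rest, and average the held-out error; the candidate with the lowest score wins. A minimal sketch using polynomial degree as the model choice (data and degrees invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x + 3.0 * x ** 2 + rng.normal(0, 0.3, size=60)

def kfold_mse(x, y, degree, k=5):
    # k-fold cross-validation: hold out each fold in turn,
    # fit on the remaining data, and average the held-out error
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coefs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coefs, x[fold])
        errs.append(np.mean((pred - y[fold]) ** 2))
    return np.mean(errs)

# Pick the degree with the lowest cross-validated error
scores = {d: kfold_mse(x, y, d) for d in (1, 2, 8)}
best_degree = min(scores, key=scores.get)
```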

Difficulty level: Beginner
Duration: 5:28
Speaker: Anqi Wu

This is a tutorial covering Generalized Linear Models (GLMs), which are a fundamental framework for supervised learning. In this tutorial, the objective is to model a retinal ganglion cell spike train by fitting a temporal receptive field: first with a Linear-Gaussian GLM (also known as an ordinary least-squares regression model) and then with a Poisson GLM (also known as a "Linear-Nonlinear-Poisson" model). The data you will use were published by Uzzell & Chichilnisky (2004).
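As a rough sketch of the Poisson-GLM idea (not the tutorial's actual retinal dataset): simulate spike counts whose rate is an exponential function of a linear filter applied to the stimulus, then recover the filter by gradient ascent on the Poisson log-likelihood. The filter weights and offset below are invented values:

```python
import numpy as np

rng = np.random.default_rng(6)
# Invented stimulus design matrix and a known filter, for illustration
T, D = 5000, 5
X = rng.normal(size=(T, D))
w_true = np.array([0.5, -0.3, 0.2, 0.0, 0.4])
rate = np.exp(X @ w_true - 1.0)       # Linear-Nonlinear: exponential link, offset -1
spikes = rng.poisson(rate)            # Poisson spike counts per time bin

# Fit by gradient ascent on the Poisson log-likelihood;
# its gradient has the simple form X^T (y - exp(X w + b))
w = np.zeros(D)
b = 0.0
lr = 1e-4
for _ in range(2000):
    mu = np.exp(X @ w + b)
    w += lr * (X.T @ (spikes - mu))
    b += lr * np.sum(spikes - mu)
```

The log-likelihood is concave in (w, b), so plain gradient ascent with a small step size converges to the global optimum.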

Difficulty level: Beginner
Duration: 8:09
Speaker: Anqi Wu

This tutorial covers the implementation of logistic regression, a special case of GLMs used to model binary outcomes. In this tutorial, we will decode a mouse's left/right decisions from spike train data.
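A minimal sketch of the decoding setup, with invented stand-in data rather than the tutorial's actual spike trains: simulate trial-by-trial "spike counts," generate binary left/right choices from a logistic model, and recover the weights by gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(7)
# Invented data: 4 "neurons" observed on 500 trials
n_trials, n_neurons = 500, 4
X = rng.normal(size=(n_trials, n_neurons))
w_true = np.array([1.0, -2.0, 0.5, 0.0])          # assumed ground-truth weights
p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
y = (rng.uniform(size=n_trials) < p).astype(float)  # 0 = left, 1 = right

# Logistic regression by gradient ascent on the log-likelihood;
# the gradient is X^T (y - sigmoid(X w)), here averaged over trials
w = np.zeros(n_neurons)
for _ in range(5000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.01 * (X.T @ (y - pred)) / n_trials

# Decode: predict "right" whenever the model puts probability above 0.5
accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == (y == 1))
```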

Difficulty level: Beginner
Duration: 6:42
Speaker: Anqi Wu

This tutorial covers how multivariate data can be represented in different orthonormal bases.
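The key property is that an orthonormal basis W satisfies W^T W = I, so projecting data onto it is an invertible change of coordinates. A small illustration using a rotation matrix as the new basis:

```python
import numpy as np

# Three 2-D data points (rows), invented for illustration
X = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, -1.0]])

# A rotated orthonormal basis: columns are unit length and orthogonal
theta = np.pi / 6
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

projected = X @ W          # coordinates of the data in the new basis
reconstructed = projected @ W.T   # W.T = W^-1, so this recovers X exactly
```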

Difficulty level: Beginner
Duration: 4:48
Speaker: Alex Cayco Gajic

This tutorial covers how to perform principal component analysis (PCA) by projecting the data onto the eigenvectors of its covariance matrix.

To quickly refresh your knowledge of eigenvalues and eigenvectors, you can watch this short video (4 minutes) for a geometrical explanation. For a deeper understanding, this in-depth video (17 minutes) provides an excellent basis and is beautifully illustrated.
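The covariance-eigenvector recipe can be sketched in a few lines of NumPy; the correlated 2-D data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
# Correlated 2-D data: most variance lies along one direction
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)                 # PCA assumes centered data

# Eigendecomposition of the covariance matrix; eigh returns
# eigenvalues in ascending order, so reorder to descending variance
cov = X.T @ X / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

scores = X @ evecs                     # data in the principal-component basis
```

In the new basis the components are uncorrelated, and each eigenvalue gives the variance captured by its component.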

Difficulty level: Beginner
Duration: 6:33
Speaker: Alex Cayco Gajic

This tutorial covers how to apply principal component analysis (PCA) for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising.

You can learn more about the MNIST dataset here.
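The reconstruction-and-denoising idea can be sketched without MNIST itself: build low-rank data plus noise, keep only the top principal components, and project back. The dimensions and noise level below are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic stand-in for image data: 3 latent factors mixed into
# 20 observed dimensions, plus a small amount of noise
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 20))
X = X - X.mean(axis=0)

# Keep the top 3 principal components and project back:
# a rank-3 reconstruction that discards most of the noise
cov = X.T @ X / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)
top = evecs[:, np.argsort(evals)[::-1][:3]]
X_denoised = X @ top @ top.T

residual = np.mean((X - X_denoised) ** 2)
```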

Difficulty level: Beginner
Duration: 5:35
Speaker: Alex Cayco Gajic

This tutorial covers how dimensionality reduction can be useful for visualizing and inferring structure in your data. To do this, we will compare principal component analysis (PCA) with t-SNE, a nonlinear dimensionality reduction method.

Difficulty level: Beginner
Duration: 4:17
Speaker: Alex Cayco Gajic

This tutorial provides an introduction to Bayesian statistics and covers developing a Bayesian model for localizing sounds based on audio and visual cues. This model will combine prior information about where sounds generally originate with sensory information about the likelihood that a specific sound came from a particular location. The resulting posterior distribution not only allows us to make an optimal decision about the sound's origin, but also lets us quantify how uncertain that decision is. Bayesian techniques are therefore useful normative models: the behavior of human or animal subjects can be compared against these models to determine how efficiently they make use of information.
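In the simplest Gaussian case the prior-times-likelihood combination has a closed form: the posterior is Gaussian with precision-weighted mean, and its variance is smaller than either input's. A minimal sketch with invented numbers for the prior and the auditory cue:

```python
import numpy as np

# Gaussian prior over sound location (e.g., sounds tend to come from
# straight ahead) combined with a Gaussian likelihood from a noisy cue.
prior_mu, prior_var = 0.0, 4.0        # assumed prior: centered, broad
like_mu, like_var = 2.0, 1.0          # assumed noisy auditory measurement

# Posterior precision is the sum of the precisions; the posterior mean
# is the precision-weighted average of prior mean and measurement
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mu = post_var * (prior_mu / prior_var + like_mu / like_var)
```

Here the posterior mean (1.6) sits between the prior and the measurement, pulled toward the more reliable of the two, and the posterior variance (0.8) quantifies the remaining uncertainty.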

Difficulty level: Beginner
Duration: 5:13