This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions such as learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.
This lesson corresponds to slides 1-64 in the PDF below.
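As a toy illustration of the Bayesian updating idea this lesson builds on, here is a minimal sketch (not taken from the slides, all values assumed) that combines a Gaussian prior with a Gaussian likelihood, as in classic cue-combination models of perception:

```python
import numpy as np

# Minimal sketch of Bayesian cue combination (illustrative only, not from the slides).
# A Gaussian prior over a stimulus is combined with a Gaussian likelihood from a
# noisy sensory measurement; the posterior is again Gaussian.

prior_mean, prior_var = 0.0, 4.0      # prior belief about the stimulus
measurement, noise_var = 2.0, 1.0     # noisy observation and its variance

# Posterior of two Gaussians: precision-weighted average of the means
post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
post_mean = post_var * (prior_mean / prior_var + measurement / noise_var)

print(f"posterior mean = {post_mean:.2f}, posterior variance = {post_var:.2f}")
# The posterior mean (1.60) lies between prior and measurement, pulled toward
# the more reliable (lower-variance) source.
```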
This lecture introduces Plato’s rationalism and Aristotle’s empiricism, and the debate between rationalism and empiricism that endures to this day.
This lecture goes into further detail about the hard problem of developing a scientific discipline for subjective consciousness.
This lecture covers post-war developments in the science of the mind, focusing first on the cognitive revolution and concluding with living machines.
This lecture provides an overview of depression (epidemiology and course of the disorder), clinical presentation, somatic co-morbidity, and treatment options.
Introduction to simple spiking neuron models.
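For readers who want a concrete starting point, the sketch below simulates a leaky integrate-and-fire neuron, arguably the simplest spiking neuron model; it is an illustrative example with assumed parameter values, not code from the lesson:

```python
import numpy as np

# Illustrative leaky integrate-and-fire (LIF) simulation with assumed parameters.
dt, T = 0.1e-3, 0.5            # time step (s), total duration (s)
tau_m, R = 20e-3, 1e8          # membrane time constant (s), membrane resistance (ohm)
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3  # potentials (V)
I = 250e-12                    # constant input current (A)

t = np.arange(0, T, dt)
v = np.full_like(t, v_rest)
spikes = []

for i in range(1, len(t)):
    dv = (-(v[i - 1] - v_rest) + R * I) / tau_m * dt   # leaky integration
    v[i] = v[i - 1] + dv
    if v[i] >= v_thresh:       # threshold crossing: record a spike and reset
        spikes.append(t[i])
        v[i] = v_reset

print(f"{len(spikes)} spikes in {T * 1000:.0f} ms")
```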
Audio-slides presentation accompanying the paper "An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data" by M. Schirner, S. Rothmeier, V. Jirsa, A.R. McIntosh, and P. Ritter.
This lecture is part of the Neuromatch Academy (NMA), a massive, interactive online summer school held in 2020 that gave participants experiences ranging from hands-on modeling to meta-science interpretation skills, across just about everything that could reasonably be labeled "computational neuroscience".
This lecture on model types introduces the advantages of modeling, provides examples of different model types, and explains what modeling is all about. It includes links to 3 tutorials, lecture/tutorial slides, a suggested reading list, and 3 recorded question-and-answer sessions.
This lecture focuses on how to get from a scientific question to a model, using concrete examples, and presents a 10-step practical guide on how to succeed in modeling. It includes links to 2 tutorials, lecture/tutorial slides, a suggested reading list, and 3 recorded question-and-answer sessions.
This lecture formalizes modeling as a decision process that is constrained by a precise problem statement and specific model goals. We provide real-life examples of how model building is usually less linear than presented in Modeling Practice I.
This lecture focuses on the purpose of model fitting, approaches to model fitting, model fitting for linear models, and how to assess the quality of model fits and compare them. We will present a 10-step practical guide on how to succeed in modeling.
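As a minimal illustration of fitting a linear model and assessing fit quality (a sketch with made-up data, not the lecture's own material):

```python
import numpy as np

# Toy illustration of least-squares fitting for a linear model.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.5 * x + 2.0 + rng.normal(0, 1.0, size=x.shape)   # noisy data from a known line

X = np.column_stack([x, np.ones_like(x)])              # design matrix with intercept
theta, *_ = np.linalg.lstsq(X, y, rcond=None)          # least-squares estimate
y_hat = X @ theta

mse = np.mean((y - y_hat) ** 2)                        # one simple measure of fit quality
print(f"slope = {theta[0]:.2f}, intercept = {theta[1]:.2f}, MSE = {mse:.2f}")
```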
This lecture summarizes the concepts introduced in Model Fitting I and adds two more: 1) maximum likelihood estimation (MLE) as a frequentist way of looking at the data and the model, with its own limitations, and 2) a side-by-side comparison of bootstrapping and cross-validation.
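To make the side-by-side comparison concrete, here is an illustrative sketch (simulated data, not NMA tutorial code) that uses bootstrapping to estimate parameter uncertainty and k-fold cross-validation to estimate generalization error for the same simple linear model:

```python
import numpy as np

# Bootstrapping (parameter uncertainty) vs. k-fold cross-validation (generalization error).
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = 1.5 * x + rng.normal(0, 1.0, size=x.shape)

def fit_slope(x, y):
    return np.sum(x * y) / np.sum(x * x)   # least squares through the origin

# Bootstrap: resample the data with replacement and refit each time
boot_slopes = []
for _ in range(1000):
    idx = rng.integers(0, len(x), len(x))
    boot_slopes.append(fit_slope(x[idx], y[idx]))
print(f"bootstrap slope = {np.mean(boot_slopes):.2f} +/- {np.std(boot_slopes):.2f}")

# k-fold cross-validation: hold out each fold in turn and measure test error
k = 5
folds = np.array_split(rng.permutation(len(x)), k)
mse = []
for fold in folds:
    train = np.setdiff1d(np.arange(len(x)), fold)
    slope = fit_slope(x[train], y[train])
    mse.append(np.mean((y[fold] - slope * x[fold]) ** 2))
print(f"cross-validated MSE = {np.mean(mse):.2f}")
```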
This lecture provides an overview of generalized linear models (GLMs) and contains links to 2 tutorials, lecture/tutorial slides, a suggested reading list, and 3 recorded question-and-answer sessions.
This lecture further develops the concepts introduced in Machine Learning I.
This lecture introduces the core concepts of dimensionality reduction.
This lecture applies dimensionality reduction to multi-dimensional neural recordings, using brain-computer interfaces with simultaneous spike recordings as the example.
This tutorial covers Generalized Linear Models (GLMs), a fundamental framework for supervised learning. The objective is to model a retinal ganglion cell spike train by fitting a temporal receptive field: first with a Linear-Gaussian GLM (also known as the ordinary least-squares regression model) and then with a Poisson GLM (aka the "Linear-Nonlinear-Poisson" model). The tutorial also covers a special case of GLMs, logistic regression, and shows how to ensure good model performance. It is designed to run with retinal ganglion cell spike train data from Uzzell & Chichilnisky (2004).
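Below is a hedged sketch of the Poisson ("Linear-Nonlinear-Poisson") fitting step on simulated spike counts; the filter shape, design matrix, and optimizer choice are assumptions for illustration, and the Uzzell & Chichilnisky data are not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Poisson GLM fit on simulated data (not the tutorial's dataset or notebook).
rng = np.random.default_rng(0)
T, D = 2000, 10                        # time bins, filter length
X = rng.normal(size=(T, D))            # stimulus design matrix (D-lag history)
w_true = np.exp(-np.arange(D) / 3.0)   # a decaying "temporal receptive field"
rate = np.exp(X @ w_true - 2.0)        # exponential nonlinearity
y = rng.poisson(rate)                  # simulated spike counts

def neg_log_lik(w):
    # Poisson negative log-likelihood (up to a constant): sum(rate) - y . log(rate)
    log_rate = X @ w[:-1] + w[-1]
    return np.sum(np.exp(log_rate)) - y @ log_rate

w0 = np.zeros(D + 1)                   # filter weights plus an offset term
res = minimize(neg_log_lik, w0, method="L-BFGS-B")
w_hat, b_hat = res.x[:-1], res.x[-1]
print("recovered filter (first 3 taps):", np.round(w_hat[:3], 2))
```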
This tutorial covers how multivariate data can be represented in different orthonormal bases.
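As a minimal illustration (an assumed example, not the tutorial code), re-expressing data in a different orthonormal basis amounts to a single projection:

```python
import numpy as np

# Re-expressing 2-D data in a different orthonormal basis (illustrative example).
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[3, 1.5], [1.5, 1]], size=500)   # correlated data

theta = np.pi / 4
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: columns are orthonormal

Y = X @ W                                         # coordinates in the new basis
print(np.allclose(W.T @ W, np.eye(2)))            # orthonormality check: True
print(np.allclose(X, Y @ W.T))                    # change of basis is invertible: True
```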
This tutorial covers how to perform principal component analysis (PCA) by projecting the data onto the eigenvectors of its covariance matrix.
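A short sketch of that procedure on simulated two-dimensional data (illustrative only, not the tutorial notebook):

```python
import numpy as np

# PCA via eigendecomposition of the covariance matrix (illustrative example).
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[3, 1.5], [1.5, 1]], size=500)

Xc = X - X.mean(axis=0)                      # center the data
C = np.cov(Xc, rowvar=False)                 # sample covariance matrix
evals, evecs = np.linalg.eigh(C)             # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]              # sort components by explained variance
evals, evecs = evals[order], evecs[:, order]

scores = Xc @ evecs                          # project data onto the principal components
explained = evals / evals.sum()
print("variance explained per component:", np.round(explained, 2))
```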
To quickly refresh your knowledge of eigenvalues and eigenvectors, you can watch this short video (4 minutes) for a geometrical explanation. For a deeper understanding, this in-depth video (17 minutes) provides an excellent basis and is beautifully illustrated.