This lesson gives an introduction to the Mathematics chapter of Datalabcc's Foundations in Data Science series.

Difficulty level: Beginner
Duration: 2:53
Speaker: Barton Poulson

This lesson serves as a primer on elementary algebra.

Difficulty level: Beginner
Duration: 3:03
Speaker: Barton Poulson

This lesson provides a primer on linear algebra, aiming to demonstrate how its operations are fundamental to many data science methods.
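
A quick illustration, not taken from the lesson, of why such operations matter in practice: once a small dataset is stored as a matrix, a linear model's predictions for every observation can be computed with a single matrix-vector product. The feature values and weights below are invented.

    import numpy as np

    # Each row is one observation, each column a feature (made-up values)
    X = np.array([[1.0, 2.0],
                  [3.0, 0.5],
                  [0.0, 4.0]])

    # Weight vector of a simple linear model (also made up)
    w = np.array([0.8, -0.3])

    # One matrix-vector product gives a prediction for every observation at once
    predictions = X @ w
    print(predictions)   # [ 0.2   2.25 -1.2 ]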

Difficulty level: Beginner
Duration: 5:38
Speaker: Barton Poulson

In this lesson, users will learn about systems of linear equations and follow along with some practical use cases.
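
As a small sketch of the kind of problem involved (not material from the lesson itself), a system of linear equations can be written in matrix form as Ax = b and solved numerically, for instance with NumPy:

    import numpy as np

    # Solve the system:  2x + y = 5
    #                     x - y = 1
    A = np.array([[2.0,  1.0],
                  [1.0, -1.0]])
    b = np.array([5.0, 1.0])

    solution = np.linalg.solve(A, b)
    print(solution)   # [2. 1.]  ->  x = 2, y = 1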

Difficulty level: Beginner
Duration: 5:24
Speaker: Barton Poulson

This talk gives a primer on calculus, emphasizing its role in data science.

Difficulty level: Beginner
Duration: 4:17
Speaker: Barton Poulson

This lesson clarifies how calculus relates to optimization in a data science context. 
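
One standard place where calculus meets optimization, sketched here as a toy example rather than the lesson's own material, is gradient descent: repeatedly stepping against the derivative of a cost function until it stops decreasing.

    # Minimize f(x) = (x - 3)**2 using its derivative f'(x) = 2 * (x - 3)

    def grad(x):
        return 2 * (x - 3)

    x = 0.0             # arbitrary starting point
    lr = 0.1            # learning rate (step size)
    for _ in range(100):
        x -= lr * grad(x)

    print(round(x, 4))  # converges towards 3.0, the minimizer of f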

Difficulty level: Beginner
Duration: 8:43
Speaker: Barton Poulson

This lesson covers Big O notation, a mathematical notation that describes the limiting behavior of a function as its argument tends towards a particular value or infinity, which is useful for data scientists who want to evaluate their algorithms' efficiency.
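
For reference, the standard formal definition behind this notation for the common case of n tending to infinity (a textbook statement, not a quotation from the lesson) is:

    f(n) = O\bigl(g(n)\bigr)
    \iff
    \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N} \ \text{such that} \ |f(n)| \le c\,|g(n)| \ \text{for all } n \ge n_0 .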

Difficulty level: Beginner
Duration: 5:19
Speaker: Barton Poulson

This lesson serves as a primer on the fundamental concepts underlying probability. 

Difficulty level: Beginner
Duration: 7:33
Speaker: Barton Poulson

Serving as a good refresher, this lesson explains the maths and logic concepts that are important for programmers to understand, including sets, propositional logic, conditional statements, and more.

This compilation is courtesy of freeCodeCamp.
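
A tiny, invented example of the kind of concepts the compilation reviews, written here in Python: set operations and a conditional statement built from boolean propositions.

    # Set operations
    evens = {0, 2, 4, 6, 8}
    small = {0, 1, 2, 3}
    print(evens & small)   # intersection: {0, 2}
    print(evens | small)   # union: {0, 1, 2, 3, 4, 6, 8}

    # Propositions as booleans inside a conditional statement
    p = True    # "it is raining"
    q = False   # "I have an umbrella"
    if p and not q:
        print("It is raining and I have no umbrella")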

Difficulty level: Beginner
Duration: 1:00:07
Speaker: Shawn Grooms

This lesson provides a useful refresher that will facilitate the use of Matlab, Octave, and various matrix-manipulation and machine-learning software packages.

This lesson was created by RootMath.

Difficulty level: Beginner
Duration: 1:21:30

While the previous lesson in the Neuro4ML course dealt with the mechanisms involved in individual synapses, this lesson discusses how synapses and their neurons' firing patterns may change over time. 

Difficulty level: Intermediate
Duration: 4:48
Speaker: Marcus Ghosh

In this lesson, you will learn how machine learning researchers and computational neuroscientists design and build models of neuronal synapses.
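
As a hedged illustration, and not code from the lesson, one of the simplest synapse models in computational work is an exponentially decaying synaptic variable that jumps each time a presynaptic spike arrives; the time constant and spike times below are arbitrary.

    import numpy as np

    dt = 0.1                           # time step (ms)
    tau = 5.0                          # synaptic decay time constant (ms), arbitrary
    spike_times = [10.0, 12.0, 30.0]   # invented presynaptic spike times (ms)

    t = np.arange(0, 50, dt)
    s = np.zeros_like(t)
    for i in range(1, len(t)):
        s[i] = s[i - 1] * (1 - dt / tau)   # exponential decay between spikes
        if any(abs(t[i] - ts) < dt / 2 for ts in spike_times):
            s[i] += 1.0                    # jump on each presynaptic spike

    print(round(s.max(), 3))               # peak synaptic activation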

Difficulty level: Intermediate
Duration: 8:59
Speaker: Dan Goodman

How does the brain learn? This lecture discusses the roles of development and adult plasticity in shaping functional connectivity.

Difficulty level: Beginner
Duration: 1:08:45
Speaker: Clay Reid

This lesson goes into the mechanisms behind changes in synaptic function created by learning.

Difficulty level: Beginner
Duration: 27:07
Speaker: Carl Petersen

This lesson contains both a lecture and a tutorial component. The lecture (0:00-20:03 of the YouTube video) discusses both the need for intersectional approaches in healthcare and the impact of neglecting intersectionality in patient populations. The lecture is followed by a practical tutorial in both Python and R on how to assess intersectional bias in datasets. Links to relevant code and data are found below.
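
Separately from the tutorial's own code and data linked below, here is a minimal, hypothetical pandas sketch of the underlying idea: compare an outcome rate across intersecting demographic subgroups rather than across single attributes in isolation. The column names and values are invented.

    import pandas as pd

    # Invented toy data
    df = pd.DataFrame({
        "gender":    ["F", "F", "M", "M", "F", "M", "F", "M"],
        "ethnicity": ["A", "B", "A", "B", "A", "A", "B", "B"],
        "outcome":   [1,   0,   1,   1,   1,   0,   0,   1],
    })

    # Marginal view: outcome rate by gender only
    print(df.groupby("gender")["outcome"].mean())

    # Intersectional view: outcome rate by gender and ethnicity jointly
    print(df.groupby(["gender", "ethnicity"])["outcome"].mean())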

Difficulty level: Beginner
Duration: 52:26

This is an introductory lecture on whole-brain modelling, delving into the various spatial scales of neuroscience, neural population models, and whole-brain modelling. Additionally, the clinical applications of building and testing such models are characterized. 

Difficulty level: Intermediate
Duration: 1:24:44
Speaker: John Griffiths

This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).

This lesson corresponds to slides 65-90 of the PDF below. 
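
As a heavily simplified illustration, and not the hierarchical model built in the tutorial, Gaussian belief updating generally takes the form of a precision-weighted prediction error, as in the toy single-level update below (all numbers are invented).

    # Toy Gaussian belief update: the posterior mean moves toward each new
    # observation by a fraction set by the relative precisions (inverse variances).

    mu, pi = 0.0, 1.0                      # prior mean and precision of the belief
    pi_obs = 4.0                           # precision (reliability) of each observation
    observations = [0.8, 1.1, 0.9, 1.2]    # invented data stream

    for y in observations:
        pi = pi + pi_obs                   # precision accumulates with each observation
        lr = pi_obs / pi                   # precision-weighted learning rate
        mu = mu + lr * (y - mu)            # update toward the prediction error
        print(f"belief mean {mu:.3f}, precision {pi:.1f}")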

Difficulty level: Intermediate
Duration: 1:15:04
Speaker: Daniel Hauke

This lecture discusses what defines an integrative approach to research and methods, including the various study designs and models that are appropriate choices when attempting to bridge data domains, a necessity for whole-person modelling.

Difficulty level: Beginner
Duration: 1:28:14
Speaker: Dan Felsky

Similarity Network Fusion (SNF) is a computational method for data integration across various kinds of measurements, aimed at taking advantage of the common as well as complementary information in different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.
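
To give a rough sense of the setting (this is naive kernel averaging, not the full SNF algorithm, which additionally refines each network iteratively through the others), one can build a subject-by-subject similarity network per modality and combine them; the data below are randomly generated stand-ins for EEG and genomic features.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented toy data: the same 20 subjects measured in two modalities
    eeg  = rng.normal(size=(20, 5))    # e.g. 5 EEG-derived features
    geno = rng.normal(size=(20, 8))    # e.g. 8 genomic features

    def affinity(X, sigma=1.0):
        # Gaussian (RBF) similarity between every pair of subjects
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    W_eeg, W_geno = affinity(eeg), affinity(geno)

    # Naive fusion: average the two subject-similarity networks
    W_fused = (W_eeg + W_geno) / 2
    print(W_fused.shape)   # (20, 20) fused subject-by-subject network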

Difficulty level: Intermediate
Duration: 1:21:38
Speaker: Dan Felsky

In this lesson, you will learn about one particular aspect of decision making: reaction times. In other words, how long does it take to make a decision based on a stream of information arriving continuously over time?
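
As a hedged sketch of one common formalization of this question, not necessarily the one used in the lesson: a drift-diffusion model accumulates noisy evidence until it crosses a bound, and the crossing time is the simulated reaction time. All parameters below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_rt(drift=0.5, noise=1.0, bound=1.0, dt=0.001):
        """Accumulate noisy evidence until it hits +bound or -bound."""
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return t, x > 0   # reaction time and which choice was made

    rts = [simulate_rt()[0] for _ in range(200)]
    print(f"mean simulated reaction time: {np.mean(rts):.3f} s")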

Difficulty level: Intermediate
Duration: 6:01
Speaker: Dan Goodman