This lecture discusses the use of knowledge graphs in hospitals and the related challenges of semantic interoperability.
This lightning talk describes an automated pipeline for positron emission tomography (PET) data.
This lecture covers positron emission tomography (PET) imaging and the Brain Imaging Data Structure (BIDS), and how they work together within the PET-BIDS standard to make neuroscience more open and FAIR.
This module covers many of the types of non-invasive neurotech and neuroimaging devices including electroencephalography (EEG), electromyography (EMG), electroneurography (ENG), magnetoencephalography (MEG), and more.
This lesson gives an introduction to the Mathematics chapter of Datalabcc's Foundations in Data Science series.
This lesson serves as a primer on elementary algebra.
This lesson provides a primer on linear algebra, aiming to demonstrate how such operations are fundamental to many data science methods.
In this lesson, users will learn about systems of linear equations and follow along with some practical use cases.
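As a rough illustration of the topics in these two lessons (the code is not drawn from the lessons themselves), the following Python sketch uses NumPy to perform a matrix-vector product and solve a small system of linear equations:

```python
# Illustrative sketch only; not taken from the lesson material.
import numpy as np

# Matrix-vector multiplication, a core operation behind many
# data science methods (e.g., computing linear-model predictions).
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
w = np.array([0.5, -1.0])
print(X @ w)  # [-1.5 -2.5]

# Solving the system of linear equations A x = b:
#   2x + y = 5
#    x - y = 1
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])
print(np.linalg.solve(A, b))  # [2. 1.]
```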
This talk gives a primer on calculus, emphasizing its role in data science.
This lesson clarifies how calculus relates to optimization in a data science context.
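To give a flavor of how calculus connects to optimization (a sketch assuming a simple one-dimensional objective, not an example from the lesson), gradient descent repeatedly steps against the derivative:

```python
# Illustrative sketch: gradient descent on f(x) = (x - 3)^2.
# The derivative f'(x) = 2(x - 3) points uphill, so stepping in the
# opposite direction moves us toward the minimum at x = 3.
def f_prime(x):
    return 2 * (x - 3)

x = 0.0    # starting guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * f_prime(x)

print(round(x, 4))  # ~3.0, the minimizer of f
```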
This lesson covers Big O notation, a mathematical notation that describes the limiting behavior of a function as its argument tends toward a particular value or infinity; it is useful for data scientists who want to evaluate their algorithms' efficiency.
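As a hands-on companion to that definition (an illustrative sketch, not part of the lesson), the following compares an O(n) routine with an O(n²) one; doubling the input size roughly doubles the first runtime and quadruples the second:

```python
# Illustrative sketch only; timings vary by machine.
import time

def linear_scan(items):      # O(n): touches each element once
    return sum(items)

def pairwise_sums(items):    # O(n^2): touches every ordered pair
    return sum(a + b for a in items for b in items)

for n in (500, 1000, 2000):
    data = list(range(n))
    t0 = time.perf_counter()
    linear_scan(data)
    t1 = time.perf_counter()
    pairwise_sums(data)
    t2 = time.perf_counter()
    # Doubling n roughly doubles the O(n) time but quadruples the O(n^2) time.
    print(f"n={n}: O(n) {t1 - t0:.6f}s, O(n^2) {t2 - t1:.6f}s")
```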
This lesson serves as a primer on the fundamental concepts underlying probability.
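One such fundamental concept, sketched below with hypothetical die rolls rather than material from the lesson, is that empirical frequencies approach theoretical probabilities as the number of trials grows:

```python
# Illustrative sketch: the law of large numbers with a fair die.
import random

random.seed(42)
p_theoretical = 1 / 6  # probability of rolling a six

for n in (100, 10_000, 1_000_000):
    sixes = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n={n}: empirical {sixes / n:.4f} vs theoretical {p_theoretical:.4f}")
```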
Serving as a good refresher, this lesson explains the maths and logic concepts that are important for programmers to understand, including sets, propositional logic, conditional statements, and more.
This compilation is courtesy of freeCodeCamp.
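For a taste of those concepts in code (an illustrative sketch, not taken from the freeCodeCamp material), Python's built-in sets and booleans cover set operations and simple propositional logic:

```python
# Illustrative sketch only; not from the lesson.
evens = {0, 2, 4, 6, 8}
small = {0, 1, 2, 3, 4}

print(evens & small)  # intersection: {0, 2, 4}
print(evens | small)  # union of the two sets
print(evens - small)  # difference: elements in evens but not in small

# Propositional logic: "p implies q" is equivalent to "(not p) or q",
# which a small truth table confirms.
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5} p->q={(not p) or q}")
```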
This lesson provides a useful refresher that will facilitate the use of MATLAB, Octave, and various matrix-manipulation and machine-learning software packages.
This lesson was created by RootMath.
This lesson breaks down the principles of Bayesian inference and how it relates to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.
This lesson corresponds to slides 1-64 in the PDF below.
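The core computation behind the lesson's topic can be sketched in a few lines (with made-up numbers, not the lesson's own model): Bayes' rule weighs a prior belief by the likelihood of an observation:

```python
# Illustrative sketch of Bayes' rule with hypothetical numbers.
prior = {"cause_A": 0.5, "cause_B": 0.5}       # P(hypothesis)
likelihood = {"cause_A": 0.8, "cause_B": 0.3}  # P(observation | hypothesis)

# posterior is proportional to likelihood * prior
unnormalized = {h: likelihood[h] * prior[h] for h in prior}
evidence = sum(unnormalized.values())          # P(observation)
posterior = {h: v / evidence for h, v in unnormalized.items()}

print(posterior)  # cause_A ~ 0.727, cause_B ~ 0.273
```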
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 of the PDF below.
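To convey the flavor of a belief trajectory (a single-level, precision-weighted sketch with made-up numbers, far simpler than a full Hierarchical Gaussian Filter), a Gaussian belief can be updated observation by observation:

```python
# Illustrative sketch: sequential Gaussian belief updating.
# This is NOT a full HGF; it is a one-level precision-weighted update.
import random

random.seed(0)
true_value = 1.0
obs_noise_var = 0.5

mu, var = 0.0, 10.0  # prior belief: mean and variance
trajectory = [mu]
for _ in range(20):
    y = random.gauss(true_value, obs_noise_var ** 0.5)  # noisy observation
    k = var / (var + obs_noise_var)  # precision-weighted learning rate
    mu += k * (y - mu)               # belief moves toward the observation
    var *= 1 - k                     # uncertainty shrinks with each update
    trajectory.append(mu)

print([round(m, 2) for m in trajectory])  # belief converges toward 1.0
```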
This lecture surveys post-war developments in the science of the mind, focusing first on the cognitive revolution and concluding with living machines.
This lecture provides an overview of depression (epidemiology and course of the disorder), clinical presentation, somatic co-morbidity, and treatment options.
This lesson is part 1 of 2 of a tutorial on statistical models for neural data.
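As a minimal taste of the subject (a sketch with simulated data, not the tutorial's own models), spike counts in fixed time bins are often modeled as Poisson, for which the maximum-likelihood rate estimate is simply the mean count:

```python
# Illustrative sketch: a Poisson model of binned spike counts.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 5.0                             # hypothetical mean spikes per bin
counts = rng.poisson(true_rate, size=1000)  # simulated spike counts

rate_mle = counts.mean()  # MLE of a Poisson rate is the sample mean
print(f"true rate {true_rate}, estimated rate {rate_mle:.2f}")

# Log-likelihood under a candidate rate (dropping the constant log k! term):
def poisson_loglik(rate, counts):
    return np.sum(counts * np.log(rate) - rate)

print(poisson_loglik(rate_mle, counts) >= poisson_loglik(4.0, counts))  # True
```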
What is the difference between attention and consciousness? This lecture describes the scientific meaning of consciousness, traces the search for the neural correlates of visual consciousness, and explores the possibility of consciousness in other beings and even non-biological structures.