
This is an introductory lecture on whole-brain modelling, moving from the various spatial scales of neuroscience through neural population models to whole-brain models themselves. It also characterizes the clinical applications of building and testing such models.

Difficulty level: Intermediate
Duration: 1:24:44
Speaker: John Griffiths
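
As one concrete example of the kind of neural population model this lecture introduces, a simplified form of the classic Wilson-Cowan equations describes the mean activity of coupled excitatory and inhibitory populations. This specific model is an illustrative choice, not necessarily the one used in the lecture:

```latex
% Simplified Wilson-Cowan neural population model (illustrative example only)
\begin{align}
\tau_E \frac{dE}{dt} &= -E + S\!\left(w_{EE} E - w_{EI} I + P_E\right) \\
\tau_I \frac{dI}{dt} &= -I + S\!\left(w_{IE} E - w_{II} I + P_I\right)
\end{align}
% E, I: mean firing rates of the excitatory and inhibitory populations
% S: sigmoidal activation function; w_{XY}: coupling weights
% P_E, P_I: external inputs; \tau_E, \tau_I: population time constants
```

In whole-brain models, many such population units are coupled through an empirically measured connectome to simulate large-scale brain activity.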

This lecture covers FAIR atlases, including their background and construction, and explains how atlases can be created in line with the FAIR principles.

Difficulty level: Beginner
Duration: 14:24
Speaker: Heidi Kleven

This lesson discusses the need for and approaches to integrating data across the various temporal and spatial scales in which brain activity can be measured. 

Difficulty level: Beginner
Duration: 1:35:37

This lesson consists of lecture and tutorial components, focusing on resources and tools which facilitate multi-scale brain modeling and simulation. 

Difficulty level: Beginner
Duration: 3:46:21

In this talk, challenges of handling complex neuroscientific data are discussed, as well as tools and services for the annotation, organization, storage, and sharing of these data. 

Difficulty level: Beginner
Duration: 21:49
Speaker: Thomas Wachtler

This lecture describes the neuroscience data repository G-Node Infrastructure (GIN), which provides platform-independent data access and enables easy data publishing.

Difficulty level: Beginner
Duration: 22:23
Speaker: Michael Sonntag

This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.

Difficulty level: Intermediate
Duration: 47:00
Speaker: Dimitri Yatsenko
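
For readers who have not seen DataJoint before, the sketch below shows roughly what defining a small pipeline looks like in Python. It is a minimal, hypothetical example: the schema name, tables, and attributes are invented for illustration and are not taken from the lecture's demonstration.

```python
import datajoint as dj

# Connect to a database and create a schema (host and schema name are placeholders).
dj.config['database.host'] = 'localhost'
schema = dj.Schema('tutorial_pipeline')

@schema
class Session(dj.Manual):
    definition = """
    # An experimental session entered by hand
    subject_id   : int          # subject identifier
    session_date : date         # date of recording
    ---
    experimenter : varchar(64)  # who ran the session
    """

@schema
class ActivitySummary(dj.Computed):
    definition = """
    # Automatically populated summary statistics per session
    -> Session
    ---
    mean_rate : float  # hypothetical mean firing rate (Hz)
    """

    def make(self, key):
        # A real pipeline would load and process the session's data here;
        # this sketch just inserts a placeholder value.
        self.insert1(dict(key, mean_rate=0.0))

# ActivitySummary.populate() would then fill the computed table for every
# Session entry that has not yet been processed.
```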

This lecture discusses how FAIR practices affect personalized data models, including workflows, challenges, and how to improve these practices.

Difficulty level: Beginner
Duration: 13:16
Speaker: Kelly Shen

This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.

Difficulty level: Beginner
Duration: 15:14

This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.

This lesson corresponds to slides 1-64 in the PDF below. 

Difficulty level: Intermediate
Duration: 1:28:14
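
For reference, the core formula behind the lesson's material is Bayes' rule, which describes how a prior belief over hypotheses H is updated after observing data D (the notation here is generic, not taken from the slides):

```latex
% Bayes' rule: posterior belief in hypothesis H after observing data D
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
\qquad
P(D) = \sum_{H'} P(D \mid H')\, P(H')
```

In Bayesian accounts of perception and learning, P(H) encodes prior expectations, P(D | H) the likelihood of the sensory evidence, and P(H | D) the updated (posterior) belief.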

This lecture covers post-war developments in the science of the mind, focusing first on the cognitive revolution and concluding with living machines.

Difficulty level: Beginner
Duration: 2:24:35

This lecture explains the concept of federated analysis in the context of medical data and its associated challenges. It also presents an example of hospital federations via the Medical Informatics Platform.

Difficulty level: Intermediate
Duration: 19:15
Speaker: Yannis Ioannidis

This lesson contains both a lecture and a tutorial component. The lecture (0:00-20:03 of YouTube video) discusses both the need for intersectional approaches in healthcare as well as the impact of neglecting intersectionality in patient populations. The lecture is followed by a practical tutorial in both Python and R on how to assess intersectional bias in datasets. Links to relevant code and data are found below. 

Difficulty level: Beginner
Duration: 52:26
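
As a rough illustration of what such an assessment can look like (this is not the tutorial's code, which is linked from the lesson; the dataset and column names below are hypothetical), one can compare outcome rates across intersecting subgroups rather than along single attributes:

```python
import pandas as pd

# Hypothetical dataset: the columns 'sex', 'ethnicity', and 'diagnosed'
# are illustrative and do not come from the tutorial's data.
df = pd.DataFrame({
    'sex':       ['F', 'F', 'M', 'M', 'F', 'M', 'F', 'M'],
    'ethnicity': ['A', 'B', 'A', 'B', 'A', 'A', 'B', 'B'],
    'diagnosed': [1,   0,   1,   1,   0,   1,   0,   1],
})

# Single-axis summaries can hide disparities that only appear at the
# intersection of attributes, so group by both attributes at once.
intersectional = (
    df.groupby(['sex', 'ethnicity'])['diagnosed']
      .agg(n='size', diagnosis_rate='mean')
      .reset_index()
)
print(intersectional)
```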

This lesson is an overview of transcriptomics, from fundamental concepts of the central dogma and RNA sequencing at the single-cell level, to how genetic expression underlies diversity in cell phenotypes. 

Difficulty level: Intermediate
Duration: 1:29:08

In this lecture, you will learn about current methods, approaches, and challenges to studying human neuroanatomy, particularly through the lens of neuroimaging data such as fMRI and diffusion tensor imaging (DTI).

Difficulty level: Intermediate
Duration: 1:35:14
Speaker: Matt Glasser

This lesson provides a thorough description of neuroimaging development over time, both conceptually and technologically. You will learn about the fundamentals of imaging techniques such as MRI and PET, as well as how the resultant data may be used to generate novel data visualization schemas. 

Difficulty level: Beginner
Duration: 1:43:57
Speaker: Jack Van Horn

This lecture covers a wide range of aspects regarding neuroinformatics and data governance, describing both their historical developments and current trajectories. Particular tools, platforms, and standards to make your research more FAIR are also discussed.

Difficulty level: Beginner
Duration: 54:58
Speaker: Franco Pestilli

The previous lesson of this course described how researchers acquire neural data; this lesson discusses how to interpret and analyse those data.

Difficulty level: Intermediate
Duration: 9:24
Speaker: Marcus Ghosh

This lesson gives a quick walkthrough of the Tidyverse, an "opinionated" collection of R packages designed for data science, including the use of readr, dplyr, tidyr, and ggplot2.

Difficulty level: Beginner
Duration: 1:01:39
Speaker: Thomas Mock

In this session, the Medical Informatics Platform (MIP) and its federated analytics are presented. The current and future analytical tools implemented in the MIP are detailed, along with the constructs, tools, processes, and restrictions that shape the solution.

The MIP is a platform providing advanced federated analytics for diagnosis and research in clinical neuroscience, targeting clinicians, clinical scientists, and clinical data scientists. It is designed to help users adopt advanced analytics and explore harmonized medical data from neuroimaging, neurophysiology, and medical records, as well as research cohort datasets, without transferring the original clinical data. The MIP can be thought of as a virtual database: it seamlessly presents aggregated data from distributed sources and provides access to, and analysis of, imaging and clinical data securely stored in hospitals, research archives, and public databases, leveraging and re-using decentralized patient data and research cohort datasets in place. Integrated statistical analysis tools and machine learning algorithms are exposed over the harmonized, federated medical data.

Difficulty level: Intermediate
Duration: 15:05
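
To make the idea of federated analytics concrete, here is a deliberately simplified sketch (not the MIP's actual protocol or API) of computing a summary statistic across hospital sites so that only aggregates, never patient-level records, leave each site:

```python
# Minimal illustration of federated aggregation: each "site" computes local
# aggregates, and only those aggregates are shared with the coordinating node.
# This is a conceptual sketch, not the Medical Informatics Platform's protocol.

def local_aggregate(values):
    """Computed inside a hospital; raw values never leave the site."""
    return {'n': len(values), 'sum': sum(values)}

def federated_mean(site_aggregates):
    """Combined centrally from per-site aggregates only."""
    total_n = sum(agg['n'] for agg in site_aggregates)
    total_sum = sum(agg['sum'] for agg in site_aggregates)
    return total_sum / total_n

# Hypothetical patient ages held at three hospitals (illustrative data only).
site_data = {
    'hospital_a': [61, 72, 58],
    'hospital_b': [45, 66],
    'hospital_c': [70, 74, 69, 52],
}

aggregates = [local_aggregate(vals) for vals in site_data.values()]
print(federated_mean(aggregates))  # mean age across all sites: 63.0
```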