This video gives a brief introduction to Neuro4ML's lessons on neuromorphic computing - the use of specialized hardware which either directly mimics brain function or is inspired by some aspect of the way the brain computes. 

Difficulty level: Intermediate
Duration: 3:56
Speaker: Dan Goodman

In this lesson, you will learn in more detail about neuromorphic computing, that is, non-standard computational architectures that mimic some aspect of the way the brain works. 

Difficulty level: Intermediate
Duration: 10:08
Speaker: Dan Goodman

This video provides a quick introduction to some neuromorphic sensing devices and how they enable unique, low-power applications.

Difficulty level: Intermediate
Duration: 2:37
Speaker: Dan Goodman

This lesson presents simulation software, designed primarily for GPUs, for spatial model neurons and their networks.

Difficulty level: Intermediate
Duration: 21:15
Speaker: Tadashi Yamazaki

The lecture covers a brief introduction to neuromorphic engineering, some of the neuromorphic networks that the speaker has developed, and their potential applications, particularly in machine learning.

Difficulty level: Intermediate
Duration: 19:57

This lesson provides an overview of the current status in the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology. 

Difficulty level: Intermediate
Duration: 33:41

This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into a description of ontologies and knowledge graphs.

Difficulty level: Intermediate
Duration: 50:18
Speaker: Jeff Grethe

This lecture focuses on ontologies for clinical neurosciences.

Difficulty level: Intermediate
Duration: 21:54

This lightning talk describes an automated pipeline for positron emission tomography (PET) data.

Difficulty level: Intermediate
Duration: 7:27

This lecture gives a detailed description of how to process workflows in the virtual research environment (VRE), including approaches to standardization, metadata, containerization, and the construction and maintenance of scientific pipelines.

Difficulty level: Intermediate
Duration: 1:03:55
Speaker: Patrik Bey

This lesson is the first of three hands-on tutorials as part of the workshop Research Workflows for Collaborative Neuroscience. This tutorial goes over how to visualize data with Scanpy, a scalable toolkit for analyzing single-cell gene expression. 

Difficulty level: Intermediate
Duration: 25:26
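To give a flavour of the kind of workflow the Scanpy tutorial above walks through, here is a minimal sketch using Scanpy's bundled PBMC 3k demo dataset; the dataset, parameters, and plotting choices are illustrative assumptions, not necessarily those used in the tutorial.

```python
import scanpy as sc

# Load a small public demo dataset (downloads on first use) into an AnnData
# object with cells as rows and genes as columns.
adata = sc.datasets.pbmc3k()

# Basic quality filtering and normalization.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# Dimensionality reduction, neighborhood graph, clustering, and visualization.
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata)               # requires the leidenalg package
sc.pl.umap(adata, color="leiden")
```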

In this third and final hands-on tutorial from the Research Workflows for Collaborative Neuroscience workshop, you will learn about workflow orchestration using open source tools like DataJoint and Flyte. 

Difficulty level: Intermediate
Duration: 22:36
Speaker: Daniel Xenes
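As a rough illustration of what workflow orchestration with Flyte looks like, the sketch below chains two toy tasks into a workflow using flytekit; the task and workflow names and the computation are hypothetical and not taken from the tutorial.

```python
from typing import List
from flytekit import task, workflow

@task
def preprocess(raw: List[float]) -> List[float]:
    # Toy preprocessing step: scale the input to a 0-1 range.
    peak = max(raw) or 1.0
    return [x / peak for x in raw]

@task
def summarize(clean: List[float]) -> float:
    # Toy analysis step: reduce the cleaned data to one summary value.
    return sum(clean) / len(clean)

@workflow
def analysis_wf(raw: List[float]) -> float:
    # Flyte builds the execution graph from these task calls.
    return summarize(clean=preprocess(raw=raw))

if __name__ == "__main__":
    # Workflows can be executed locally for testing before registration.
    print(analysis_wf(raw=[1.0, 2.0, 4.0]))
```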

This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.

Difficulty level: Intermediate
Duration: 47:00
Speaker: Dimitri Yatsenko
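The following is a minimal sketch of the kind of pipeline definition DataJoint supports, assuming a configured database connection; the schema, table, and attribute names are hypothetical and not the actual DataJoint Elements pipeline demonstrated in the lecture.

```python
import datajoint as dj

schema = dj.schema("tutorial_pipeline")

@schema
class Session(dj.Manual):
    definition = """
    session_id   : int    # unique recording session
    ---
    session_date : date
    """

@schema
class ActivitySummary(dj.Computed):
    definition = """
    -> Session
    ---
    mean_rate : float     # placeholder summary statistic
    """

    def make(self, key):
        # A real pipeline would load and analyze this session's data here.
        self.insert1(dict(key, mean_rate=0.0))

# Typical usage: insert upstream entries manually, then let DataJoint compute
# the downstream tables for every entry that lacks results.
# Session().insert1(dict(session_id=1, session_date="2024-01-01"))
# ActivitySummary.populate()
```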

This talk discusses data sharing in the context of dementia. It explains why data sharing is important in dementia research and how data is usually shared in the field, and illustrates two examples: the Netherlands Consortium of Dementia Cohorts and the European Platform for Neurodegenerative Diseases.

Difficulty level: Intermediate
Duration: 21:21

The Medical Informatics Platform (MIP) Dementia has been installed in several memory clinics across Europe, allowing them to federate their real-world databases. Open-access research databases such as ADNI (Alzheimer’s Disease Neuroimaging Initiative) have also been integrated, reaching a cumulative case load of more than 5,000 patients (major cognitive disorder due to Alzheimer’s disease, other major cognitive disorder, minor cognitive disorder, and controls). The statistical and machine learning tools implemented in the MIP have allowed researchers to easily conduct federated analyses among Italian memory clinics (Redolfi et al. 2020) and also across borders between the French (Lille), Swiss (Lausanne), and Italian (Brescia) datasets.

Difficulty level: Intermediate
Duration: 16:44
Speaker: Mélanie Leroy

The number of patients with dementia is expected to increase as the population ages, which will create future challenges in the diagnosis and care of patients with dementia. Meeting needs such as early diagnosis and the development of prognostic biomarkers will require large datasets, such as federated datasets on dementia. The EAN Dementia and Cognitive Disorders scientific panel can play an important role as a coordinator, connecting panel members who wish to participate in, for example, consortia.

Difficulty level: Intermediate
Duration: 15:39

This lesson describes the fundamentals of genomics, from the central dogma to the design and implementation of genome-wide association studies (GWAS), to the computation, analysis, and interpretation of polygenic risk scores.

Difficulty level: Intermediate
Duration: 1:28:16
Speaker: Dan Felsky
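Since the lecture covers how polygenic risk scores are computed, here is a toy worked example of the basic formula, a PRS as the sum over variants of GWAS effect sizes times effect-allele dosages; the numbers are invented for illustration and do not come from the lecture.

```python
import numpy as np

# Effect sizes (e.g. log odds ratios) for 4 risk variants from a hypothetical GWAS.
beta = np.array([0.12, -0.05, 0.30, 0.08])

# Effect-allele dosages (0, 1, or 2 copies) for 3 individuals.
dosage = np.array([
    [0, 1, 2, 1],
    [1, 0, 0, 2],
    [2, 2, 1, 0],
])

# PRS_i = sum_j beta_j * dosage_ij : one weighted sum per individual.
prs = dosage @ beta
print(prs)  # [0.63, 0.28, 0.44]
```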

This is a hands-on tutorial on PLINK, the open source whole genome association analysis toolset. The aims of this tutorial are to teach users how to perform basic quality control on genetic datasets, as well as to identify and understand GWAS summary statistics. 

Difficulty level: Intermediate
Duration: 1:27:18
Speaker: Dan Felsky
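As an illustration of the kind of quality-control step the PLINK tutorial covers, the sketch below drives PLINK from Python with commonly used filter flags; the file names are placeholders and the thresholds are common defaults, not necessarily those chosen in the tutorial.

```python
import subprocess

# Apply standard variant- and sample-level QC filters to a binary fileset
# (cohort.bed/.bim/.fam) and write the filtered data to cohort_qc.*.
subprocess.run(
    [
        "plink",
        "--bfile", "cohort",
        "--maf", "0.01",    # drop rare variants (minor allele frequency < 1%)
        "--geno", "0.05",   # drop variants with >5% missing genotypes
        "--mind", "0.05",   # drop samples with >5% missing genotypes
        "--hwe", "1e-6",    # Hardy-Weinberg equilibrium filter
        "--make-bed",
        "--out", "cohort_qc",
    ],
    check=True,
)
```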

This is a tutorial on using the open-source software PRSice to calculate a set of polygenic risk scores (PRS) for a study sample. Users will also learn how to read PRS into R, visualize distributions, and perform basic association analyses. 

Difficulty level: Intermediate
Duration: 1:53:34
Speaker: Dan Felsky
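The tutorial performs the downstream steps in R; as a rough equivalent, the Python sketch below reads a per-individual score file, plots the PRS distribution, and fits a basic logistic association model. The file and column names ("study.best", "IID", "PRS", "pheno") are assumptions about the output layout, not a documented PRSice interface.

```python
import pandas as pd
import statsmodels.api as sm

# Read per-individual scores and a binary phenotype table, then merge on IID.
scores = pd.read_csv("study.best", sep=r"\s+")
pheno = pd.read_csv("phenotypes.csv")   # assumed columns: IID, pheno (0/1)
df = scores.merge(pheno, on="IID")

# Visualize the distribution of scores (uses matplotlib via pandas).
df["PRS"].hist(bins=40)

# Basic PRS-phenotype association via logistic regression.
X = sm.add_constant(df[["PRS"]])
fit = sm.Logit(df["pheno"], X).fit()
print(fit.summary())
```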

This lesson is an overview of transcriptomics, from fundamental concepts of the central dogma and RNA sequencing at the single-cell level, to how genetic expression underlies diversity in cell phenotypes. 

Difficulty level: Intermediate
Duration: 1:29:08