
An overview of some of the essential concepts in neuropharmacology (e.g., receptor binding, agonism, antagonism), an introduction to pharmacodynamics and pharmacokinetics, and an overview of the drug discovery process as it relates to diseases of the central nervous system.

Difficulty level: Beginner
Duration: 45:47
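
As a concrete complement to the receptor-binding material, here is a minimal sketch of fractional receptor occupancy using the standard Hill equation; the lecture does not prescribe any code, and the function and parameter names below are illustrative assumptions.

```python
import numpy as np

def occupancy(conc, kd, hill=1.0):
    """Fractional receptor occupancy from the Hill equation."""
    return conc**hill / (kd**hill + conc**hill)

# drug concentrations from 1 nM to 100 uM; half the receptors
# are occupied when the concentration equals Kd
doses = np.logspace(-9, -4, 6)
print(occupancy(doses, kd=1e-6))
```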

An introduction to the types of glial cells; homeostasis (their influence on cerebral blood flow and on neurons); the insulation and protection of axons (the myelin sheath and nodes of Ranvier); and microglia and the reactions of the CNS to injury.

Difficulty level: Beginner
Duration: 40:32

The tutorial is intended primarily for beginners, but it will also be beneficial to experimentalists who understand electroencephalography and event-related techniques but need additional knowledge of the annotation, standardization, long-term storage, and publication of data.

Difficulty level: Beginner
Duration: 35:30
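
As one hedged illustration of programmatic annotation (the tutorial itself does not mandate a tool), the open-source MNE-Python library can attach machine-readable annotations to a recording; the synthetic data below stands in for a real EEG file.

```python
import numpy as np
import mne

# synthetic one-minute, single-channel recording standing in for real EEG
info = mne.create_info(ch_names=["EEG 001"], sfreq=250.0, ch_types="eeg")
raw = mne.io.RawArray(np.random.randn(1, 250 * 60), info)

# machine-readable annotations: onset (s), duration (s), free-text label
raw.set_annotations(mne.Annotations(onset=[5.0, 30.0],
                                    duration=[1.0, 2.5],
                                    description=["stimulus/visual", "artifact/blink"]))
print(raw.annotations)
```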

A lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Advanced
Duration: 50:28
Speaker: Pierre Bellec
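
To make the BASC idea concrete, below is a minimal sketch of bootstrap aggregation of cluster solutions on toy data; it substitutes k-means and a naive time-point bootstrap for the hierarchical clustering and circular block bootstrap of the actual BASC pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_regions, n_timepoints, k, n_boot = 50, 200, 5, 30
data = rng.standard_normal((n_regions, n_timepoints))  # toy "fMRI" time series

stability = np.zeros((n_regions, n_regions))
for _ in range(n_boot):
    idx = rng.integers(0, n_timepoints, n_timepoints)  # resample time points
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(data[:, idx])
    stability += labels[:, None] == labels[None, :]    # co-clustering counts
stability /= n_boot  # fraction of replicates in which two regions cluster together

# consensus step: cluster the stability matrix itself to get stable parcels
parcels = KMeans(n_clusters=k, n_init=10).fit_predict(stability)
```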

Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: How is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?


This lecture covers FAIR atlases: their background, their construction, and how they can be created in line with the FAIR principles.

Difficulty level: Beginner
Duration: 14:24
Speaker: Heidi Kleven


This lecture covers multiple aspects of FAIR neuroscience data: what makes it unique, the challenges to making it FAIR, the importance of overcoming these challenges, and how data governance comes into play.

Difficulty level: Beginner
Duration: 14:56
Speaker: Damian Eke

Over the last three decades, neuroimaging research has made large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. Awareness of rigor and reproducibility has increased with the advent of funding mandates and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and the challenges faced by the community in enacting these principles.


This lecture provides guidance on the ethical considerations the clinical neuroimaging community faces when applying the FAIR principles to their research. This lecture was part of the FAIR approaches for neuroimaging research session at the 2020 INCF Assembly.

Difficulty level: Beginner
Duration: 13:11
Speaker: Gustav Nilsonne

The course is an introduction to the field of electrophysiology standards, infrastructure, and initiatives.


This lecture gives an overview of the Australian Electrophysiology Data Analytics Platform (AEDAPT): how it works, how to scale it, and how it fits into the FAIR ecosystem.

Difficulty level: Beginner
Duration: 18:56
Speaker: Tom Johnstone

This is the introductory module of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 50:17

This module covers the concepts of gradient descent and the backpropagation algorithm and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:51:03
Speaker: Yann LeCun
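
As a minimal sketch of these two concepts, assuming PyTorch and a toy linear regression (neither prescribed by the module): the backward pass computes the gradients, and the update rule is plain gradient descent.

```python
import torch

x = torch.linspace(-1, 1, 100).unsqueeze(1)   # inputs
y = 3 * x + 0.5 + 0.1 * torch.randn_like(x)   # noisy targets

w = torch.randn(1, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for step in range(200):
    loss = ((x @ w + b - y) ** 2).mean()  # forward pass: mean squared error
    loss.backward()                       # backpropagation: d(loss)/dw, d(loss)/db
    with torch.no_grad():                 # gradient descent update
        w -= 0.1 * w.grad
        b -= 0.1 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())  # should approach 3 and 0.5
```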

This lecture covers the concept of parameter sharing in recurrent and convolutional nets and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:59:47
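
A minimal sketch of parameter sharing, assuming PyTorch: a convolution applies the same small set of kernels at every position, and a recurrent net applies the same weight matrices at every time step, so parameter counts do not grow with input length.

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=1, out_channels=4, kernel_size=3)
x = torch.randn(8, 1, 100)         # batch of 1-D signals, length 100
y = conv(x)                        # the same 4 kernels slide over all positions

rnn = nn.RNN(input_size=10, hidden_size=16, batch_first=True)
seq = torch.randn(8, 25, 10)       # 25 time steps
out, h = rnn(seq)                  # the same W_ih and W_hh are reused at every step

# parameter counts are independent of the signal/sequence length
print(sum(p.numel() for p in conv.parameters()),
      sum(p.numel() for p in rnn.parameters()))
```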

This lecture covers convolutional nets in practice and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 51:40
Speaker: Yann LeCun
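
A minimal sketch of the kind of network the lecture discusses, assuming PyTorch and MNIST-sized 28x28 inputs (an illustrative assumption, not the lecture's exact architecture): stacked convolution, nonlinearity, and pooling stages followed by a linear classifier.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),   # a 28x28 input is pooled down to 7x7
)
logits = net(torch.randn(4, 1, 28, 28))  # batch of 4 grayscale images
print(logits.shape)                      # torch.Size([4, 10])
```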

This lecture covers the properties of natural signals and convolutional nets in practice and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:09:12
Speaker: Alfredo Canziani
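
One property of natural signals that convolutional nets exploit is stationarity, which shows up as translation equivariance: shifting the input shifts the output by the same amount. A minimal sketch, assuming PyTorch:

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(1, 1, kernel_size=5, padding=2, bias=False)
x = torch.zeros(1, 1, 32)
x[0, 0, 10] = 1.0                          # an impulse at position 10
shifted = torch.roll(x, shifts=4, dims=2)  # the same impulse, 4 steps later

# translation equivariance: conv(shift(x)) == shift(conv(x))
print(torch.allclose(torch.roll(conv(x), 4, dims=2), conv(shifted), atol=1e-6))
```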

This lecture covers recurrent neural networks, vanilla and gated (LSTM), and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:05:36
Speaker: Alfredo Canziani
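
A minimal sketch contrasting the two architectures, assuming PyTorch: both map a sequence to a sequence of hidden states, but the LSTM additionally carries a gated cell state.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 30, 12)  # batch of 8 sequences, 30 steps, 12 features

vanilla = nn.RNN(input_size=12, hidden_size=32, batch_first=True)
lstm = nn.LSTM(input_size=12, hidden_size=32, batch_first=True)

out_v, h_n = vanilla(x)          # hidden states only
out_l, (h_l, c_l) = lstm(x)      # the gates also maintain a cell state c_l
print(out_v.shape, out_l.shape)  # both torch.Size([8, 30, 32])
```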

This is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning, Parameter Sharing, and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Intermediate
Duration: 1:51:30
Speaker: Yann LeCun
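
A minimal sketch of latent-variable EBM inference, assuming PyTorch and a toy quadratic energy E(y, z) = ||y - Wz||^2 (the linear decoder is an illustrative assumption, not the lecture's model): inference searches for the latent z that minimizes the energy, and the resulting minimum plays the role of the free energy F(y) = min_z E(y, z).

```python
import torch

torch.manual_seed(0)
W = torch.randn(5, 2)   # toy linear "decoder" from latent z to observation y
y = torch.randn(5)      # the observation we infer a latent code for

z = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.01)
for _ in range(500):                   # inference = gradient descent on z
    opt.zero_grad()
    energy = ((y - W @ z) ** 2).sum()  # E(y, z)
    energy.backward()
    opt.step()

print(((y - W @ z) ** 2).sum().item())  # free energy F(y) = min_z E(y, z)
```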