
This lecture covers the architecture and convolution operation of traditional convolutional neural networks, the characteristics of graphs and graph convolution, and spectral graph convolutional networks, starting with the implementation of spectral convolution through spectral networks. It then discusses how the alternative convolutional definition of template matching applies to graphs, leading to spatial networks. This lecture is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 2:00:22
Speaker: Xavier Bresson
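To make the topic of the lecture above concrete, the following is a minimal sketch of one first-order graph-convolution layer in the style popularized by Kipf and Welling, which is closely related to the spectral constructions the lecture discusses. The 3-node graph, the random weights, and all variable names are illustrative assumptions, not material from the lecture itself.

```python
import numpy as np

# One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
# The graph and weights below are purely illustrative.

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)    # adjacency of a 3-node path graph
A_hat = A + np.eye(3)                     # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization

H = np.eye(3)                             # one-hot node features
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))           # layer weights (3 -> 2 features)

H_next = np.maximum(A_norm @ H @ W, 0.0)  # ReLU(A_norm @ H @ W)
```

Each node's new representation mixes its own features with its neighbors' before the shared linear map and nonlinearity, which is the spatial "template matching" view the lecture contrasts with the spectral one.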

This tutorial covers the concept of graph convolutional networks and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-5 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 57:33
Speaker: Alfredo Canziani

This lecture covers the concept of model predictive control and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:10:22
Speaker: Alfredo Canziani

This lecture covers the concepts of emulating kinematics from observations and training a policy. It is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:01:21
Speaker: Alfredo Canziani

This lecture covers the concept of predictive policy learning under uncertainty and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-6 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:14:44
Speaker: Alfredo Canziani

This lecture covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-7 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:29:05
Speaker: Aaron DeFazio
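As a pointer to what the optimization lecture above covers, here is a minimal sketch of gradient descent with heavy-ball momentum on a one-dimensional quadratic. The objective, step size, and momentum coefficient are illustrative choices, not values taken from the lecture.

```python
# Gradient descent with momentum on f(w) = (w - 3)^2, whose minimum is w = 3.
# All names and hyperparameters below are illustrative.

def grad(w):
    return 2.0 * (w - 3.0)        # derivative of (w - 3)^2

def sgd_momentum(w0, lr=0.1, beta=0.9, steps=300):
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + grad(w)    # accumulate a velocity from past gradients
        w = w - lr * v            # step against the velocity, not the raw gradient
    return w

w_star = sgd_momentum(w0=0.0)     # converges toward 3.0
```

With `beta=0` this reduces to plain gradient descent; the velocity term lets past gradients accelerate progress along consistent descent directions, which is the intuition the lecture develops.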

This lecture covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this module include Modules 1-7 of this course and Introduction to Data Science or a graduate-level machine learning course.

Difficulty level: Advanced
Duration: 1:51:32
Speaker: Alfredo Canziani


Blake Richards gives an introduction to deep learning, framed in terms of inductive biases, with emphasis on correctly matching deep learning methods to the right research questions.


The lesson was presented in the context of the BrainHack School 2020.

Difficulty level: Beginner
Duration: 01:35:12
Speaker: Blake Richards

As part of NeuroHackademy 2021, Noah Benson gives an introduction to PyTorch, one of the two most common software packages for deep learning applications in the neurosciences.

Difficulty level: Beginner
Duration: 00:50:40
Speaker: Noah Benson

In this hands-on tutorial, Dr. Robert Guangyu Yang works through a number of coding exercises showing how RNNs can easily be used to study cognitive neuroscience questions, with a quick demonstration of how to train and analyze RNNs on various cognitive neuroscience tasks. Familiarity with Python and basic knowledge of PyTorch are assumed.

Difficulty level: Beginner
Duration: 00:26:38
Speaker: Robert Guangyu Yang
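To illustrate the recurrence underlying the RNN tutorial above, here is a minimal sketch of a vanilla RNN cell's forward pass. The scalar weights are illustrative assumptions; the tutorial itself uses PyTorch rather than this hand-rolled version.

```python
import math

# Vanilla RNN recurrence on scalar inputs: h_t = tanh(w_x * x_t + w_h * h_{t-1} + b).
# Weights here are fixed, illustrative values (no training).

def rnn_forward(xs, w_x=0.5, w_h=0.8, b=0.0):
    h = 0.0                                # initial hidden state
    hs = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)  # hidden state carries history forward
        hs.append(h)
    return hs

states = rnn_forward([1.0, 0.0, -1.0])     # one hidden state per time step
```

Because each hidden state depends on the previous one, the sequence of states can be analyzed as a dynamical system, which is the kind of analysis the tutorial applies to trained networks.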

This lecture provides an overview of depression (epidemiology and course of the disorder), clinical presentation, somatic co-morbidity, and treatment options.

Difficulty level: Beginner
Duration: 37:51

This lecture provides an introduction to the study of eye-tracking in humans. 

Difficulty level: Beginner
Duration: 34:05
Speaker: Ulrich Ettinger

Computational models provide a framework for integrating data across spatial scales and for exploring hypotheses about the biological mechanisms underlying neuronal and network dynamics. However, as models increase in complexity, additional barriers emerge to the creation, exchange, and re-use of models. Successful projects have created standards for describing complex models in neuroscience and provide open source tools to address these issues. This lecture provides an overview of these projects and makes a case for expanded use of resources in support of reproducibility and validation of models against experimental data.

Difficulty level: Beginner
Duration: 1:00:39
Speaker: Sharon Crook

This lecture introduces reproducible research, providing an overview of the core skills and practical solutions required to practice it. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.

Difficulty level: Beginner
Duration: 1:25:17
Speaker: Fernando Perez

Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go? This lecture covers the biomedical researcher's perspective on FAIR data sharing and the importance of finding better ways to manage large datasets.

Difficulty level: Beginner
Duration: 10:51
Speaker: Adam Ferguson

This lecture, part of the same session on FAIR perspectives in neuroscience, covers multiple aspects of FAIR neuroscience data: what makes it unique, the challenges to making it FAIR, the importance of overcoming these challenges, and how data governance comes into play.

Difficulty level: Beginner
Duration: 14:56
Speaker: Damian Eke

Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research touching on each of the FAIR elements through brief vignettes of ongoing research and challenges faced by the community to enact these principles. This lecture covers the processes, benefits, and challenges involved in designing, collecting, and sharing FAIR neuroscience datasets.

Difficulty level: Beginner
Duration: 11:35

This lecture, part of the same session on FAIRness in neuroimaging research, covers the benefits and difficulties involved in re-using open datasets, and how metadata is important to the process.

Difficulty level: Beginner
Duration: 11:20
Speaker: Elizabeth DuPre


This lecture provides an overview of Addgene, a tool that embraces the FAIR principles developed by members of the INCF community, covering Addgene's mission and available resources.

Difficulty level: Beginner
Duration: 12:05
Speaker: Joanne Kamens

The International Brain Initiative (IBI) is a consortium of the world’s major large-scale brain initiatives and other organizations with a vested interest in catalyzing and advancing neuroscience research through international collaboration and knowledge sharing. This session introduces the IBI and the current efforts of the Data Standards and Sharing Working Group, with a view to gaining input from the wider neuroscience and neuroinformatics community.


This lecture covers the IBI Data Standards and Sharing Working Group, including its history, aims, and projects.

Difficulty level: Beginner
Duration: 3:58
Speaker: Kenji Doya