This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health.
This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices.
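As a minimal sketch of the end product of such a pipeline (plain NumPy, with random placeholder data standing in for preprocessed, parcellated time series), a functional connectome can be estimated as a region-by-region correlation matrix:

```python
import numpy as np

# Placeholder for a preprocessed fMRI run reduced to regional time series,
# shaped (n_timepoints, n_regions); in a real pipeline these would come
# from applying a parcellation atlas to the 4D functional image.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((200, 90))

# A common functional connectome: the Pearson correlation between every
# pair of regional time series.
connectome = np.corrcoef(timeseries.T)  # shape (n_regions, n_regions)
assert connectome.shape == (90, 90)
```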
This lecture and tutorial focus on measuring human functional brain networks. Both were part of the 2019 Neurohackademy, a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
In this presentation by the OHBM OpenScienceSIG, Tom Shaw and Steffen Bollmann cover how containers can be useful for running the same software on different platforms and sharing analysis pipelines with other researchers. They demonstrate how to build Docker containers from scratch using Neurodocker, and cover how to use containers on an HPC with Singularity.
This is the Introductory Module to the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.
This module covers the concepts of gradient descent and the backpropagation algorithm and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.
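For intuition, here is a minimal gradient descent loop on a one-parameter least-squares problem (all data and hyperparameters are illustrative); backpropagation is the chain-rule procedure that computes such gradients automatically through the layers of a deep network:

```python
import numpy as np

# Fit w in y = w * x by stepping against the gradient of the squared loss.
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 3.0 * x + 0.1 * rng.standard_normal(100)  # true weight is 3.0

w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * np.mean((w * x - y) * x)  # dL/dw for L = mean((w*x - y)^2)
    w -= lr * grad                       # gradient descent step

print(w)  # converges near 3.0
```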
This lecture on modules and architectures is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Data Science or a graduate-level machine learning course.
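As a rough sketch of the modules idea (not the course's own code): in a framework such as PyTorch, an architecture is assembled by composing small reusable modules, each bundling parameters with a forward pass:

```python
import torch
from torch import nn

class Block(nn.Module):
    """A small reusable module: a linear map followed by a nonlinearity."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.linear(x))

# An architecture is a composition of modules.
model = nn.Sequential(Block(16), Block(16), nn.Linear(16, 1))
print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 1])
```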
This lecture covers parameter sharing in recurrent and convolutional nets and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.
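A small PyTorch comparison makes the point of parameter sharing concrete: a convolution reuses one kernel at every position, so its parameter count is independent of input length, unlike a fully connected layer of the same size (recurrent nets share weights across time steps in the same spirit):

```python
import torch
from torch import nn

conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, padding=1)
fc = nn.Linear(1000, 1000)

print(sum(p.numel() for p in conv.parameters()))  # 4 (3 weights + 1 bias)
print(sum(p.numel() for p in fc.parameters()))    # 1001000

x = torch.randn(1, 1, 1000)  # (batch, channels, length)
print(conv(x).shape)         # (1, 1, 1000): the same kernel applied at every position
```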
This lecture covers convolutional nets in practice and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning and Introduction to Data Science or a graduate-level machine learning course.
This is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, Parameter sharing, and Introduction to Data Science or a graduate-level machine learning course.
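For a toy illustration of the latent-variable idea (a hand-made scalar energy, not the course's formulation): inference in an LV-EBM means minimizing the energy over the latent variable, which yields the free energy of an observation:

```python
import numpy as np

def energy(y, z, w=2.0):
    """Toy energy E(y, z): reconstruction error plus a cost on the latent."""
    return (y - w * z) ** 2 + 0.1 * z ** 2

y_obs = 3.0
zs = np.linspace(-5, 5, 1001)     # brute-force search over the latent
energies = energy(y_obs, zs)
z_star = zs[np.argmin(energies)]  # inferred latent for y_obs
print(z_star, energies.min())     # F(y_obs) = min_z E(y_obs, z)
```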
Blake Richards gives an introduction to deep learning, framed through the lens of inductive biases, with an emphasis on matching deep learning methods to the right research questions.
The lesson was presented in the context of the BrainHack School 2020.
Serving as a good refresher, Shawn Grooms explains the maths and logic concepts that are important for programmers to understand, including sets, propositional logic, conditional statements, and more.
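The set operations and the truth-functional conditional covered in the lesson map directly onto Python, for example:

```python
A = {1, 2, 3}
B = {3, 4}

print(A | B)         # union: {1, 2, 3, 4}
print(A & B)         # intersection: {3}
print(A - B)         # difference: {1, 2}
print(A <= (A | B))  # subset test: True

# The conditional p -> q is logically equivalent to (not p) or q.
def implies(p, q):
    return (not p) or q

print(implies(True, False))  # False: the one case where an implication fails
print(implies(False, True))  # True: vacuously true
```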
This compilation is courtesy of freeCodeCamp.
Linear algebra is the branch of mathematics concerning linear equations, linear functions, and their representations through matrices and vector spaces. As such, it underlies a huge variety of analyses in the neurosciences. This lesson provides a useful refresher which will facilitate the use of MATLAB, Octave, and various matrix-manipulation and machine-learning software.
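As a quick worked example of the kind of manipulation this software performs, the NumPy snippet below writes a pair of linear equations as a matrix equation A x = b and solves it:

```python
import numpy as np

# The system 2x + y = 5, x + 3y = 10 in matrix form.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)
print(x)      # [1. 3.]
print(A @ x)  # the matrix-vector product recovers b
```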
This lesson was created by RootMath.
This lesson breaks down the principles of Bayesian inference and how it relates to cognitive processes and functions like learning and perception. It is then explained how cognitive models can be built using Bayesian statistics in order to investigate how our brains interface with their environment.
This lesson corresponds to slides 1-64 in the PDF below.
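A minimal grid-based sketch of the kind of Bayesian updating the lesson describes (a flat prior over a hidden rate, updated by sequential binary observations; all numbers are illustrative):

```python
import numpy as np

theta = np.linspace(0, 1, 101)            # candidate values of the hidden rate
prior = np.ones_like(theta) / len(theta)  # flat prior

for outcome in [1, 1, 0, 1]:              # observed successes/failures
    likelihood = theta if outcome else (1 - theta)
    posterior = likelihood * prior
    posterior /= posterior.sum()          # normalize (Bayes' rule)
    prior = posterior                     # today's posterior is tomorrow's prior

print(theta[np.argmax(posterior)])        # posterior mode: 0.75
```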
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 of the PDF below.
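As a heavily simplified flavor of the update rule at the heart of such filters (a single level with fixed observation precision, not the full hierarchical model), beliefs move by precision-weighted prediction errors:

```python
mu, pi = 0.5, 1.0  # belief mean and its precision (inverse variance)
pi_obs = 10.0      # assumed fixed precision of the observations

for u in [1, 1, 0, 1, 1]:               # sequence of binary inputs
    pi = pi + pi_obs                    # precision accumulates with evidence
    mu = mu + (pi_obs / pi) * (u - mu)  # precision-weighted prediction error

print(mu)  # belief about the tendency of u, pulled toward ~0.8
```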
This lecture surveys post-war developments in the science of the mind, focusing first on the cognitive revolution and concluding with living machines.
This talk gives an overview of the Human Brain Project, a 10-year endeavour putting in place a cutting-edge research infrastructure that will allow scientific and industrial researchers to advance our knowledge in the fields of neuroscience, computing, and brain-related medicine.
This lecture gives an introduction to the European Academy of Neurology, its recent achievements and ambitions.
This lesson contains both a lecture and a tutorial component. The lecture (0:00-20:03 of the YouTube video) discusses both the need for intersectional approaches in healthcare and the impact of neglecting intersectionality in patient populations. The lecture is followed by a practical tutorial in both Python and R on how to assess intersectional bias in datasets. Links to relevant code and data are found below.
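A minimal Python sketch of the tutorial's core idea (the column names here are hypothetical, not those of the tutorial's dataset): outcome rates should be compared across intersections of attributes, since rates that look moderate at the margin can hide large subgroup differences:

```python
import pandas as pd

df = pd.DataFrame({
    "gender":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "ethnicity": ["A", "B", "A", "B", "A", "A", "B", "B"],
    "diagnosed": [1,   0,   1,   1,   1,   0,   0,   1],
})

print(df.groupby("gender")["diagnosed"].mean())                 # marginal rates
print(df.groupby(["gender", "ethnicity"])["diagnosed"].mean())  # intersectional rates
print(df.groupby(["gender", "ethnicity"]).size())               # subgroup sample sizes
```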