This lecture covers different perspectives on the study of the mind, focusing on the distinction between mind and brain.
This lesson briefly outlines the Neuroscience for Machine Learners course.
This lesson provides a brief overview of the Python programming language, with an emphasis on tools relevant to data scientists.
This tutorial covers the fundamentals of collaborating with Git and GitHub.
This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health.
This tutorial introduces pipelines and methods to compute brain connectomes from fMRI data. With corresponding code and repositories, participants can follow along and learn how to programmatically preprocess, curate, and analyze functional and structural brain data to produce connectivity matrices.
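As a taste of what the tutorial covers, below is a minimal sketch of computing a functional connectome with nilearn; the example dataset, atlas, and parameter choices are illustrative assumptions, not the tutorial's exact pipeline.

```python
# A minimal sketch of a functional connectome with nilearn.
# Dataset, atlas, and parameters are illustrative assumptions.
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure

# Fetch one example subject and a labeled atlas.
data = datasets.fetch_development_fmri(n_subjects=1)
atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")

# Average the fMRI signal within each atlas region.
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize="zscore_sample")
time_series = masker.fit_transform(data.func[0], confounds=data.confounds)

# Correlate region time series to obtain a connectivity matrix.
conn = ConnectivityMeasure(kind="correlation")
matrix = conn.fit_transform([time_series])[0]  # shape: (n_regions, n_regions)
```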
This lesson introduces the practical exercises which accompany the previous lessons on animal and human connectomes in the brain and nervous system.
This lecture and tutorial focus on measuring human functional brain networks, as well as how to account for inherent variability within those networks.
This lecture presents an overview of functional brain parcellations, as well as a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation.
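For a quick, hands-on feel for data-driven parcellation, here is a minimal nilearn sketch. BASC itself is not shown; Ward clustering stands in as a simpler method, and the dataset choice is an illustrative assumption.

```python
# A minimal sketch of data-driven fMRI parcellation with nilearn.
# Ward clustering is a stand-in for BASC; dataset choice is an assumption.
from nilearn import datasets
from nilearn.regions import Parcellations

data = datasets.fetch_development_fmri(n_subjects=1)

# Cluster voxels with similar time courses into 100 parcels.
ward = Parcellations(method="ward", n_parcels=100, standardize=True)
ward.fit(data.func)

labels_img = ward.labels_img_  # 3D image assigning each voxel to a parcel
```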
This lecture covers FAIR atlases: their background and construction, and how new atlases can be created in line with the FAIR principles.
This lesson gives an introductory presentation on how data science can help with scientific reproducibility.
This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.
This is the introductory module of the Deep Learning Course at NYU's Center for Data Science (CDS). The course covers the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
This module covers the concepts of gradient descent and the backpropagation algorithm and is a part of the Deep Learning Course at NYU's Center for Data Science.
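To make the two ideas concrete, here is a minimal PyTorch sketch of training by backpropagation and gradient descent; the toy model and data are illustrative assumptions, not the module's own notebook.

```python
# A minimal sketch of gradient descent with backpropagation in PyTorch.
# The toy regression model and data are illustrative assumptions.
import torch

# Toy regression data: y = 3x + noise.
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 3 * x + 0.1 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()  # clear gradients from the previous step
    loss.backward()        # backpropagation: compute d(loss)/d(parameters)
    optimizer.step()       # gradient descent: move parameters downhill
```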
This lecture covers the concept of parameter sharing in recurrent and convolutional nets and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers convolutional nets in practice and is a part of the Deep Learning Course at NYU's Center for Data Science.
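As an illustration of the kind of model discussed, here is a minimal PyTorch sketch of a small convolutional net; the layer sizes and MNIST-like input shape are illustrative assumptions.

```python
# A minimal sketch of a small convolutional net in PyTorch.
# Layer sizes and the 28x28 single-channel input are assumptions.
import torch
from torch import nn

class SmallConvNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = SmallConvNet()(torch.randn(8, 1, 28, 28))  # batch of 8 images
```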
This lecture discusses the properties of natural signals and how convolutional nets exploit them in practice, and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers recurrent neural networks, both vanilla and gated (LSTM), and is a part of the Deep Learning Course at NYU's Center for Data Science.
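To make the vanilla/gated distinction concrete, here is a minimal PyTorch sketch contrasting the two recurrent layers; the dimensions are illustrative assumptions.

```python
# A minimal sketch contrasting a vanilla RNN with a gated LSTM in PyTorch.
# Sequence, batch, and feature dimensions are illustrative assumptions.
import torch
from torch import nn

seq = torch.randn(5, 8, 32)  # (sequence length, batch, input features)

rnn = nn.RNN(input_size=32, hidden_size=64)    # vanilla recurrent layer
lstm = nn.LSTM(input_size=32, hidden_size=64)  # gated recurrent layer

out_rnn, h_rnn = rnn(seq)                   # h_rnn: final hidden state
out_lstm, (h_lstm, c_lstm) = lstm(seq)      # LSTM also carries a cell state
print(out_rnn.shape, out_lstm.shape)        # both: (5, 8, 64)
```

Note that the LSTM returns an extra cell state, the memory that its gates maintain across time steps.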
This is a foundational lecture on energy-based models, with a particular focus on the joint embedding method and latent-variable energy-based models (LV-EBMs), and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers inference in latent-variable energy-based models (LV-EBMs) and is a part of the Deep Learning Course at NYU's Center for Data Science.
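To illustrate the idea, here is a minimal PyTorch sketch of LV-EBM-style inference, where the latent is found by minimizing the energy with gradient descent; the quadratic toy energy is an illustrative assumption.

```python
# A minimal sketch of inference in a latent-variable EBM: given x and y,
# infer the latent z that minimizes the energy E(x, y, z) by gradient descent.
# The toy quadratic energy below is an illustrative assumption.
import torch

def energy(x, y, z):
    # Toy energy: y should match x shifted by the latent z,
    # with a small quadratic penalty on z.
    return ((y - (x + z)) ** 2).sum() + 0.1 * (z ** 2).sum()

x = torch.randn(4)
y = torch.randn(4)
z = torch.zeros(4, requires_grad=True)

opt = torch.optim.SGD([z], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    E = energy(x, y, z)
    E.backward()  # gradient of the energy with respect to the latent
    opt.step()    # move z toward a minimum of E(x, y, .)

# The minimized value approximates the free energy F(x, y) = min_z E(x, y, z).
print(energy(x, y, z).item())
```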