This lecture covers modeling the neuron in silicon, modeling vision and audition, and sensory fusion using a deep network.
This lesson gives an overview of past and present neurocomputing approaches and hybrid analog/digital circuits that directly emulate the properties of neurons and synapses.
This lesson presents the Brian neural simulator, in which models are defined directly by their mathematical equations and simulation code is automatically generated for each specific target.
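For orientation, here is a minimal sketch (not taken from the lesson) of how a leaky integrate-and-fire model can be written as plain equation strings in Brian 2; the parameter values and neuron count are illustrative assumptions:

    # Minimal Brian 2 sketch: the model is specified as equation strings,
    # and Brian generates the simulation code for the active target.
    from brian2 import NeuronGroup, run, ms, mV

    eqs = '''
    dv/dt = (v_rest - v) / tau : volt
    v_rest : volt
    tau : second
    '''

    group = NeuronGroup(10, eqs, threshold='v > -50*mV',
                        reset='v = -65*mV', method='exact')
    group.v = -65*mV
    group.v_rest = -45*mV
    group.tau = 10*ms

    run(100*ms)  # simulation code is generated and executed here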
This lecture covers advanced concepts of energy-based models. The lecture is part of the Advanced Energy-Based Models modules of the Deep Learning Course at NYU's Center for Data Science. Prerequisites for this course include: Energy-Based Models I, Energy-Based Models II, Energy-Based Models III, and an Introduction to Data Science or a graduate-level machine learning course.
This lesson gives an introduction to deep learning, viewed through the lens of inductive biases, with emphasis on matching deep learning to the right research questions.
As a part of NeuroHackademy 2021, Noah Benson gives an introduction to PyTorch, one of the two most common software packages for deep learning applications to the neurosciences.
Learn how to use TensorFlow 2.0 in this full tutorial for beginners. This course is designed for Python programmers looking to enhance their knowledge and skills in machine learning and artificial intelligence.
Throughout the 8 modules in this course, you will learn about fundamental concepts and methods in ML & AI, such as core learning algorithms, deep learning with neural networks, computer vision with convolutional neural networks, natural language processing with recurrent neural networks, and reinforcement learning.
In this hands-on tutorial, Dr. Robert Guangyu Yang works through a number of coding exercises showing how RNNs can be easily used to study cognitive neuroscience questions, with a quick demonstration of how to train and analyze RNNs on various cognitive tasks. Familiarity with Python and basic knowledge of PyTorch are assumed.
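As a rough illustration of the kind of exercise involved (this is not the tutorial's own code), the following PyTorch sketch trains a small RNN on a toy working-memory task: hold a scalar cue across a delay and report it at the end of the trial. The architecture, task length, and hyperparameters are assumptions for the example:

    import torch
    import torch.nn as nn

    class SimpleRNN(nn.Module):
        def __init__(self, input_size=1, hidden_size=64, output_size=1):
            super().__init__()
            self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
            self.readout = nn.Linear(hidden_size, output_size)

        def forward(self, x):
            h, _ = self.rnn(x)              # h: (batch, time, hidden)
            return self.readout(h[:, -1])   # decode from the last time step

    model = SimpleRNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(200):
        cue = torch.rand(32, 1)             # one scalar cue per trial
        inputs = torch.zeros(32, 20, 1)     # 20 time steps, cue shown only at t=0
        inputs[:, 0, 0] = cue.squeeze()

        optimizer.zero_grad()
        loss = loss_fn(model(inputs), cue)  # report the cue after the delay
        loss.backward()
        optimizer.step()

After training, the hidden states h can be inspected across the delay period to ask how the network maintains the cue, which is the style of analysis the tutorial demonstrates.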
This lecture surveys post-war developments in the science of the mind, focusing first on the cognitive revolution and concluding with living machines.
This lecture provides an overview of depression (epidemiology and course of the disorder), clinical presentation, somatic co-morbidity, and treatment options.
This lesson is part 1 of 2 of a tutorial on statistical models for neural data.
What is the difference between attention and consciousness? This lecture describes the scientific meaning of consciousness, journeys on the search for neural correlates of visual consciousness, and explores the possibility of consciousness in other beings and even non-biological structures.
The "connectome" is a term, coined in the past decade, that has been used to describe more than one phenomenon in neuroscience. This lecture explains the basics of structural connections at the micro-, meso- and macroscopic scales.
EyeWire is a game to map the brain. Players are challenged to map branches of a neuron from one side of a cube to the other in a 3D puzzle. Players scroll through the cube and reconstruct neurons with the help of an artificial intelligence algorithm developed at the Seung Lab at Princeton University. EyeWire gameplay advances neuroscience by helping researchers discover how neurons connect to process visual information.
This module explains how neurons come together to create the networks that give rise to our thoughts. The totality of our neurons and their connections is called our connectome. Learn how this connectome changes as we learn and how it computes information.
This lesson discusses both state-of-the-art detection and prevention schemes for working with neurodegenerative diseases.