This lecture provides an introduction to the course "Cognitive Science & Psychology: Mind, Brain, and Behavior".
This lesson covers the history of neuroscience and machine learning, and the story of how these two seemingly disparate fields are increasingly merging.
In this lesson you will learn how machine learners and neuroscientists construct abstract computational models based on various neurophysiological signalling properties.
In this lesson, you will learn about some typical neuronal models employed by machine learners and computational neuroscientists, meant to imitate the biophysical properties of real neurons.
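As a concrete starting point, below is a minimal sketch of one such model, a leaky integrate-and-fire (LIF) neuron. The parameter values (membrane time constant, threshold, reset) and the Euler integration scheme are illustrative assumptions, not values taken from the lesson.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential v
# decays toward rest with time constant tau and is driven by an input current.
# When v crosses the threshold, a spike is recorded and v is reset.
# All parameter values below are illustrative, not taken from the lesson.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    v = v_rest
    voltages, spikes = [], []
    for i_t in input_current:
        # Euler update of dv/dt = (-(v - v_rest) + i_t) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        voltages.append(v)
    return np.array(voltages), np.array(spikes)

# A constant suprathreshold input produces regular spiking.
voltages, spikes = simulate_lif(np.full(1000, 1.5))
print("number of spikes:", spikes.sum())
```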
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes properties of those units when part of larger networks.
This lesson presents examples of how machine learners and computational neuroscientists design and build neural network models inspired by biological brain systems.
In this lesson, you will learn about different approaches to modeling learning in neural networks, particularly focusing on how system parameters such as firing rates and synaptic weights impact a network's behavior.
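To make the interaction between firing rates and synaptic weights concrete, here is a minimal rate-based Hebbian learning sketch. The network size, learning rate, and weight normalization step are illustrative assumptions rather than specifics from the lesson.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 10, 3
weights = rng.normal(scale=0.1, size=(n_outputs, n_inputs))  # synaptic weights
learning_rate = 0.01  # illustrative value

def step(input_rates, weights):
    # Output firing rates as a simple rectified linear readout of the inputs.
    output_rates = np.maximum(weights @ input_rates, 0.0)
    # Hebbian update: weights grow when pre- and postsynaptic rates are
    # both high ("cells that fire together wire together").
    weights = weights + learning_rate * np.outer(output_rates, input_rates)
    # Normalize each row so the weights don't grow without bound.
    weights /= np.linalg.norm(weights, axis=1, keepdims=True)
    return output_rates, weights

for _ in range(100):
    input_rates = rng.poisson(lam=5.0, size=n_inputs)
    _, weights = step(input_rates, weights)

print("learned weights:\n", np.round(weights, 2))
```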
In this lesson, you will learn more about some of the issues inherent in modeling neural spikes, approaches to ameliorate these problems, and the pros and cons of these approaches.
In this lesson, you will learn about some of the many methods for training spiking neural networks (SNNs) that either avoid gradients entirely or use them only in a limited or constrained way.
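As one possible illustration of gradient-free training, here is a sketch of simple random-perturbation hill climbing on the weights of a tiny spiking network. The toy objective (matching a target spike count), the network size, and the noise scale are all illustrative assumptions, not methods prescribed by the lesson.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_snn(weights, inputs, dt=1e-3, tau=20e-3, threshold=1.0):
    """Simulate one layer of LIF neurons and return the total spike count."""
    v = np.zeros(weights.shape[0])
    spike_count = 0
    for x_t in inputs:
        drive = weights @ x_t
        v += dt * (-v + drive) / tau
        spiked = v >= threshold
        spike_count += spiked.sum()
        v[spiked] = 0.0  # reset spiking neurons
    return spike_count

# Toy objective: make the network emit a target number of spikes.
inputs = rng.random((200, 5))          # 200 time steps, 5 input channels
target_spikes = 50
weights = rng.normal(scale=0.5, size=(3, 5))
best_loss = abs(run_snn(weights, inputs) - target_spikes)

# Gradient-free hill climbing: perturb the weights, keep the change if it helps.
for _ in range(200):
    candidate = weights + rng.normal(scale=0.1, size=weights.shape)
    loss = abs(run_snn(candidate, inputs) - target_spikes)
    if loss < best_loss:
        weights, best_loss = candidate, loss

print("final loss (|spike count - target|):", best_loss)
```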
In this lesson, you will learn how to train spiking neural networks (SNNs) with a surrogate gradient method.
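Below is a minimal sketch of the core trick behind surrogate gradient training, written in PyTorch: the forward pass applies a hard spike threshold, while the backward pass substitutes a smooth surrogate derivative. The particular surrogate (a fast-sigmoid-style derivative) and its slope parameter are illustrative assumptions; the lesson may use a different surrogate or framework.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient backward."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Non-differentiable step: spike if the membrane potential is above 0.
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Surrogate derivative 1 / (1 + beta*|u|)^2 replaces the true
        # (zero-almost-everywhere) derivative of the step function.
        beta = 10.0  # illustrative slope parameter
        surrogate = 1.0 / (1.0 + beta * membrane_potential.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

# Tiny usage example: gradients now flow through the spiking nonlinearity.
u = torch.randn(4, requires_grad=True)   # stand-in for membrane potentials
spikes = spike_fn(u)
spikes.sum().backward()
print("spikes:", spikes)
print("surrogate gradient wrt u:", u.grad)
```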
This lesson explores how researchers try to understand neural networks, particularly by observing neural activity.
In this lesson, you will learn about the motivation behind manipulating neural activity and the forms such manipulations may take in various experimental designs.
This video briefly goes over the exercises accompanying Week 6 of the Neuroscience for Machine Learners (Neuro4ML) course, Understanding Neural Networks.
This lecture focuses on the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.
This module explains how neurons come together to create the networks that give rise to our thoughts. The totality of our neurons and their connections is called our connectome. Learn how this connectome changes as we learn and how it computes information.
This session includes presentations of infrastructure developed by members of the INCF community that embraces the FAIR principles. This lecture provides an overview and demo of the Canadian Open Neuroscience Platform (CONP).
This lesson describes the BrainHealth Databank, a repository of many types of health-related data whose aim is to accelerate research, improve care, help better understand and diagnose mental illness, and develop new treatments and prevention strategies.
This lesson corresponds to slides 46-78 of the PDF below.
This lesson describes not only the need for precision medicine, but also the current state of its methods, pharmacogenetic approaches, and the utility and implementation of such care today.
This lesson corresponds to slides 1-50 of the PowerPoint below.
This lecture covers the needs and challenges involved in creating a FAIR ecosystem for neuroimaging research.
This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.