In this lesson, you will learn in more detail about neuromorphic computing, that is, non-standard computational architectures that mimic some aspect of the way the brain works.
This video provides a very quick introduction to some neuromorphic sensing devices and how they enable unique, low-power applications.
This lesson is a general overview of overarching concepts in neuroinformatics research, with a particular focus on clinical approaches to defining, measuring, studying, diagnosing, and treating various brain disorders. Also described are the complex, multi-level nature of brain disorders and the data associated with them, from genes and individual cells up to cortical microcircuits and whole-brain network dynamics. Given the heterogeneity of brain disorders and their underlying mechanisms, this lesson lays out a case for multiscale neuroscience data integration.
In this tutorial on simulating whole-brain activity using Python, participants can follow along using the corresponding code and repositories, learning the basics of neural oscillatory dynamics, evoked responses, and EEG signals, ultimately leading to the design of a network model of whole-brain anatomical connectivity.
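To give a flavor of what such a model looks like, here is a minimal sketch of a rate-based network in which each node is a brain region and activity is routed through an anatomical connectivity matrix. The 4-region matrix and all parameter values below are illustrative assumptions, not taken from the tutorial's repositories.

```python
import numpy as np

# Hypothetical 4-region anatomical connectivity matrix (symmetric, illustrative).
W = np.array([
    [0.0, 0.5, 0.0, 0.2],
    [0.5, 0.0, 0.3, 0.0],
    [0.0, 0.3, 0.0, 0.4],
    [0.2, 0.0, 0.4, 0.0],
])

def simulate(W, steps=2000, dt=0.01, drive=0.5):
    """Euler-integrate the rate model dx/dt = -x + tanh(W @ x + drive)."""
    x = np.zeros(W.shape[0])
    trace = np.empty((steps, W.shape[0]))
    for t in range(steps):
        x = x + dt * (-x + np.tanh(W @ x + drive))
        trace[t] = x
    return trace

trace = simulate(W)
print(trace[-1])  # steady-state regional activity
```

Each region relaxes toward a saturating function of its weighted input; richer models (e.g., with delays or noise) build on this same structure.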
This lesson breaks down the principles of Bayesian inference and how it relates to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.
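As a concrete illustration of the inference step, here is a minimal Bayes'-rule update over two perceptual hypotheses. The hypothesis names and probability values are hypothetical, chosen only to show the computation.

```python
# Prior belief over two hypotheses about a sensory event (illustrative values).
prior = {"signal": 0.5, "noise": 0.5}
# Likelihood of the observation under each hypothesis (illustrative values).
likelihood = {"signal": 0.8, "noise": 0.2}

# Bayes' rule: posterior(h) = prior(h) * likelihood(h) / evidence
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # {'signal': 0.8, 'noise': 0.2}
```

The same update, applied repeatedly as observations arrive, is the core of Bayesian models of learning and perception.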
This lesson corresponds to slides 1-64 in the PDF below.
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes properties of those units when part of larger networks.
This lesson goes over some examples of how machine learners and computational neuroscientists go about designing and building neural network models inspired by biological brain systems.
This lecture and tutorial focus on measuring human functional brain networks, as well as how to account for inherent variability within those networks.
This lesson contains practical exercises which accompany the first few lessons of the Neuroscience for Machine Learners (Neuro4ML) course.
This video briefly goes over the exercises accompanying Week 6 of the Neuroscience for Machine Learners (Neuro4ML) course, Understanding Neural Networks.
This lesson provides an introduction to modeling single neurons, as well as stability analysis of neural models.
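A standard starting point for single-neuron modeling is the leaky integrate-and-fire neuron, sketched below. All parameter values are illustrative assumptions, not the lesson's own.

```python
def lif(I=1.5, steps=1000, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau * dV/dt = -V + I.

    The membrane potential V leaks toward the input current I; when it
    crosses the threshold v_th, a spike time is recorded and V is reset.
    """
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt / tau * (-v + I)  # forward-Euler membrane update
        if v >= v_th:
            spikes.append(t * dt)
            v = v_reset
    return spikes

spikes = lif()
print(spikes[:3])  # first few spike times
```

Because the input here is constant, the neuron fires with a fixed interspike interval; stability analysis asks how such behavior changes as parameters like I and tau vary.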
This lesson continues a thorough description of the concepts, theories, and methods involved in the modeling of single neurons.
In this lesson you will learn about fundamental neural phenomena such as oscillations and bursting, and the effects these have on cortical networks.
This lesson continues discussing properties of neural oscillations and networks.
In this lecture, you will learn about rules governing coupled oscillators, neural synchrony in networks, and theoretical assumptions underlying current understanding.
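The Kuramoto model is the classic setting for studying coupled oscillators and synchrony; the sketch below shows phases locking under strong coupling. Network size, frequencies, and coupling strength are illustrative assumptions.

```python
import numpy as np

def kuramoto(K, omega, theta0, steps=5000, dt=0.01):
    """Euler-integrate d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    theta = theta0.copy()
    n = len(theta)
    for _ in range(steps):
        # Pairwise phase differences drive each oscillator toward its neighbors.
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + (K / n) * coupling)
    return theta

rng = np.random.default_rng(0)
n = 10
omega = rng.normal(0.0, 0.1, n)          # similar natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, n)    # random initial phases
theta = kuramoto(K=2.0, omega=omega, theta0=theta0)

# Kuramoto order parameter: r near 1 indicates synchrony.
r = abs(np.exp(1j * theta).mean())
print(r)
```

With coupling well above the critical value for this frequency spread, the order parameter approaches 1; with K=0 the phases drift independently.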
This lesson provides a continued discussion and characterization of coupled oscillators.
This lesson gives an overview of modeling neurons based on firing rate.
This lesson characterizes the pattern generation observed in visual system hallucinations.
This lesson gives an introduction to stability analysis of neural models.
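The basic recipe in linear stability analysis is: find a fixed point, linearize, and check the eigenvalues of the Jacobian. The sketch below applies this to a hypothetical two-population excitatory-inhibitory rate model; the weights are illustrative assumptions.

```python
import numpy as np

def jacobian(w_ee=0.8, w_ei=1.2, w_ie=1.0, w_ii=0.5):
    """Jacobian of the linear(ized) E-I rate model at its fixed point:

        dE/dt = -E + w_ee*E - w_ei*I
        dI/dt = -I + w_ie*E - w_ii*I
    """
    return np.array([[-1 + w_ee, -w_ei],
                     [w_ie, -1 - w_ii]])

# A fixed point is stable when every eigenvalue has negative real part.
eigvals = np.linalg.eigvals(jacobian())
stable = bool(np.all(eigvals.real < 0))
print(eigvals, stable)
```

For a 2x2 system this reduces to checking that the trace is negative and the determinant positive, which is a quick by-hand test before computing eigenvalues numerically.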
This lesson continues from the previous lectures, providing a further introduction to stability analysis of neural models.