This video gives a brief introduction to Neuro4ML's lessons on neuromorphic computing: the use of specialized hardware that either directly mimics brain function or is inspired by some aspect of the way the brain computes.
In this lesson, you will learn in more detail about neuromorphic computing, that is, non-standard computational architectures that mimic some aspect of the way the brain works.
This video provides a very quick introduction to some neuromorphic sensing devices and how they enable unique, low-power applications.
This lecture covers modeling the neuron in silicon, modeling vision and audition, and sensory fusion using a deep network.
This lesson presents simulation software for spatial model neurons and their networks, designed primarily for GPUs.
This lesson gives an overview of past and present neurocomputing approaches and hybrid analog/digital circuits that directly emulate the properties of neurons and synapses.
This lesson presents the Brian neural simulator, in which models are defined directly by their mathematical equations and code is automatically generated for each specific target.
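As a quick illustration of the equation-oriented style described above, here is a minimal sketch using Brian 2 (illustrative only, not code taken from the lesson): a leaky integrate-and-fire population is defined by a differential equation written as a plain string, and Brian generates and runs the simulation code for the chosen target.

from brian2 import NeuronGroup, SpikeMonitor, run, ms, mV

# Membrane equation as a string: decay towards a -70 mV resting potential
eqs = 'dv/dt = (-70*mV - v) / (10*ms) : volt'

group = NeuronGroup(10, eqs,
                    threshold='v > -50*mV',   # spike condition
                    reset='v = -70*mV',       # post-spike reset
                    method='exact')
group.v = '-70*mV + 25*mV*rand()'             # random initial voltages
spikes = SpikeMonitor(group)

run(100*ms)                                   # Brian generates and runs the target code here
print(spikes.count)                           # spike counts per neuron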
The lecture covers a brief introduction to neuromorphic engineering, some of the neuromorphic networks that the speaker has developed, and their potential applications, particularly in machine learning.
This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions such as learning and perception. It then explains how cognitive models can be built using Bayesian statistics in order to investigate how our brains interface with the environment.
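As a toy illustration of the Bayesian updating the lesson refers to (the numbers below are made up purely for illustration), an observer revises the probability of a hypothesis H after observing evidence E using Bayes' rule:

prior = 0.5                  # P(H): belief before seeing the evidence
p_e_given_h = 0.8            # P(E | H): probability of the evidence if H is true
p_e_given_not_h = 0.3        # P(E | not H)
evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)   # P(E)
posterior = p_e_given_h * prior / evidence                        # Bayes' rule: P(H | E)
print(round(posterior, 3))   # 0.727: the evidence strengthens belief in H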
This lesson corresponds to slides 1-64 in the PDF below.
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 of the PDF below.
This tutorial walks participants through the application of dynamic causal modelling (DCM) to fMRI data using MATLAB. Participants are also shown various forms of DCM, how to generate and specify different models, and how to fit them to simulated neural and BOLD data.
This lesson corresponds to slides 158-187 of the PDF below.
This lesson provides an overview of how to construct computational pipelines for neurophysiological data using DataJoint.
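To give a flavour of what such a pipeline looks like in DataJoint's Python API, here is a minimal sketch (the schema and table names are hypothetical, not taken from the lesson): a manually entered Session table feeds a Computed table whose make() method DataJoint calls automatically when the pipeline is populated.

import datajoint as dj

schema = dj.schema('tutorial_pipeline')   # hypothetical schema name

@schema
class Session(dj.Manual):
    definition = """
    session_id   : int     # unique session number
    ---
    session_date : date    # recording date
    """

@schema
class MeanRate(dj.Computed):
    definition = """
    -> Session
    ---
    mean_rate : float      # mean firing rate (Hz); placeholder value below
    """

    def make(self, key):
        # A real pipeline would load and analyse the session's data here.
        self.insert1(dict(key, mean_rate=0.0))

# MeanRate.populate() would then compute one entry per Session row.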
This lesson delves into the structure of one of the brain's most elemental computational units, the neuron, and how that structure influences computational neural network models.
Following the previous lesson on neuronal structure, this lesson discusses neuronal function, particularly focusing on spike triggering and propagation.
This lesson goes over the basic mechanisms of neural synapses, the junctions between neurons across which signals are transmitted.
While the previous lesson in the Neuro4ML course dealt with the mechanisms involved in individual synapses, this lesson discusses how synapses and their neurons' firing patterns may change over time.
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes properties of those units when part of larger networks.
This lesson covers the ionic basis of the action potential, including the Hodgkin-Huxley model.
This lesson provides an introduction to the myriad cellular mechanisms which underpin healthy brain function and communication.
In this lesson you will learn about the ionic basis of the action potential, including the Hodgkin-Huxley model.
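For reference, the Hodgkin-Huxley model mentioned in these lessons is, in its standard textbook form (not reproduced from the lesson material), the membrane current-balance equation

\[
C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar{g}_L\, (V - E_L),
\]

with each gating variable obeying

\[
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}.
\]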