In this tutorial on simulating whole-brain activity using Python, participants can follow along with the accompanying code repositories, learning the basics of neural oscillatory dynamics, evoked responses, and EEG signals, and ultimately designing a network model built on whole-brain anatomical connectivity.
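As a flavor of the kind of model this tutorial builds toward, here is a minimal, self-contained sketch of Kuramoto-style phase oscillators coupled through a connectivity matrix. The random weights and parameter values below are stand-ins for the empirical anatomical connectivity and models used in the tutorial's own repositories.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # number of brain regions (toy size)
W = rng.random((n, n))                   # stand-in for anatomical connectivity
np.fill_diagonal(W, 0.0)
omega = 2 * np.pi * rng.normal(10.0, 1.0, n)   # natural frequencies near 10 Hz
K, dt, steps = 0.5, 1e-3, 5000           # coupling strength, step (s), step count

theta = rng.uniform(0, 2 * np.pi, n)     # initial phases
for _ in range(steps):                   # forward-Euler integration
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)

# Kuramoto order parameter: 0 = incoherent, 1 = fully synchronized
print("synchrony:", abs(np.exp(1j * theta).mean()))
```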
This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics in order to investigate how our brains interface with their environment.
This lesson corresponds to slides 1-64 in the PDF below.
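For readers new to the topic, the core computation is just Bayes' rule. The toy example below (illustrative numbers only, not taken from the slides) infers whether a stimulus is present from a single noisy sensory measurement.

```python
from scipy.stats import norm

prior_present = 0.2                       # prior belief a stimulus is present
x = 1.3                                   # one noisy sensory measurement

like_present = norm.pdf(x, loc=2.0, scale=1.0)   # p(x | stimulus present)
like_absent = norm.pdf(x, loc=0.0, scale=1.0)    # p(x | stimulus absent)

# Bayes' rule: posterior is proportional to likelihood times prior
post = like_present * prior_present / (
    like_present * prior_present + like_absent * (1 - prior_present)
)
print(f"P(present | x) = {post:.3f}")
```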
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with an emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 of the PDF below.
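As a rough intuition for the HGF's belief updates, the sketch below implements the simpler Kalman-filter-style update that the HGF generalizes: the belief moves toward each observation by a precision-weighted prediction error. The full HGF (e.g., as implemented in the TAPAS toolbox) adds higher levels that track environmental volatility; all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_state = np.concatenate([np.zeros(50), np.ones(50)])  # hidden state jumps midway
obs = true_state + rng.normal(0.0, 0.5, 100)              # noisy observations

mu, sigma2 = 0.0, 1.0    # belief mean and variance
obs_noise = 0.25         # assumed observation noise variance
drift = 0.05             # assumed per-trial growth in state uncertainty

for y in obs:
    sigma2 += drift                       # predict: uncertainty grows
    gain = sigma2 / (sigma2 + obs_noise)  # precision-weighted learning rate
    mu += gain * (y - mu)                 # update by weighted prediction error
    sigma2 *= 1.0 - gain                  # uncertainty shrinks after the update

print(f"final belief: {mu:.3f} (true final state: 1.0)")
```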
Similarity Network Fusion (SNF) is a computational method for integrating data across different kinds of measurements, designed to exploit both the shared and the complementary information in the different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.
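The workshop itself uses R, but the cross-diffusion idea behind SNF can be sketched in a few lines of Python: build one subject-similarity matrix per modality, then let each network repeatedly diffuse through the other. This is a toy illustration only; the published algorithm uses scaled exponential kernels and fixed kNN-sparsified local matrices rather than the simple row-normalized updates below, and the data here are random placeholders.

```python
import numpy as np

def affinity(X):
    """Gaussian similarity between rows (subjects), row-normalized."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / d2.mean())
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
eeg = rng.normal(size=(20, 50))     # 20 subjects x 50 EEG features (placeholder)
genes = rng.normal(size=(20, 100))  # 20 subjects x 100 genomic features (placeholder)

P1, P2 = affinity(eeg), affinity(genes)
for _ in range(20):                 # cross-diffusion: each network updated via the other
    P1, P2 = P1 @ P2 @ P1.T, P2 @ P1 @ P2.T
    P1 /= P1.sum(axis=1, keepdims=True)
    P2 /= P2.sum(axis=1, keepdims=True)

fused = (P1 + P2) / 2               # fused subject-similarity network
print(fused.shape)
```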
This lesson provides an overview of the current state of the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology.
This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into a description of ontologies and knowledge graphs.
This lesson briefly outlines the Neuroscience for Machine Learners (Neuro4ML) course.
This lesson contains practical exercises which accompany the first few lessons of the Neuroscience for Machine Learners (Neuro4ML) course.
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes properties of those units when part of larger networks.
This lesson goes over examples of how machine learners and computational neuroscientists design and build neural network models inspired by biological brain systems.
This lesson characterizes different types of learning in a neuroscientific and cellular context, along with the various models researchers employ to investigate the mechanisms involved.
In this lesson, you will learn about different approaches to modeling learning in neural networks, focusing in particular on how system parameters such as firing rates and synaptic weights impact network behavior.
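As one concrete illustration of how firing rates and synaptic weights interact during learning, here is a minimal Hebbian-style sketch in a tiny rate network, where weights grow with correlated pre- and postsynaptic activity. The network size, learning rate, and normalization scheme are illustrative choices, not taken from the lesson.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.random((4, 4))   # synaptic weights of a tiny 4-unit rate network
eta = 0.01                     # learning rate

for _ in range(500):
    x = rng.random(4)          # presynaptic firing rates (random drive)
    y = np.tanh(W @ x)         # postsynaptic rates through a saturating nonlinearity
    W += eta * np.outer(y, x)  # Hebbian update: change ~ post x pre activity
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalize rows to keep weights bounded

print(W.round(2))
```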
This video briefly goes over the exercises accompanying Week 6 of the Neuroscience for Machine Learners (Neuro4ML) course, Understanding Neural Networks.
This lesson provides an introduction to modeling single neurons, as well as stability analysis of neural models.
This lesson continues a thorough description of the concepts, theories, and methods involved in the modeling of single neurons.
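A natural starting point for this material is the leaky integrate-and-fire neuron. The minimal sketch below (generic parameter values, not necessarily those used in the lessons) integrates a constant input current and resets on each threshold crossing.

```python
dt, T = 1e-4, 0.5                  # time step and duration (s)
tau = 0.02                         # membrane time constant (s)
v_rest, v_th, v_reset = -65.0, -50.0, -70.0   # potentials (mV)
R, I = 10.0, 2.0                   # membrane resistance (MOhm) and input current (nA)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    v += dt / tau * (-(v - v_rest) + R * I)   # leaky integration toward v_rest + R*I
    if v >= v_th:                             # threshold crossing: spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {T} s")
```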
In this lesson you will learn about fundamental neural phenomena such as oscillations and bursting, and the effects these have on cortical networks.
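For a concrete example of bursting, the sketch below integrates the Hindmarsh-Rose model, a standard three-variable neuron model whose slow adaptation variable groups spikes into bursts. The parameters are textbook values and may differ from those used in the lesson.

```python
import numpy as np

# Textbook Hindmarsh-Rose parameters in a bursting regime
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_r, I = 0.006, 4.0, -1.6, 2.0
dt, steps = 0.01, 100_000

x, y, z = -1.6, -10.0, 2.0
trace = np.empty(steps)
for i in range(steps):
    dx = y - a * x**3 + b * x**2 - z + I   # fast membrane variable
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - x_r) - z)           # slow adaptation driving the bursts
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    trace[i] = x

# Upward crossings of x = 1 count the spikes inside the bursts
print("spikes:", int(((trace[:-1] < 1.0) & (trace[1:] >= 1.0)).sum()))
```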
This lesson continues discussing properties of neural oscillations and networks.
In this lecture, you will learn about the rules governing coupled oscillators, neural synchrony in networks, and the theoretical assumptions underlying our current understanding.
This lesson provides a continued discussion and characterization of coupled oscillators.
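A minimal worked example of the locking behavior discussed in these lessons: for two coupled phase oscillators, the phase difference obeys an Adler-type equation and locks only when the coupling exceeds half the frequency mismatch. The values below are illustrative.

```python
import numpy as np

def final_phase_diff(K, dw=1.0, dt=1e-3, steps=50_000):
    """Integrate the Adler equation for the phase difference of two oscillators."""
    phi = 0.5
    for _ in range(steps):
        phi += dt * (dw - 2.0 * K * np.sin(phi))
    return phi

# Locking requires K >= dw / 2 = 0.5: below it the phases drift apart,
# above it the difference settles to a constant.
for K in (0.3, 0.7):
    print(f"K={K}: phase difference after 50 s = {final_phase_diff(K):.2f} rad")
```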
This lesson gives an overview of modeling neurons based on their firing rates.
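As a small taste of this approach, the sketch below simulates a Wilson-Cowan-style excitatory/inhibitory pair, one of the classic firing-rate models; the weights, gains, and inputs are generic illustrative values, not taken from the lesson.

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))   # sigmoidal rate (f-I) function

w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling weights
I_e, I_i = 2.0, 0.0                               # external inputs
tau, dt = 0.01, 1e-4                              # time constant and step (s)

r_e, r_i = 0.1, 0.1
for _ in range(20_000):               # 2 s of forward-Euler integration
    dr_e = (-r_e + f(w_ee * r_e - w_ei * r_i + I_e)) / tau
    dr_i = (-r_i + f(w_ie * r_e - w_ii * r_i + I_i)) / tau
    r_e, r_i = r_e + dt * dr_e, r_i + dt * dr_i

print(f"final rates: E={r_e:.3f}, I={r_i:.3f}")
```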