Continuing along the EEGLAB preprocessing pipeline, this tutorial walks users through how to import data events as well as EEG channel locations.
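For readers who want a concrete starting point, a minimal scripted version of these two import steps in EEGLAB (MATLAB) might look like the sketch below; the file names, event-field order, and electrode-location template are placeholders, and option names can differ across EEGLAB versions.

    % Load an existing dataset (placeholder file names).
    EEG = pop_loadset('filename', 'subject01.set', 'filepath', './data/');

    % Import events from a text file; 'fields' must match the column order of the file.
    EEG = pop_importevent(EEG, 'event', './data/subject01_events.txt', ...
        'fields', {'latency', 'type'}, 'timeunit', 1);

    % Look up standard channel locations by channel label (template path varies by install).
    EEG = pop_chanedit(EEG, 'lookup', 'standard-10-5-cap385.elp');
    EEG = eeg_checkset(EEG);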
This tutorial demonstrates how to re-reference and resample raw data in EEGLAB, why such steps are important or useful in the preprocessing pipeline, and how choices made at this step may affect subsequent analyses.
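As a rough illustration of how re-referencing and resampling are scripted in EEGLAB (the average reference and the 250 Hz target rate are example choices, not recommendations from the tutorial):

    % Re-reference to the average of all channels; pass channel indices
    % instead of [] to use a specific reference such as linked mastoids.
    EEG = pop_reref(EEG, []);

    % Downsample to 250 Hz to shrink the dataset and speed up later steps.
    EEG = pop_resample(EEG, 250);
    EEG = eeg_checkset(EEG);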
In this tutorial, users learn about the various filtering options in EEGLAB, how to inspect channel properties for noisy signals, as well as how to filter out specific components of EEG data (e.g., electrical line noise).
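A minimal scripted counterpart to these filtering steps is sketched below; the cutoff frequencies and the 60 Hz line frequency are illustrative, and pop_cleanline requires the CleanLine plugin.

    % Band-pass filter: 1 Hz high-pass to remove slow drifts, 40 Hz low-pass.
    % (Older EEGLAB versions use positional arguments: pop_eegfiltnew(EEG, 1, 40).)
    EEG = pop_eegfiltnew(EEG, 'locutoff', 1, 'hicutoff', 40);

    % Alternatively, attenuate line noise specifically (60 Hz in North America,
    % 50 Hz in most other regions) using the CleanLine plugin.
    EEG = pop_cleanline(EEG, 'linefreqs', 60, 'chanlist', 1:EEG.nbchan);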
This tutorial instructs users in how to visually inspect partially preprocessed EEG data in EEGLAB, specifically how to use the data browser to examine particular channels, epochs, or events for removable artifacts, whether biological (e.g., eye blinks, muscle movements, heartbeat) or non-biological (e.g., a corrupted channel, line noise).
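The data browser described here can also be opened from the command line rather than the menu; a sketch with an arbitrary window length and vertical scale:

    % Open the scrolling data browser on the channel data
    % (first argument: 1 = channel data, 0 = ICA activations).
    pop_eegplot(EEG, 1, 1, 1);

    % Lower-level call with explicit display settings and event markers overlaid.
    eegplot(EEG.data, 'srate', EEG.srate, 'winlength', 10, 'spacing', 100, ...
        'events', EEG.event, 'title', 'Visual inspection of channel data');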
This tutorial provides instruction on how to use EEGLAB to further preprocess EEG datasets by identifying and discarding bad channels which, if left unaddressed, can corrupt and confound subsequent analysis steps.
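One common way to flag and drop such channels in a script is sketched below; the kurtosis criterion, the z-score threshold of 5, and the channel labels are example values only.

    % Flag channels whose kurtosis deviates by more than 5 z-scores from the others.
    [EEG, badChans] = pop_rejchan(EEG, 'elec', 1:EEG.nbchan, ...
        'threshold', 5, 'norm', 'on', 'measure', 'kurt');

    % Channels identified by eye can also be removed explicitly by label.
    EEG = pop_select(EEG, 'nochannel', {'T7', 'FC5'});   % placeholder labels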
Users following this tutorial will learn how to identify and discard bad EEG data segments using the MATLAB toolbox EEGLAB.
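For continuous data, rejected segments are typically specified as sample ranges; the indices below are placeholders that would normally come from marking segments in the data browser.

    % Reject two stretches of continuous data given as [start end] rows in sample
    % points; EEGLAB inserts boundary events at the cut points.
    badSegments = [1024 2048; 90000 95000];   % placeholder sample indices
    EEG = eeg_eegrej(EEG, badSegments);
    EEG = eeg_checkset(EEG);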
This lecture gives an overview of how to prepare and preprocess neuroimaging (EEG/MEG) data for use in TVB.
This module covers many types of non-invasive neurotechnology and neuroimaging devices, including electroencephalography (EEG), electromyography (EMG), electroneurography (ENG), magnetoencephalography (MEG), and more.
Hierarchical Event Descriptors (HED) fill a major gap in the neuroinformatics standards toolkit, namely the specification of the nature(s) of events and time-limited conditions recorded as having occurred during time series recordings (EEG, MEG, iEEG, fMRI, etc.). Here, the HED Working Group presents an online INCF workshop on the need for, structure of, tools for, and use of HED annotation to prepare neuroimaging time series data for storing, sharing, and advanced analysis.
This lesson is a general overview of overarching concepts in neuroinformatics research, with a particular focus on clinical approaches to defining, measuring, studying, diagnosing, and treating various brain disorders. Also described are the complex, multi-level nature of brain disorders and the data associated with them, from genes and individual cells up to cortical microcircuits and whole-brain network dynamics. Given the heterogeneity of brain disorders and their underlying mechanisms, this lesson lays out a case for multiscale neuroscience data integration.
In this tutorial on simulating whole-brain activity using Python, participants can follow along using corresponding code and repositories, learning the basics of neural oscillatory dynamics, evoked responses and EEG signals, ultimately leading to the design of a network model of whole-brain anatomical connectivity.
This lesson breaks down the principles of Bayesian inference and how it relates to cognitive processes and functions like learning and perception. It is then explained how cognitive models can be built using Bayesian statistics in order to investigate how our brains interface with their environment.
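The core relation such models build on is Bayes' rule; in a perceptual setting, the belief about a world state s after observing sensory data d is

    P(s \mid d) = \frac{P(d \mid s)\, P(s)}{P(d)} \;\propto\; \text{likelihood} \times \text{prior}

The lesson's specific cognitive models are not reproduced here; this is only the general form underlying them.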
This lesson corresponds to slides 1-64 in the PDF below.
This talk gives a brief overview of current efforts to collect and share the Brain Reference Architecture (BRA) data involved in the construction of a whole-brain architecture that assigns functions to major brain organs.
This brief talk discusses the idea that music, as a naturalistic stimulus, offers a window into higher cognition and various levels of neural architecture.
In this short talk you will learn about The Neural System Laboratory, which aims to develop and implement new technologies for analysis of brain architecture, connectivity, and brain-wide gene and molecular level organization.
This lecture and tutorial focuses on measuring human functional brain networks, as well as how to account for inherent variability within those networks.
This lecture presents an overview of functional brain parcellations, as well as a set of tutorials on bootstrap analysis of stable clusters (BASC) for fMRI brain parcellation.
Neuronify is an educational tool meant to build intuition for how neurons and neural networks behave. You can use it to combine neurons with different connections, just like the ones we have in our brains, and explore how changes to single cells lead to changes in the behavior of larger networks. Neuronify is based on an integrate-and-fire model of neurons, one of the simplest neuron models in existence. It focuses on the spike timing of a neuron and ignores the details of the action potential dynamics; the neurons are modeled as simple RC circuits. When the membrane potential rises above a certain threshold, a spike is generated and the voltage is reset to its resting potential. This spike then signals other neurons through its synapses.
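The neuron model just described can be summarized in a few lines; the sketch below is a generic leaky integrate-and-fire (RC-circuit) simulation in MATLAB with illustrative parameter values, not Neuronify's actual source code.

    % Leaky integrate-and-fire neuron as an RC circuit, forward-Euler integration.
    dt = 0.1e-3;              % time step (s)
    T  = 0.5;                 % total simulated time (s)
    t  = 0:dt:T;
    C  = 200e-12;             % membrane capacitance (F)
    R  = 100e6;               % membrane resistance (ohm)
    Vrest   = -70e-3;         % resting potential (V)
    Vthresh = -50e-3;         % spike threshold (V)
    I  = 300e-12;             % constant input current (A)

    V = Vrest * ones(size(t));
    spikes = [];
    for k = 1:numel(t)-1
        % dV/dt = ( -(V - Vrest) + R*I ) / (R*C)
        V(k+1) = V(k) + dt * ( -(V(k) - Vrest) + R*I ) / (R*C);
        if V(k+1) >= Vthresh          % threshold crossing generates a spike ...
            spikes(end+1) = t(k+1);   %#ok<AGROW>
            V(k+1) = Vrest;           % ... and the voltage resets to rest
        end
    end
    plot(t, V);  xlabel('time (s)');  ylabel('membrane potential (V)');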
Neuronify aims to provide a low entry point to simulation-based neuroscience.
This lecture covers the description and characterization of an input-output relationship in an information-theoretic context.
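The standard quantity for such a characterization is the mutual information between input X and output Y (the lecture's exact notation may differ):

    I(X;Y) = H(Y) - H(Y \mid X) = \sum_{x,y} P(x,y) \log_2 \frac{P(x,y)}{P(x)\, P(y)}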
This lesson is part 1 of 2 of a tutorial on statistical models for neural data.