This lesson describes the development of EEGLAB as well as the extent to which it is used by the research community.
This lesson provides instruction on how to build a processing pipeline in EEGLAB for a single participant.
Whereas the previous lesson of this course outlined how to build a processing pipeline for a single participant, this lesson discusses analysis pipelines that process multiple participants simultaneously.
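A minimal sketch of such a multi-participant batch script, written against EEGLAB's MATLAB scripting interface; the file names, folder layout, and the single filtering step inside the loop are placeholders, and the lesson's actual pipeline will differ:

    % Hypothetical batch loop over participants (names and paths are placeholders)
    subjects = {'sub-01', 'sub-02', 'sub-03'};
    for s = 1:numel(subjects)
        EEG = pop_loadset('filename', [subjects{s} '.set'], 'filepath', 'data/');
        EEG = pop_eegfiltnew(EEG, 'locutoff', 1);   % example step: 1 Hz high-pass filter
        EEG = pop_saveset(EEG, 'filename', [subjects{s} '_preproc.set'], ...
                          'filepath', 'derivatives/');
    end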
In addition to outlining the motivations behind preprocessing EEG data in general, this lesson covers the first step in preprocessing data with EEGLAB: importing raw data.
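As a rough illustration of this first step, importing a raw recording from the MATLAB command line might look like the following; the BioSemi .bdf file name and paths are placeholders, and pop_biosig assumes the BIOSIG import plugin is installed:

    % Hypothetical import of a raw BioSemi recording (requires the BIOSIG plugin)
    EEG = pop_biosig('raw/sub-01_task-rest.bdf');
    EEG = eeg_checkset(EEG);   % check the dataset structure for internal consistency
    EEG = pop_saveset(EEG, 'filename', 'sub-01_raw.set', 'filepath', 'derivatives/');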
Continuing along the EEGLAB preprocessing pipeline, this tutorial walks users through how to import data events as well as EEG channel locations.
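For orientation, a sketch of both steps from the MATLAB command line; the event file, its columns, and the electrode-position lookup file are assumptions and will depend on the recording setup:

    % Hypothetical event import from a tab-delimited text file (latencies in seconds)
    EEG = pop_importevent(EEG, 'event', 'events/sub-01_events.txt', ...
                          'fields', {'latency', 'type'}, 'timeunit', 1);
    % Look up standard scalp positions by channel label
    EEG = pop_chanedit(EEG, 'lookup', 'standard-10-5-cap385.elp');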
This tutorial demonstrates how to re-reference and resample raw data in EEGLAB, why such steps are important or useful in the preprocessing pipeline, and how choices made at this step may affect subsequent analyses.
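A minimal command-line sketch of these two operations; the choice of reference and the 250 Hz target rate are illustrative, not recommendations from the tutorial:

    % Re-reference to the average of all channels ([] requests an average reference)
    EEG = pop_reref(EEG, []);
    % Alternatively, re-reference to specific channels, e.g. linked mastoids
    % (the channel indices below are placeholders)
    % EEG = pop_reref(EEG, [33 34]);
    % Downsample to 250 Hz to shrink the data and speed up later steps
    EEG = pop_resample(EEG, 250);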
In this tutorial, users learn about the various filtering options in EEGLAB, how to inspect channel properties for noisy signals, and how to filter out specific components of EEG data (e.g., electrical line noise).
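As a sketch of the scripting equivalents, assuming current EEGLAB function signatures; the cutoff and line-noise frequencies are illustrative, and pop_cleanline requires the CleanLine plugin:

    % Band-pass filter: 1 Hz high-pass and 40 Hz low-pass (cutoffs are illustrative)
    EEG = pop_eegfiltnew(EEG, 'locutoff', 1, 'hicutoff', 40);
    % Attenuate 50 Hz line noise and its first harmonic with the CleanLine plugin
    EEG = pop_cleanline(EEG, 'linefreqs', [50 100]);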
This tutorial instructs users how to visually inspect partially preprocessed EEG data in EEGLAB, specifically how to use the data browser to examine specific channels, epochs, or events for removable artifacts, whether biological (e.g., eye blinks, muscle movements, heartbeat) or technical (e.g., a corrupt channel, line noise).
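The scroll-data browser described here can also be opened from a script; a minimal call, assuming a continuous (non-ICA) dataset:

    % Open the interactive data browser on the channel data
    % (arguments: dataset, plot channels rather than ICA components,
    %  allow superimposing, enable marking segments for rejection)
    pop_eegplot(EEG, 1, 1, 1);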
This tutorial provides instruction on how to use EEGLAB to further preprocess EEG datasets by identifying and discarding bad channels which, if left unaddressed, can corrupt and confound subsequent analysis steps.
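A hedged sketch of automated and manual channel rejection from the command line; the kurtosis measure, the 5 SD threshold, and the channel labels are placeholders rather than the tutorial's settings:

    % Flag channels whose kurtosis deviates by more than 5 SD from the others
    [EEG, badChans] = pop_rejchan(EEG, 'elec', 1:EEG.nbchan, ...
                                  'threshold', 5, 'norm', 'on', 'measure', 'kurt');
    % Or drop channels known to be bad, by label
    % EEG = pop_select(EEG, 'nochannel', {'T7', 'FT9'});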
Users following this tutorial will learn how to identify and discard bad EEG data segments using the MATLAB toolbox EEGLAB.
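For continuous data, rejected segments can also be removed from a script; a minimal sketch in which the sample indices are placeholders (segments would normally be marked in the data browser or by an automated routine):

    % Remove continuous-data segments; each row is [start end] in samples
    bad_regions = [1000 2500; 90000 93000];   % placeholder indices
    EEG = eeg_eegrej(EEG, bad_regions);
    % For epoched data, drop whole epochs by index instead
    % EEG = pop_rejepoch(EEG, [3 17 42], 0);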
This lecture gives an overview of how to prepare and preprocess neuroimaging (EEG/MEG) data for use in TVB.
This lecture contains an overview of the Australian Electrophysiology Data Analytics Platform (AEDAPT), how it works, how to scale it, and how it fits into the FAIR ecosystem.
To explore the challenges and the ethical issues raised by advances in do-it-yourself (DIY) neurotechnology, the Emerging Issues Task Force of the International Neuroethics Society organized a virtual panel discussion. The panel discussed neurotechnologies such as transcranial direct current stimulation (tDCS) and electroencephalogram (EEG) headsets and their ability to change the way we understand and alter our brains. Particular attention is given to the use of neurotechnology by everyday people and the implications this has for regulatory oversight and citizen neuroscience.
This module covers many of the types of non-invasive neurotech and neuroimaging devices including electroencephalography (EEG), electromyography (EMG), electroneurography (ENG), magnetoencephalography (MEG), and more.
An introduction to data management, manipulation, visualization, and analysis for neuroscience. Students will learn scientific programming in Python, and use this to work with example data from areas such as cognitive-behavioral research, single-cell recording, EEG, and structural and functional MRI. Basic signal processing techniques including filtering are covered. The course includes a Jupyter Notebook and video tutorials.
Hierarchical Event Descriptors (HED) fill a major gap in the neuroinformatics standards toolkit, namely the specification of the nature(s) of events and time-limited conditions recorded as having occurred during time series recordings (EEG, MEG, iEEG, fMRI, etc.). Here, the HED Working Group presents an online INCF workshop on the need for, structure of, tools for, and use of HED annotation to prepare neuroimaging time series data for storing, sharing, and advanced analysis.
This lesson provides an overview of the current status in the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology.
This lesson introduces the Brain Imaging Data Structure (BIDS), the community standard for organizing, curating, and sharing neuroimaging and associated data. The session focuses on understanding the BIDS framework and learning its data structure and validation processes.
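To give a feel for the structure covered in the session, a minimal, hypothetical EEG-BIDS layout might look as follows; a dataset organized this way can then be checked with the bids-validator tool:

    my_dataset/
      dataset_description.json
      participants.tsv
      sub-01/
        eeg/
          sub-01_task-rest_eeg.edf
          sub-01_task-rest_eeg.json
          sub-01_task-rest_channels.tsv
          sub-01_task-rest_events.tsv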
This session moves from BIDS basics into analysis workflows, focusing on how to turn raw, BIDS-organized data into derivatives using BIDS Apps and containers for reproducible processing. It compares end-to-end pipelines across fMRI and PET (and notes EEG/MEG), explains typical preprocessing choices, and shows how standardized inputs plus containerized tools (Docker/Apptainer) yield consistent, auditable outputs.
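As an illustration of the pattern discussed (BIDS directory in, derivatives out, analysis level as a positional argument), a typical BIDS App invocation with Docker looks roughly like the following; the paths and version tag are placeholders, and tool-specific options (for example, fMRIPrep's FreeSurfer license) are omitted:

    docker run --rm \
      -v /path/to/bids:/data:ro \
      -v /path/to/derivatives:/out \
      nipreps/fmriprep:<version> \
      /data /out participant --participant-label 01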
The session explains GDPR rules around data sharing for research in Europe, the distinction between law and ethics, and introduces practical solutions for securely sharing sensitive datasets. Researchers have more flexibility than commonly assumed: scientific research is considered a public interest task, so explicit consent for data sharing isn’t legally required, though transparency and informing participants remain ethically important. The talk also introduces publicneuro.eu, a controlled-access platform that enables sharing neuroimaging datasets with open metadata, DOIs, and customizable access restrictions while ensuring GDPR compliance.