The tutorial is intended primarily for beginners, but it will also be beneficial to experimentalists who understand electroencephalography and event-related techniques but need additional knowledge of the annotation, standardization, long-term storage, and publication of data.
Introduction to the first phases of the EEG/ERP data lifecycle
The course is an introduction to the field of electrophysiology standards, infrastructure, and initiatives. This lecture provides an overview of the Australian Electrophysiology Data Analytics Platform (AEDAPT): how it works, how to scale it, and how it fits into the FAIR ecosystem.
This module covers many types of non-invasive neurotechnology and neuroimaging devices, including Electroencephalography (EEG), Electromyography (EMG), Electroneurography (ENG), Magnetoencephalography (MEG), functional Near-Infrared Spectroscopy (fNIRS), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT).
Lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
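To give a flavor of the bootstrap-aggregation idea behind BASC, the sketch below repeatedly resamples simulated voxel time series, clusters each resample with k-means, and accumulates how often pairs of voxels land in the same cluster into a stability matrix. It is a minimal, hypothetical illustration, not the implementation used in the tutorials; the variable names and the simple resampling scheme are assumptions.

```python
# Hypothetical BASC-style sketch: resample time points, cluster voxels,
# and build a co-assignment (stability) matrix. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_clusters, n_bootstraps = 100, 200, 5, 50
data = rng.standard_normal((n_voxels, n_timepoints))  # simulated voxel x time matrix

stability = np.zeros((n_voxels, n_voxels))
for _ in range(n_bootstraps):
    # Resample time points with replacement (real BASC uses a circular
    # block bootstrap; plain resampling keeps this sketch short).
    sample = data[:, rng.integers(0, n_timepoints, n_timepoints)]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(sample)
    stability += (labels[:, None] == labels[None, :])  # co-assignment counts

stability /= n_bootstraps  # fraction of bootstraps in which voxel pairs co-cluster
# Stable parcels can then be obtained by clustering the stability matrix itself.
final_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stability)
```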
This lecture covers linking neuronal activity to behavior using AI-based online detection.
Estefany Suárez provides a conceptual overview of the rudiments of machine learning, including its bases in traditional statistics and the types of questions it might be applied to.
The lesson was presented in the context of the BrainHack School 2020.
Jake Vogel gives a hands-on, Jupyter-notebook-based tutorial to apply machine learning in Python to brain-imaging data.
The lesson was presented in the context of the BrainHack School 2020.
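As a rough picture of the kind of workflow such a hands-on tutorial walks through (not the actual notebook from the lesson), the sketch below arranges imaging-derived features in a subjects-by-voxels matrix, fits a linear classifier, and reports cross-validated accuracy; the simulated data and specific estimator choices are assumptions.

```python
# Minimal sketch of a typical brain-imaging ML workflow: feature matrix,
# linear classifier, cross-validation. Data are simulated stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((80, 5000))   # 80 subjects, 5000 voxel-wise features
y = rng.integers(0, 2, 80)            # binary labels (e.g., patient vs. control)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```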
Gael Varoquaux presents some advanced machine learning algorithms for neuroimaging, while addressing some real-world considerations related to data size and type.
The lesson was presented in the context of the BrainHack School 2020.
Dr. Guangyu Robert Yang describes how Recurrent Neural Networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists. In comparison to traditional computational models in neuroscience, RNNs can offer substantial advantages in explaining complex behavior and neural activity patterns. Their use allows rapid generation of mechanistic hypotheses for cognitive computations. RNNs further provide a natural way to flexibly combine bottom-up biological knowledge with top-down computational goals into network models. However, early work in this approach faced fundamental challenges. In this talk, Dr. Yang discusses some of these challenges and several recent steps his group took to partly address them and to build next-generation RNN models for cognitive neuroscience.
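To make concrete what "training an RNN on a cognitive task" can mean, the sketch below frames a toy delayed-report task as a sequence-learning problem and trains a small recurrent network on it. This is an illustrative assumption, not Dr. Yang's code; the task design, PyTorch, and all hyperparameters are chosen only for the example.

```python
# Toy cognitive task: a stimulus is shown at the start of a trial and the
# network must report its identity after a delay. Illustrative sketch only.
import torch
import torch.nn as nn

n_trials, seq_len, n_stim, hidden = 64, 20, 4, 32

def make_batch():
    stim = torch.randint(0, n_stim, (n_trials,))
    x = torch.zeros(n_trials, seq_len, n_stim)
    x[torch.arange(n_trials), 0, stim] = 1.0   # stimulus presented at t=0 only
    return x, stim                              # target: report the stimulus at trial end

class TaskRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(n_stim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_stim)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h[:, -1])           # decision read from the final hidden state

model = TaskRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(500):
    x, target = make_batch()
    loss = loss_fn(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()
```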
Serving as a good refresher, Shawn Grooms explains the maths and logic concepts that are important for programmers to understand, including sets, propositional logic, conditional statements, and more.
This compilation is courtesy of freeCodeCamp.
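For readers who prefer to see these ideas in code rather than on a whiteboard, the short Python snippet below illustrates the listed concepts (sets, propositional logic, and conditional statements); it is an added illustration, not material from the compilation itself.

```python
# Sets: membership, intersection, union.
evens = {0, 2, 4, 6, 8}
primes = {2, 3, 5, 7}
print(evens & primes)                 # intersection -> {2}
print(evens | primes)                 # union
print(4 in evens)                     # membership -> True

# Propositional logic: conjunction, disjunction, negation, and the
# conditional p -> q rewritten as (not p) or q.
p, q = True, False
print(p and q, p or q, not p)
print((not p) or q)

# Conditional statements combining both ideas.
x = 4
if x in evens and x in primes:
    print(f"{x} is an even prime")
elif x in evens:
    print(f"{x} is even but not prime")
```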
Linear algebra is the branch of mathematics concerned with linear equations and linear functions and their representation through matrices and vector spaces. As such, it underlies a huge variety of analyses in the neurosciences. This lesson provides a useful refresher which will facilitate the use of Matlab, Octave, and various matrix-manipulation and machine-learning software.
This lesson was created by RootMath.
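As a small companion to the refresher (an added example, not part of the RootMath lesson, and written in NumPy rather than Matlab or Octave), the snippet below shows the kind of matrix operations the material builds toward: solving a linear system and decomposing a matrix.

```python
# Solve a 2x2 linear system and inspect its eigendecomposition with NumPy.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # coefficient matrix
b = np.array([5.0, 10.0])       # right-hand side

x = np.linalg.solve(A, b)       # solve A @ x = b
print(x)                        # -> [1. 3.]
print(A @ x)                    # maps x back onto b, confirming the solution

# Eigendecomposition, the workhorse behind PCA and many ML methods.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)
```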