This lecture provides an overview of the Australian Electrophysiology Data Analytics Platform (AEDAPT): how it works, how to scale it, and how it fits into the FAIR ecosystem.
As researchers develop new non-invasive direct-to-consumer technologies that read and stimulate the brain, society must consider the appropriate uses of such devices. Will these brain technologies eventually allow enhancement of abilities beyond human capabilities? In what settings are people using these devices outside the purview of researchers or clinicians? Should consumers be allowed to ‘hack’ their own brain in order to improve performance?
To explore these challenges and the ethical issues raised by advances in do-it-yourself (DIY) neurotechnology, the Emerging Issues Task Force of the International Neuroethics Society organized a virtual panel discussion. The panel discussed neurotechnologies such as transcranial direct current stimulation (tDCS) and electroencephalogram (EEG) headsets and their ability to change the way we understand and alter our brains. Particular attention was given to the use of neurotechnology by everyday people and the implications this has for regulatory oversight and citizen neuroscience.
Panelists included:
This module covers many types of non-invasive neurotechnology and neuroimaging devices, including Electroencephalography (EEG), Electromyography (EMG), Electroneurography (ENG), Magnetoencephalography (MEG), functional Near-Infrared Spectroscopy (fNIRS), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT).
An introduction to data management, manipulation, visualization, and analysis for neuroscience. Students will learn scientific programming in Python, and use this to work with example data from areas such as cognitive-behavioral research, single-cell recording, EEG, and structural and functional MRI. Basic signal processing techniques including filtering are covered. The course includes a Jupyter Notebook and video tutorials.
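As a rough illustration of the kind of filtering such a course covers, here is a minimal SciPy sketch; the signal, sampling rate, and cutoff frequencies are illustrative and not taken from the course materials:

```python
import numpy as np
from scipy import signal

# Illustrative example (not from the course): band-pass filter a noisy
# 10 Hz sinusoid standing in for a single EEG channel sampled at 250 Hz.
fs = 250.0                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                # 10 seconds of samples
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# 4th-order Butterworth band-pass (1-40 Hz), applied forward and backward
# with sosfiltfilt to avoid phase distortion.
sos = signal.butter(4, [1, 40], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, eeg)
```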
Hierarchical Event Descriptors (HED) fill a major gap in the neuroinformatics standards toolkit, namely the specification of the nature(s) of events and time-limited conditions recorded as having occurred during time series recordings (EEG, MEG, iEEG, fMRI, etc.). Here, the HED Working Group presents an online INCF workshop on the need for, structure of, tools for, and use of HED annotation to prepare neuroimaging time series data for storing, sharing, and advanced analysis.
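For a sense of what HED annotation looks like, here is a hypothetical sketch of annotations for two event codes; the tag strings follow the HED vocabulary in spirit, but exact tags should be checked against the current HED schema:

```python
# Hypothetical sketch: HED annotations for two event codes in an events file,
# expressed as comma-separated tag strings (verify tags against the HED schema).
hed_annotations = {
    "show_stimulus": "Sensory-event, Visual-presentation, (Red, Square)",
    "button_press": "Agent-action, Participant-response, (Press, Mouse-button)",
}
```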
Lecture on functional brain parcellations and a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, which were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
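As a hedged illustration of how such a parcellation is typically used downstream (not code from the tutorials), the following sketch extracts parcel-averaged time series with Nilearn; the file paths are hypothetical placeholders:

```python
from nilearn.maskers import NiftiLabelsMasker  # nilearn >= 0.9

# Hypothetical file paths, for illustration only.
parcellation_img = "basc_parcellation_64.nii.gz"   # a BASC-style labels image
func_img = "subject01_task_bold.nii.gz"            # a preprocessed fMRI run

# Average the BOLD signal within each parcel to get a time x parcels matrix.
masker = NiftiLabelsMasker(labels_img=parcellation_img, standardize=True)
parcel_timeseries = masker.fit_transform(func_img)  # shape: (n_timepoints, n_parcels)
```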
Neuronify is an educational tool meant to build intuition for how neurons and neural networks behave. You can use it to combine neurons with different connections, just like the ones in our brain, and explore how changes to single cells lead to behavioral changes in important networks. Neuronify is based on an integrate-and-fire model of neurons, one of the simplest neuron models in existence. It focuses on the spike timing of a neuron and ignores the details of the action potential dynamics. The neurons are modeled as simple RC circuits: when the membrane potential rises above a threshold, a spike is generated and the voltage is reset to the resting potential. The spike then signals other neurons through its synapses.
Neuronify aims to provide a low entry point to simulation-based neuroscience.
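To make the model concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron in the spirit of the description above; the parameter values are illustrative and are not Neuronify's defaults:

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch (illustrative parameters).
dt, t_max = 1e-4, 0.5            # time step and duration (s)
tau, v_rest = 0.02, -70e-3       # membrane time constant (s), resting potential (V)
v_thresh, r_m = -50e-3, 1e8      # spike threshold (V), membrane resistance (ohm)
i_in = 0.3e-9                    # constant input current (A)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # RC-circuit dynamics: leak back toward rest plus driven input current.
    v += dt / tau * (-(v - v_rest) + r_m * i_in)
    if v >= v_thresh:            # threshold crossed: emit a spike and reset
        spike_times.append(step * dt)
        v = v_rest

print(f"{len(spike_times)} spikes in {t_max} s")
```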
This lecture covers linking neuronal activity to behavior using AI-based online detection.
This lesson gives an in-depth introduction to ethics in the field of artificial intelligence, particularly in the context of its impact on humans and the public interest. As the healthcare sector is increasingly affected by the deployment of ever more powerful AI algorithms, this lecture covers key interests which must be protected going forward, including privacy, consent, human autonomy, inclusiveness, and equity.
This lesson describes a definitional framework for fairness and health equity in the age of the algorithm. While acknowledging the impressive capability of machine learning to positively affect health equity, this talk outlines potential (and actual) pitfalls which come with such powerful tools, ultimately making the case for collaborative, interdisciplinary, and transparent science as a way to operationalize fairness in health equity.
This lecture covers self-supervision as it relates to neural data tasks and the Mine Your Own vieW (MYOW) approach.
As part of NeuroHackademy 2020, Elizabeth DuPre gives a lecture on "Nilearn", a Python package that provides flexible statistical and machine-learning tools for brain volumes by leveraging the scikit-learn toolbox for multivariate statistics. This includes predictive modelling, classification, decoding, and connectivity analysis.
This video is courtesy of the University of Washington eScience Institute.
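As a hedged sketch of the kind of decoding workflow Nilearn supports (not code from the lecture), the following example uses Nilearn's Decoder with hypothetical input files:

```python
from nilearn.decoding import Decoder

# Hypothetical inputs, for illustration only: single-trial beta maps and
# their condition labels from some decoding experiment.
beta_maps = ["trial_001.nii.gz", "trial_002.nii.gz",
             "trial_003.nii.gz", "trial_004.nii.gz"]
labels = ["face", "house", "face", "house"]

# Decoder wraps scikit-learn estimators (here a linear SVM) with
# neuroimaging-aware masking and cross-validation.
decoder = Decoder(estimator="svc", standardize=True, cv=2)
decoder.fit(beta_maps, labels)
predictions = decoder.predict(beta_maps)
```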
Estefany Suárez provides a conceptual overview of the rudiments of machine learning, including its bases in traditional statistics and the types of questions it might be applied to.
The lesson was presented in the context of the BrainHack School 2020.
Jake Vogel gives a hands-on, Jupyter-notebook-based tutorial to apply machine learning in Python to brain-imaging data.
The lesson was presented in the context of the BrainHack School 2020.
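The following is a minimal, self-contained sketch of the kind of workflow such a tutorial walks through, using synthetic features as a stand-in for brain-imaging data (not code from the tutorial itself):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for brain-imaging data: 100 "subjects" with 500
# voxel-like features each, plus binary group labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
y = rng.integers(0, 2, size=100)

# Cross-validated classification accuracy with a linear SVM.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```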
Gael Varoquaux presents some advanced machine learning algorithms for neuroimaging, while addressing some real-world considerations related to data size and type.
The lesson was presented in the context of the BrainHack School 2020.
This lesson from freeCodeCamp introduces Scikit-learn, the most widely used machine learning Python library.
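A minimal example of scikit-learn's fit/score workflow, for orientation; the dataset and estimator choices here are illustrative, not necessarily those used in the lesson:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Basic scikit-learn workflow: split the data, fit an estimator, score it.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```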
Dr. Guangyu Robert Yang describes how Recurrent Neural Networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists. Compared to traditional computational models in neuroscience, RNNs offer substantial advantages in explaining complex behavior and neural activity patterns. Their use allows rapid generation of mechanistic hypotheses for cognitive computations. RNNs further provide a natural way to flexibly combine bottom-up biological knowledge with top-down computational goals into network models. However, early work with this approach faced fundamental challenges. In this talk, Dr. Yang discusses some of these challenges and several recent steps his group has taken to partly address them and to build next-generation RNN models for cognitive neuroscience.
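As a hedged sketch of the general approach (not Dr. Yang's code), the following PyTorch snippet defines a vanilla RNN that maps a stream of noisy evidence to a two-choice decision, the kind of cognitive task such models are trained on; all architectural choices and values are illustrative:

```python
import torch
import torch.nn as nn

# Illustrative sketch: a vanilla RNN for a two-alternative decision task.
class CognitiveRNN(nn.Module):
    def __init__(self, n_inputs=2, n_hidden=64, n_outputs=2):
        super().__init__()
        self.rnn = nn.RNN(n_inputs, n_hidden, nonlinearity="relu", batch_first=True)
        self.readout = nn.Linear(n_hidden, n_outputs)

    def forward(self, x):
        hidden, _ = self.rnn(x)       # hidden: (batch, time, n_hidden)
        return self.readout(hidden)   # choice logits at every time step

# Toy trials: 8 trials of 20 time steps with noisy evidence favoring choice 0.
evidence = torch.randn(8, 20, 2) + torch.tensor([0.5, -0.5])
model = CognitiveRNN()
outputs = model(evidence)                        # (8, 20, 2) logits over time
loss = nn.CrossEntropyLoss()(outputs[:, -1, :],  # score the final-time decision
                             torch.zeros(8, dtype=torch.long))
loss.backward()                                  # gradients for training, e.g. with Adam
```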
Introduction to the Mathematics chapter of Datalabcc's "Foundations in Data Science" series.
Primer on elementary algebra.