The landscape of scientific research is changing. Today’s researchers must participate in large-scale collaborations, obtain and manage funding, share data, publish, and undertake knowledge translation activities in order to be successful. Given these increasing demands, science management has become a vital part of the research environment. This course consists of lectures presenting practical techniques, tools, and project management skills that participants can begin to implement immediately.
This course includes two tutorials on R, a programming language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, etc.) and graphical techniques, and is highly extensible.
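The course material itself is in R; purely as a language-neutral illustration of the first technique listed (linear modelling), here is an ordinary least-squares fit, the rough analogue of R's lm(), on synthetic data (the data and parameter values are invented for the sketch):

```python
import numpy as np

# Synthetic data with a known linear relationship: true slope 2, intercept 1,
# plus Gaussian noise. All values here are illustrative assumptions.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

# Ordinary least-squares fit of a degree-1 polynomial (a straight line)
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # estimates close to the true values 2 and 1
```

In R the equivalent one-liner would be `lm(y ~ x)`; the point is only that "linear modelling" means recovering the slope and intercept that best explain the data.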
EEGLAB is an interactive MATLAB toolbox for processing continuous and event-related EEG, MEG, and other electrophysiological data. In this course, you will learn about features incorporated into EEGLAB, including independent component analysis (ICA), time/frequency analysis, artifact rejection, event-related statistics, and several useful visualization modes for averaged and single-trial data. EEGLAB runs under Linux, Unix, Windows, and Mac OS X.
This course, arranged by EPFL and also available as a MOOC on edX, aims for a mechanistic description of mammalian brain function at the level of individual nerve cells and their synaptic interactions.
In this short course, you will learn about Jupyter Notebooks, an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
In this module, you will work with human EEG data recorded during a steady-state visual evoked potential study (SSVEP, aka flicker). You will learn about spectral analysis, alpha activity, and topographical mapping. The MATLAB code introduces functions, sorting, and correlation analysis.
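As a rough illustration of the spectral-analysis step (in Python rather than the module's MATLAB, and with a synthetic signal standing in for the real EEG recording), the core idea is to find the flicker frequency as a peak in the amplitude spectrum:

```python
import numpy as np

# Synthetic stand-in for an SSVEP recording: a 12 Hz flicker response
# buried in white noise. Sampling rate and duration are assumptions.
rng = np.random.default_rng(0)
fs = 250                      # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # 4 seconds of data
flicker_hz = 12
eeg = np.sin(2 * np.pi * flicker_hz * t) + rng.normal(0, 1, t.size)

# Amplitude spectrum via the real-valued FFT, up to the Nyquist frequency
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(peak_hz)  # peaks at the flicker frequency, 12.0 Hz
```

The steady-state response shows up as a narrow spectral peak at the stimulation frequency even when it is invisible in the raw trace, which is what makes SSVEP paradigms robust.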
Sessions from the INCF Neuroinformatics Assembly 2022 Day 3.
This course consists of several lightning talks from the second day of INCF's Neuroinformatics Assembly 2023. Covering a wide range of topics, these brief talks provide snapshots of various neuroinformatic efforts such as brain-computer interface standards, dealing with multimodal animal MRI datasets, distributed data management, and several more.
The emergence of data-intensive science creates a demand for neuroscience educators worldwide to deliver better neuroinformatics education and training, raising a generation of modern neuroscientists with FAIR capabilities, awareness of the value of standards and best practices, knowledge of how to handle big datasets, and the ability to integrate knowledge across multiple scales and methods.
This course includes both lectures and tutorials around the management and analysis of genomic data in clinical research and care. Participants are led through the basics of genome-wide association studies (GWAS), genotypes, and polygenic risk scores, as well as novel concepts and tools for more sophisticated consideration of population stratification in GWAS.
Sessions from the INCF Neuroinformatics Assembly 2022 Day 2.
In this course, we present the TVB-EBRAINS integrated workflows developed in the Human Brain Project during its third funding phase (“SGA2”) within Co-Design Project 8, “The Virtual Brain”.
Standards and best practices make neuroscience a data-centric discipline and are key for integrating diverse data and for developing a robust, effective, and sustainable infrastructure to support open and reproducible neuroscience. This study track provides an introduction to standards and best practices that support the FAIR Principles.
This course offers lectures on the origin and functional significance of certain electrophysiological signals in the brain, as well as a hands-on tutorial on how to simulate, statistically evaluate, and visualize such signals. Participants will learn the simulation of signals at different spatial scales, including single-cell (neuronal spiking) and global (EEG), and how these may serve as biomarkers in the evaluation of mental health data.
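As a minimal sketch of simulation at the smallest of those spatial scales (a generic leaky integrate-and-fire neuron in Python; all parameter values are illustrative assumptions, not the course's model):

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# Parameters (ms, mV, arbitrary current units) are illustrative only.
dt, T = 0.1, 100.0                                # time step, duration (ms)
tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -70.0
i_ext = 2.0                                       # constant driving current

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Membrane potential decays toward rest and is pushed up by the input
    v += (-(v - v_rest) + i_ext * tau) / tau * dt
    if v >= v_thresh:                             # threshold crossing: spike
        spike_times.append(step * dt)
        v = v_reset                               # reset after the spike
print(len(spike_times))  # regular spiking under constant drive
```

With constant suprathreshold drive the model fires periodically; the inter-spike interval, set by tau and the drive strength, is the kind of quantity the course relates to measurable biomarkers.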
This course is currently under construction and will be available soon. It will give an overview of the world of scientific publishing, spanning from traditional formats to open access, and on to open, interactive, reproducible, and 'living' publications with modifiable and executable code.
These lessons give an overview of the principles underpinning the objectives, policies, and practice of Open Science, including several representative policy documents that will be increasingly relevant to neuroscience research.
As technological improvements continue to facilitate innovations in the mental health space, researchers and clinicians are faced with novel opportunities and challenges regarding study design, diagnoses, treatments, and follow-up care. This course includes a lecture outlining these new developments, as well as a workshop which introduces users to Synapse, an open-source platform for collaborative data analysis.
This module covers the concepts of model predictive control, emulation of the kinematics from observations, training a policy, and predictive policy learning under uncertainty. It is a part of the Deep Learning Course at NYU's Center for Data Science, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
Neuromatch Academy aims to introduce traditional and emerging tools of computational neuroscience to trainees.