Sessions from the INCF Neuroinformatics Assembly 2022 day 2.
This course is the opening module for the University of Toronto's Krembil Centre for Neuroinformatics' virtual learning series Solving Problems in Mental Health Using Multi-Scale Computational Neuroscience. Lessons in this course introduce participants to the study of brain disorders, starting from elemental units like genes and neurons, eventually building up to whole-brain modelling and global activity patterns.
Difficulties experienced in understanding machine learning techniques often stem from lack of clarity concerning more basic statistical models and fundamental considerations, including the various regression models that can all be subsumed under the General Linear Model.
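To make the point concrete, ordinary linear regression, the simplest instance of the General Linear Model, can be fit in a few lines. This is a minimal Python sketch on synthetic data (all variable names and values are illustrative, not course materials):

```python
import numpy as np

# Synthetic example: fit y = b0 + b1*x by ordinary least squares,
# the simplest instance of the General Linear Model.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.0 + 2.0 * x + rng.normal(0, 1, size=50)   # true intercept 1, slope 2

X = np.column_stack([np.ones_like(x), x])       # design matrix with intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                     # close to [1.0, 2.0]
```

The same design-matrix formulation extends to t-tests, ANOVA, and ANCOVA, which is why they can all be subsumed under the GLM.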
This module covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is part of the Deep Learning Course at NYU's Center for Data Science, a course covering the latest techniques in deep learning and representation learning: supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
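The optimisation ideas named above fit in a few lines of Python. This toy implementation uses the classical heavy-ball form of momentum on a full-batch gradient (hyperparameter values are illustrative and not taken from the course):

```python
import numpy as np

def sgd_momentum(grad, w0, lr=0.1, beta=0.9, steps=100):
    """Gradient descent with heavy-ball momentum (full-batch here;
    replacing grad() with a minibatch estimate would make it stochastic)."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad(w)   # accumulate a velocity term
        w = w - lr * v           # step along the smoothed gradient
    return w

# Minimise f(w) = ||w||^2, whose gradient is 2w; the minimum is at 0.
w_final = sgd_momentum(lambda w: 2 * w, [5.0, -3.0])
print(w_final)
```

With beta=0 this reduces to plain gradient descent; the velocity term damps oscillations and speeds progress along shallow directions.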
This module introduces computational neuroscience by simulating neurons according to the AdEx model. You will learn about generative modeling, dynamical systems, and F-I curves. The MATLAB code introduces live scripts and functions.
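The course's code is in MATLAB; purely as an illustration of the model, here is a Python sketch of an AdEx simulation using forward-Euler integration (all parameter values are illustrative defaults, not the course's):

```python
import numpy as np

def simulate_adex(I=0.5e-9, T=0.5, dt=1e-4):
    """Forward-Euler simulation of an AdEx neuron; returns the spike count.
    I is the injected current (A), T the duration (s), dt the step (s)."""
    C, gL, EL = 200e-12, 10e-9, -70e-3    # capacitance, leak conductance, rest
    VT, DT = -50e-3, 2e-3                 # exponential threshold and slope factor
    a, b, tau_w = 2e-9, 0.0, 30e-3        # adaptation coupling, jump, time constant
    V_reset, V_spike = -58e-3, 0.0        # reset and spike-detection voltages
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V >= V_spike:                  # spike: reset voltage, bump adaptation
            V, w, spikes = V_reset, w + b, spikes + 1
    return spikes

n_spikes = simulate_adex()
print(n_spikes)
```

Sweeping I and recording the spike count per second is exactly how an F-I curve is built.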
The goal of this module is to work with action potential data taken from a publicly available database. You will learn about spike counts, orientation tuning, and spatial maps. The MATLAB code introduces data types, for-loops and vectorization, indexing, and data visualization.
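The module's code is MATLAB; as a language-agnostic illustration, a toy orientation-tuning computation on synthetic spike counts might look like this in Python (all numbers are invented, not the module's data):

```python
import numpy as np

# Synthetic experiment: 6 grating orientations, 20 trials each, with
# Poisson spike counts tuned around 90 degrees.
rng = np.random.default_rng(1)
orientations = np.repeat(np.arange(0, 180, 30), 20)
rates = 5 + 20 * np.exp(-((orientations - 90) / 25.0) ** 2)
spike_counts = rng.poisson(rates)

# Trial-averaged tuning curve: mean spike count per orientation.
unique_oris = np.unique(orientations)
tuning = np.array([spike_counts[orientations == o].mean() for o in unique_oris])
preferred = unique_oris[np.argmax(tuning)]
print(preferred)   # the synthetic cell prefers 90 degrees
```

The boolean indexing used here (`orientations == o`) is the same vectorized-selection idea the MATLAB tutorials teach with logical indexing.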
This course consists of one lesson and one tutorial, focusing on the neural connectivity measures derived from neuroimaging, specifically from methods like functional magnetic resonance imaging (fMRI) and diffusion-weighted imaging (DWI). Additional tools such as tractography and parcellation are discussed in the context of brain connectivity and mental health. The tutorial leads participants through the computation of brain connectomes from fMRI data.
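At its core, computing a functional connectome reduces to correlating regional time series. A minimal Python sketch on synthetic data (the parcel count and signals are invented for illustration, and real pipelines add parcellation, confound regression, and filtering first):

```python
import numpy as np

# Synthetic "parcellated fMRI" data: 200 timepoints for 4 parcels.
rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 4))
ts[:, 1] += 0.8 * ts[:, 0]      # make parcels 0 and 1 co-fluctuate

# The functional connectome is the parcel-by-parcel correlation matrix.
connectome = np.corrcoef(ts.T)
print(connectome.shape)         # (4, 4)
```

Structural connectomes from DWI replace the correlation with streamline counts between parcels, but produce the same parcel-by-parcel matrix.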
This course consists of two workshops which focus on the need for reproducibility in science, particularly under the umbrella of the FAIR (Findable, Accessible, Interoperable, Reusable) principles. The tutorials also provide an introduction to some of the most commonly used open-source scientific tools, including Git, GitHub, Google Colab, Binder, Docker, and the programming languages Python and R.
Given the extreme interconnectedness of the human brain, studying any one cerebral area in isolation may lead to spurious results or incomplete, if not problematic, interpretations. This course introduces participants to the various spatial scales of neuroscience and the fundamentals of whole-brain modelling, used to generate a more thorough picture of brain activity.
Sessions from the INCF Neuroinformatics Assembly 2022 day 1.
As technological improvements continue to facilitate innovations in the mental health space, researchers and clinicians are faced with novel opportunities and challenges regarding study design, diagnoses, treatments, and follow-up care. This course includes a lecture outlining these new developments, as well as a workshop which introduces users to Synapse, an open-source platform for collaborative data analysis.
Neuromatch Academy aims to introduce traditional and emerging tools of computational neuroscience to trainees.
Data science relies on several important aspects of mathematics. In this course, you'll learn what forms of mathematics are most useful for data science, and see some worked examples of how math can solve important data science problems.
Notebook systems are proving invaluable to skill acquisition, research documentation, publication, and reproducibility. This series of presentations introduces the most popular platform for computational notebooks, Project Jupyter, as well as other resources like Binder and NeuroLibre.
The Neurodata Without Borders: Neurophysiology project (NWB, https://www.nwb.org/) is an effort to standardize the description and storage of neurophysiology data and metadata. NWB enables data sharing and reuse and reduces the energy barrier to applying data analytics both within and across labs. Several laboratories, including the Allen Institute for Brain Science, have wholeheartedly adopted NWB.
Bayesian inference (using prior knowledge to generate more accurate predictions about future events or outcomes) has become increasingly applied to the fields of neuroscience and neuroinformatics. In this course, participants are taught how Bayesian statistics may be used to build cognitive models of processes like learning or perception. This course also offers theoretical and practical instruction on dynamic causal modeling as applied to fMRI and EEG data.
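To give a flavour of the machinery involved, the textbook conjugate Beta-Binomial update (a standard illustration, not an example taken from the course) shows how a prior belief is revised by data:

```python
# Conjugate Beta-Binomial update: a Beta(a, b) prior over a success
# probability, updated after observing k successes in n trials.
a, b = 2, 2            # a weak prior centred on 0.5
k, n = 7, 10           # hypothetical observations
a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 9/14, about 0.643
```

Cognitive models of learning and perception apply the same prior-to-posterior logic, just with richer likelihoods than a coin flip.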
This course explores ethical and social issues that have arisen, and continue to arise, from rapid research developments in neuroscience, medicine, and ICT (information and communication technology). Lectures focus on key ethical issues raised by the HBP (Human Brain Project), such as the ethics of robotics, dual use, ICT ethical issues, big data and individual privacy, and the use of animals in research.
This workshop delves into the need for, structure of, tools for, and use of hierarchical event descriptor (HED) annotation to prepare neuroimaging time series data for storing, sharing, and advanced analysis. HED is a controlled vocabulary of terms describing events in a machine-actionable form, so that algorithms can use the information without manual recoding.
This is a freely available online course on neuroscience for people with a machine learning background. The aim is to bring together these two fields that have a shared goal in understanding intelligent processes. Rather than pushing for “neuroscience-inspired” ideas in machine learning, the idea is to broaden the conceptions of both fields to incorporate elements of the other in the hope that this will lead to new, creative thinking.
This course includes two tutorials on R, a programming language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, etc.) and graphical techniques, and is highly extensible.