In this course, you will learn about working with calcium-imaging data, including image processing to remove background "blur", identifying cells via intensity thresholding and spatial contiguity, time-series filtering, and principal component analysis (PCA). The accompanying MATLAB code demonstrates data animation, capabilities of the Image Processing Toolbox, and PCA.
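The pipeline this course describes can be sketched in a few steps. Below is a minimal Python version (the course itself uses MATLAB), with a synthetic movie standing in for real data; the array shapes, threshold, and kernel size are illustrative assumptions, not the course's own values.

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA

# Hypothetical input: a calcium-imaging movie as (frames, height, width).
rng = np.random.default_rng(0)
movie = rng.poisson(5, size=(200, 64, 64)).astype(float)

# 1) Remove slowly varying background "blur" by subtracting a heavy Gaussian smooth.
mean_img = movie.mean(axis=0)
background = ndimage.gaussian_filter(mean_img, sigma=10)
corrected = mean_img - background

# 2) Identify candidate cells: threshold, then group spatially contiguous pixels.
mask = corrected > corrected.mean() + 2 * corrected.std()
labels, n_cells = ndimage.label(mask)

# 3) Extract a mean time series per cell and smooth it with a moving average.
traces = np.array([movie[:, labels == i].mean(axis=1) for i in range(1, n_cells + 1)])
kernel = np.ones(5) / 5
traces = np.apply_along_axis(lambda t: np.convolve(t, kernel, mode="same"), 1, traces)

# 4) PCA across cells to summarize shared dynamics.
if n_cells >= 2:
    components = PCA(n_components=2).fit_transform(traces.T)  # (frames, 2)
```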
Bayesian inference (using prior knowledge to generate more accurate predictions about future events or outcomes) is increasingly applied in neuroscience and neuroinformatics. In this course, participants learn how Bayesian statistics can be used to build cognitive models of processes such as learning and perception. The course also offers theoretical and practical instruction on dynamic causal modeling as applied to fMRI and EEG data.
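For a minimal flavor of the approach, consider a hypothetical learner inferring a reward probability by Bayesian updating of a Beta prior; this toy model is an illustration, not an example from the course itself.

```python
import numpy as np

# Toy Bayesian learning model: an agent infers the reward probability p of
# an option from binary outcomes, starting from a Beta(1, 1) prior and
# updating its beliefs after each observation.
rng = np.random.default_rng(42)
true_p = 0.7
outcomes = rng.random(50) < true_p  # simulated binary rewards

alpha, beta = 1.0, 1.0  # Beta prior parameters
for r in outcomes:
    alpha += r       # count successes
    beta += 1 - r    # count failures

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean estimate of p: {posterior_mean:.2f} (true value {true_p})")
```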
This course consists of a three-part session from the second day of INCF's Neuroinformatics Assembly 2023. The lessons describe various ongoing efforts within neuroinformatics and clinical neuroscience to adapt to the increasingly vast volumes of brain data being collected and stored.
As research methods and experimental technologies become ever more sophisticated, the amount of accessible health-related data per individual has become vast, giving rise to a corresponding need for cross-domain data integration, whole-person modelling, and improved precision medicine. This course provides lessons describing state-of-the-art methods and repositories, as well as a tutorial on computational methods for data integration.
The importance of Research Data Management in the conduct of open and reproducible science is better understood and technically supported than ever, and many of the underlying principles apply as much to everyday activities of a single researcher as to large-scale, multi-center open data sharing.
The dimensionality and size of datasets in many fields of neuroscience research require massively parallel computing power. Fortunately, the maturity and accessibility of virtualization technologies have made it feasible to run the same analysis environments on platforms ranging from a single laptop up to high-performance computing networks.
This course features tutorials on how to use Allen atlases and digital brain atlasing tools, including operational and user features of the Allen Mouse Brain Atlas, as well as the Allen Institute's 3D viewing tool, Brain Explorer®.
Neuroscience has traditionally been a discipline where isolated labs have produced their own experimental data and created their own models to interpret their findings. However, it is becoming clear that no one lab can create cell and network models rich enough to address all the relevant biological questions, or to generate and analyse all the data required to inform, constrain, and test these models.
The Neurodata Without Borders: Neurophysiology project (NWB, https://www.nwb.org/) is an effort to standardize the description and storage of neurophysiology data and metadata. NWB enables data sharing and reuse and lowers the barrier to applying data analytics both within and across labs. Several laboratories, including the Allen Institute for Brain Science, have wholeheartedly adopted NWB.
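As a small example of what this standardization looks like in practice, the sketch below writes and reads back one acquisition time series using the pynwb reference API; the file name, identifier, and signal values are placeholders.

```python
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Build a minimal NWB file containing one acquisition time series.
nwbfile = NWBFile(
    session_description="example recording session",
    identifier="demo-0001",  # placeholder identifier
    session_start_time=datetime.now(timezone.utc),
)
signal = TimeSeries(
    name="membrane_potential",
    data=np.random.randn(1000),  # placeholder signal
    unit="mV",
    rate=1000.0,        # sampling rate in Hz
    starting_time=0.0,
)
nwbfile.add_acquisition(signal)

# Write the standardized file to disk, then read it back.
with NWBHDF5IO("demo.nwb", "w") as io:
    io.write(nwbfile)
with NWBHDF5IO("demo.nwb", "r") as io:
    loaded = io.read()
    print(loaded.acquisition["membrane_potential"].data[:10])
```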
Given the extreme interconnectedness of the human brain, studying any one cerebral area in isolation may lead to spurious results or incomplete, if not problematic, interpretations. This course introduces participants to the various spatial scales of neuroscience and the fundamentals of whole-brain modelling, used to generate a more thorough picture of brain activity.
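One common whole-brain formalism (not necessarily the one used in this course) treats brain regions as phase oscillators coupled through a structural connectivity matrix. The sketch below assumes random connectivity and made-up parameters purely for illustration.

```python
import numpy as np

# Toy whole-brain model: N regions as Kuramoto phase oscillators coupled
# through a (here random, normally empirical) connectivity matrix C.
rng = np.random.default_rng(1)
N, dt, steps = 8, 0.01, 5000
C = rng.random((N, N))                    # coupling weights between regions
np.fill_diagonal(C, 0)
omega = rng.normal(2 * np.pi, 0.5, N)     # intrinsic frequencies (rad/s)
K = 0.5                                   # global coupling strength
theta = rng.uniform(0, 2 * np.pi, N)      # initial phases

phases = np.empty((steps, N))
for t in range(steps):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j C_ij * sin(theta_j - theta_i)
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K / N * coupling)
    phases[t] = theta

# Order parameter R: global synchrony across regions over time.
R = np.abs(np.exp(1j * phases).mean(axis=1))
print(f"mean synchrony R = {R.mean():.2f}")
```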
Neurohackademy is a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. Participants learn about technologies used to analyze human neuroscience data, and to make analyses and results shareable and reproducible.
Presented by the Neuroscience Information Framework (NIF), this series consists of several lectures characterizing cutting-edge, open-source software platforms and computational tools for neuroscientists. This course offers detailed descriptions of various neuroinformatic resources such as cloud-computing services, web-based annotation tools, genome browsers, and platforms for designing and building biophysically detailed models of neurons and neural ensembles.
This course provides a general overview of brain simulation, including its fundamentals as well as clinical applications in populations with stroke, neurodegeneration, epilepsy, and brain tumors. This course also introduces the mathematical framework of multi-scale brain modeling and its analysis.
This module covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is part of the Deep Learning Course at NYU's Center for Data Science, which covers the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
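As a concrete illustration of the update rules the module covers, here is a minimal sketch of gradient descent with classical momentum on a two-dimensional quadratic; the loss, learning rate, and momentum coefficient are illustrative choices, and stochastic gradients could be mimicked by adding noise to grad(w).

```python
import numpy as np

# Gradient descent with classical momentum:
#   v <- beta * v + grad(w);  w <- w - lr * v
A = np.diag([1.0, 10.0])          # ill-conditioned quadratic curvature

def grad(w):
    # Gradient of the toy loss L(w) = 0.5 * w^T A w.
    return A @ w

w = np.array([5.0, 5.0])          # starting point
v = np.zeros_like(w)              # velocity (running average of gradients)
lr, beta = 0.05, 0.9

for step in range(200):
    v = beta * v + grad(w)        # accumulate velocity from past gradients
    w = w - lr * v                # move against the smoothed gradient

print(f"final w = {w}")           # approaches the minimum at the origin
```

With beta = 0 this reduces to plain gradient descent; momentum damps the oscillation along the steep axis of the bowl and speeds progress along the shallow one.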
As technological improvements continue to facilitate innovations in the mental health space, researchers and clinicians are faced with novel opportunities and challenges regarding study design, diagnoses, treatments, and follow-up care. This course includes a lecture outlining these new developments, as well as a workshop which introduces users to Synapse, an open-source platform for collaborative data analysis.
This course consists of brief tutorials on OpenNeuro.org, a free and open platform for analyzing and sharing neuroimaging data. During this course, you will learn how to manage your neuroscientific datasets on OpenNeuro.org, including uploading and version control, as well as how to analyze and share your data.
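For programmatic access, the community openneuro-py client offers a download helper; the sketch below assumes that client (pip install openneuro-py), and the dataset accession and subject filter are placeholder examples rather than course materials.

```python
# Sketch of fetching a public dataset from OpenNeuro with the community
# openneuro-py client. Accession number and include filter are examples.
import openneuro

openneuro.download(
    dataset="ds000246",           # public accession number on OpenNeuro.org
    target_dir="data/ds000246",   # local destination directory
    include=["sub-0001"],         # restrict the download to one subject
)
```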
This course consists of introductory lectures on different aspects of biochemical models. By following this course, you will learn about the various forms plasticity can take at different levels in the brain, how to model chemical computation in the brain, and about computationally demanding studies of synaptic plasticity at the molecular level.
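To give a flavor of chemical-computation modelling, here is a minimal mass-action sketch of a reversible binding reaction integrated with SciPy; the reaction scheme and rate constants are invented for illustration and are not taken from the course.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass-action kinetics for a reversible binding reaction A + B <-> AB.
kf, kr = 2.0, 0.5   # forward / reverse rate constants (illustrative)

def rhs(t, y):
    a, b, ab = y
    flux = kf * a * b - kr * ab   # net forward reaction flux
    return [-flux, -flux, flux]

# Integrate from initial concentrations [A]=1.0, [B]=0.8, [AB]=0.
sol = solve_ivp(rhs, (0.0, 10.0), y0=[1.0, 0.8, 0.0], max_step=0.01)
print(f"steady-state [AB] = {sol.y[2, -1]:.3f}")
```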
This course offers lectures on the origin and functional significance of certain electrophysiological signals in the brain, as well as a hands-on tutorial on how to simulate, statistically evaluate, and visualize such signals. Participants will learn to simulate signals at different spatial scales, from single-cell (neuronal spiking) to global (EEG), and how these signals may serve as biomarkers in the evaluation of mental health data.
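At the single-cell end of that scale, a simulation tutorial might begin with something like a leaky integrate-and-fire neuron; the following sketch uses assumed parameter values and is not drawn from the course materials.

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane potential v integrates a constant
# input current, leaks toward rest, and is reset after each threshold crossing.
dt, T = 1e-4, 0.5                 # time step and total duration (s)
tau = 0.02                        # membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3   # potentials (V)
R, I = 1e8, 2.5e-10               # membrane resistance (ohm), input current (A)

v = v_rest
spikes = []
for step in range(int(T / dt)):
    v += (-(v - v_rest) + R * I) / tau * dt   # leaky integration
    if v >= v_thresh:                         # threshold crossing: spike
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")
```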