This course contains sessions from the second day of INCF's Neuroinformatics Assembly 2022.
This course offers lectures on the origin and functional significance of certain electrophysiological signals in the brain, as well as a hands-on tutorial on how to simulate, statistically evaluate, and visualize such signals. Participants will learn the simulation of signals at different spatial scales, including single-cell (neuronal spiking) and global (EEG), and how these may serve as biomarkers in the evaluation of mental health data.
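The two spatial scales mentioned above can be illustrated with a minimal sketch (not the course's actual materials): a homogeneous Poisson spike train stands in for single-cell activity, and a 10 Hz oscillation plus noise stands in for a global EEG-like signal. All rates and parameters below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-cell scale: homogeneous Poisson spike train in 1 ms bins.
rate_hz = 10.0              # assumed firing rate
dt = 0.001                  # bin width in seconds
duration_s = 2.0
n_bins = int(duration_s / dt)
spikes = rng.random(n_bins) < rate_hz * dt   # boolean spike train

# Global scale: a toy "EEG-like" signal, a 10 Hz alpha-band
# oscillation buried in Gaussian noise.
t = np.arange(n_bins) * dt
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n_bins)

# A simple statistical evaluation: estimate the firing rate
# from the total spike count.
est_rate = spikes.sum() / duration_s
```

Real course material would replace the Poisson generator with a biophysical neuron model and the sinusoid with a forward model of cortical sources, but the simulate-then-evaluate structure is the same.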
This course consists of two workshops which focus on the need for reproducibility in science, particularly under the umbrella of the FAIR scientific principles. The tutorials also provide an introduction to some of the most commonly used open-source scientific tools, including Git, GitHub, Google Colab, Binder, Docker, and the programming languages Python and R.
This workshop delves into the need for, structure of, tools for, and use of hierarchical event descriptor (HED) annotation to prepare neuroimaging time-series data for storage, sharing, and advanced analysis. HED is a controlled vocabulary of terms that describes events in a machine-actionable form, so that algorithms can use the information without manual recoding.
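A machine-actionable annotation of this kind can be sketched as a mapping from experiment-specific event codes to HED-style strings. The tag strings below are illustrative only, written in the general HED style of comma-separated vocabulary terms; the actual HED schema defines which tags are valid.

```python
# Illustrative mapping from lab-specific event codes to HED-style
# annotation strings (not guaranteed to match the real HED schema).
hed_annotations = {
    "stim_on": "Sensory-event, Visual-presentation",
    "button_press": "Agent-action, Participant-response",
}

def annotate(event_code: str) -> str:
    """Return a machine-actionable annotation string for an event code,
    falling back to a generic tag for unknown codes."""
    return hed_annotations.get(event_code, "Event")
```

Because the annotation lives in the data file rather than in someone's head, an analysis tool can select, say, all visual stimulus events across datasets without per-study recoding, which is the point of the workshop.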
This course consists of several lightning talks from the second day of INCF's Neuroinformatics Assembly 2023. Covering a wide range of topics, these brief talks provide snapshots of various neuroinformatics efforts, such as brain-computer interface standards, handling multimodal animal MRI datasets, distributed data management, and several more.
This course contains sessions from the first day of INCF's Neuroinformatics Assembly 2022.
The Neurodata Without Borders: Neurophysiology project (NWB, https://www.nwb.org/) is an effort to standardize the description and storage of neurophysiology data and metadata. NWB enables data sharing and reuse and lowers the energy barrier to applying data analytics both within and across labs. Several laboratories, including the Allen Institute for Brain Science, have wholeheartedly adopted NWB.
Neuromatch Academy aims to introduce traditional and emerging tools of computational neuroscience to trainees.
This course is designed for advanced learners interested in understanding the foundations of machine learning in Python.
General: The course consists of 15 lectures (ca. 1-2 hours each) and 15 exercise sheets (for ca. 6 hours of programming each).
Institution: High-Performance Computing and Analytics Lab, University of Bonn
This module covers fMRI data, including creating and interpreting flatmaps, exploring variability and average responses, and visual eccentricity. You will learn about processing BOLD signals, trial-averaging, and t-tests. The MATLAB code introduces data animations, multicolor visualizations, and linear indexing.
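The module's code is in MATLAB; the sketch below shows the same trial-averaging and t-test workflow in Python on synthetic single-voxel data. The trial counts and effect size are made up for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic BOLD responses for one voxel: trials x timepoints,
# with a constant positive response added on stimulus trials
# (effect size 0.8 is invented).
n_trials, n_time = 20, 50
stim_trials = rng.standard_normal((n_trials, n_time)) + 0.8
ctrl_trials = rng.standard_normal((n_trials, n_time))

# Trial-averaging: mean time course across stimulus trials.
avg_response = stim_trials.mean(axis=0)

# t-test: compare per-trial mean responses between conditions.
t_stat, p_value = stats.ttest_ind(
    stim_trials.mean(axis=1),
    ctrl_trials.mean(axis=1),
)
```

Averaging across trials suppresses trial-to-trial noise, and the independent-samples t-test then asks whether the stimulus condition differs from control, which is exactly the logic the module applies to real BOLD data.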
This course consists of three lessons, each corresponding to a lightning talk given at the first day of INCF's Neuroinformatics Assembly 2023. By following along these brief talks, you will hear about topics such as open source tools for computer vision, tools for the integration of various MRI dataset formats, as well as international data governance.
This course corresponds to the first session of talks given at INCF's Neuroinformatics Assembly 2023. The session consists of several lectures focusing on using the principles of FAIR (findability, accessibility, interoperability, and reusability) to inform future directions in neuroscience and neuroinformatics. In particular, these talks deal with the development of knowledge graphs and ontologies.
This is a freely available online course on neuroscience for people with a machine learning background. The aim is to bring together these two fields that have a shared goal in understanding intelligent processes. Rather than pushing for “neuroscience-inspired” ideas in machine learning, the idea is to broaden the conceptions of both fields to incorporate elements of the other in the hope that this will lead to new, creative thinking.
As research methods and experimental technologies become ever more sophisticated, the amount of accessible health-related data per individual has become vast, giving rise to a corresponding need for cross-domain data integration, whole-person modelling, and improved precision medicine. This course provides lessons describing state-of-the-art methods and repositories, as well as a tutorial on computational methods for data integration.
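At its simplest, cross-domain integration means joining per-individual records from different sources on a shared subject identifier. The sketch below does this with pandas; the domain names, column names, and values are all hypothetical.

```python
import pandas as pd

# Hypothetical per-individual records from two domains.
genomics = pd.DataFrame(
    {"subject_id": [1, 2, 3], "risk_score": [0.2, 0.7, 0.4]}
)
imaging = pd.DataFrame(
    {"subject_id": [2, 3, 4], "hippocampus_vol": [3.1, 2.8, 3.3]}
)

# Integrate on the shared identifier; an inner join keeps only
# individuals observed in both domains.
merged = genomics.merge(imaging, on="subject_id", how="inner")
```

Real integration pipelines add harmonization of units, ontologies, and missing data on top, but the join on a common identifier is the structural core.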
This working group is a collaboration between OCNS and INCF. The group focuses on evaluating and testing computational neuroscience tools: finding them, learning how they work, and informing developers of issues, so that these tools remain in good shape with communities looking after them. Since many members of the WG are themselves tool developers, we will also learn from each other and work towards improving interoperability between related tools.
Sessions from day 1 of the INCF Neuroinformatics Assembly 2022.
In this course, you will learn about working with calcium-imaging data, including image processing to remove background "blur", identifying cells based on threshold spatial contiguity, time-series filtering, and principal component analysis (PCA). The MATLAB code shows data animations, capabilities of the image processing toolbox, and PCA.
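The course's code is in MATLAB; the Python/NumPy sketch below runs the same pipeline on a synthetic movie: background removal via the per-pixel temporal median, cell identification by thresholding, and PCA via an SVD of the pixel time series. All sizes, rates, and thresholds are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic movie: 100 frames of a 32x32 field with a smooth
# background gradient plus one bright "cell" in a 4x4 region,
# active on the first 25 frames.
frames = rng.normal(0.0, 0.1, size=(100, 32, 32))
frames += np.linspace(0, 1, 32)[None, None, :]   # horizontal gradient
frames[:25, 10:14, 10:14] += 2.0                 # active cell

# Background removal: subtract each pixel's temporal median,
# which flattens the static gradient ("blur").
cleaned = frames - np.median(frames, axis=0)

# Cell identification: threshold the time-averaged cleaned image
# (0.3 is an assumed threshold).
mean_img = cleaned.mean(axis=0)
mask = mean_img > 0.3

# PCA on pixel time series: center the frames-by-pixels matrix
# and take the SVD; the first left singular vector (scaled by its
# singular value) gives the dominant temporal component.
X = cleaned.reshape(100, -1)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * S[0]
```

Because the cell's activity dominates the variance of the cleaned movie, the mask recovers exactly the 4x4 active region, and the first principal component tracks its on/off time course.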
The dimensionality and size of datasets in many fields of neuroscience research require massively parallel computing power. Fortunately, the maturity and accessibility of virtualization technologies have made it feasible to run the same analysis environments on platforms ranging from single laptop computers up to high-performance computing networks.