The workshop will include interactive seminars given by selected experts in the field covering all aspects of (FAIR) small animal MRI data acquisition, analysis, and sharing. The seminars will be followed by hands-on training where participants will perform use case scenarios using software established by the organizers. This will include an introduction to the basics of using command line interfaces, Python installation, working with Docker/Singularity containers, Datalad/Git, and BIDS.
This course, consisting of one lecture and two workshops, is presented by the Computational Genomics Lab at the Centre for Addiction and Mental Health and University of Toronto. The lecture deals with single-cell and bulk-level transcriptomics, while the two hands-on workshops introduce users to transcriptomic data types (e.g., RNAseq) and how to perform analyses in specific use cases (e.g., cellular changes in major depression).
This lecture series is presented by NeuroTechEU, an alliance of eight European universities with the goal of building a trans-European network of excellence in brain research and technologies. By following along with this series, participants will learn about the history of cognitive science and the development of the field in a sociocultural context, as well as its trajectory into the future with the advent of artificial intelligence and neural network development.
These lessons give an overview of the principles underpinning the objectives, policies, and practice of Open Science, including several representative policy documents that will be increasingly relevant to neuroscience research.
Notebook systems are proving invaluable to skill acquisition, research documentation, publication, and reproducibility. This series of presentations introduces the most popular platform for computational notebooks, Project Jupyter, as well as other resources like Binder and NeuroLibre.
The importance of Research Data Management in the conduct of open and reproducible science is better understood and technically supported than ever, and many of the underlying principles apply as much to everyday activities of a single researcher as to large-scale, multi-center open data sharing.
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and remaining barriers.
This course consists of a three-part session from the second day of INCF's Neuroinformatics Assembly 2023. The lessons describe various on-going efforts within the fields of neuroinformatics and clinical neuroscience to adjust to the increasingly vast volumes of brain data being collected and stored.
As research methods and experimental technologies grow ever more sophisticated, the amount of accessible health-related data per individual has become vast, giving rise to a corresponding need for cross-domain data integration, whole-person modelling, and improved precision medicine. This course provides lessons describing state-of-the-art methods and repositories, as well as a tutorial on computational methods for data integration.
This module provides an introduction to the motivation of deep learning and its history and inspiration.
This module covers fMRI data, including creating and interpreting flatmaps, exploring variability and average responses, and visual eccentricity. You will learn about processing BOLD signals, trial-averaging, and t-tests. The MATLAB code introduces data animations, multicolor visualizations, and linear indexing.
The Neurodata Without Borders: Neurophysiology project (NWB:N, https://www.nwb.org/) is an effort to standardize the description and storage of neurophysiology data and metadata. NWB enables data sharing and reuse and reduces the energy barrier to applying data analytics both within and across labs. Several laboratories, including the Allen Institute for Brain Science, have wholeheartedly adopted NWB.
Given the extreme interconnectedness of the human brain, studying any one cerebral area in isolation may lead to spurious results or incomplete, if not problematic, interpretations. This course introduces participants to the various spatial scales of neuroscience and the fundamentals of whole-brain modelling, used to generate a more thorough picture of brain activity.
This module covers the concepts of model predictive control, emulating kinematics from observations, training a policy, and predictive policy learning under uncertainty. It is a part of the Deep Learning Course at NYU's Center for Data Science, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
Get up to speed on the fundamental principles of full-brain network modeling using the open-source neuroinformatics platform The Virtual Brain (TVB). This simulation environment enables biologically realistic modeling of whole-brain network dynamics across different brain scales, using a personalized, structural connectome-based approach.
This workshop is organized by the German National Research Data Infrastructure Initiative Neuroscience (NFDI-Neuro). The initiative is community driven and comprises around 50 contributing national partners and collaborators. NFDI-Neuro partners with EBRAINS AISB, the coordinating entity of the EU Human Brain Project and the EBRAINS infrastructure. We will introduce common methods that enable digital reproducible neuroscience.
Neurohackademy is a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. Participants learn about technologies used to analyze human neuroscience data, and to make analyses and results shareable and reproducible.
The emergence of data-intensive science creates a demand for neuroscience educators worldwide to deliver better neuroinformatics education and training, in order to raise a generation of modern neuroscientists who have FAIR capabilities, appreciate the value of standards and best practices, can work with big datasets, and can integrate knowledge across multiple scales and methods.
This course consists of several lightning talks from the second day of INCF's Neuroinformatics Assembly 2023. Covering a wide range of topics, these brief talks provide snapshots of various neuroinformatic efforts such as brain-computer interface standards, dealing with multimodal animal MRI datasets, distributed data management, and several more.