Much like neuroinformatics, data science uses techniques from computational science to derive meaningful results from large, complex datasets. In this session, we will explore the relationship between neuroinformatics and data science by highlighting a range of data science approaches and activities, from the development and application of statistical methods, through the establishment of communities and platforms, to the implementation of open-source software tools.
This course consists of two workshops which focus on the need for reproducibility in science, particularly under the umbrella of the FAIR scientific principles. The tutorials also provide an introduction to some of the most commonly used open-source scientific tools, including Git, GitHub, Google Colab, Binder, Docker, and the programming languages Python and R.
This course includes two tutorials on R, a programming language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, etc.) and graphical techniques, and is highly extensible.
This module provides an introduction to deep learning: its motivation, history, and inspiration.
This workshop is organized by the German National Research Data Infrastructure Initiative Neuroscience (NFDI-Neuro). The initiative is community driven and comprises around 50 contributing national partners and collaborators. NFDI-Neuro partners with EBRAINS AISB, the coordinating entity of the EU Human Brain Project and the EBRAINS infrastructure. We will introduce common methods that enable digital reproducible neuroscience.
The workshop will include interactive seminars given by selected experts in the field covering all aspects of (FAIR) small animal MRI data acquisition, analysis, and sharing. The seminars will be followed by hands-on training where participants will perform use case scenarios using software established by the organizers. This will include an introduction to the basics of using command line interfaces, Python installation, working with Docker/Singularity containers, DataLad/Git, and BIDS.
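To give a flavor of the BIDS directory layout that hands-on sessions like this typically work with, here is a minimal Python sketch that builds a BIDS-style skeleton for one subject. The dataset name and the empty image file are placeholders, not materials from the workshop; the required `dataset_description.json` fields (`Name`, `BIDSVersion`) and the `sub-<label>/anat/sub-<label>_T1w.nii.gz` naming follow the BIDS specification:

```python
import json
import tempfile
from pathlib import Path

# Create a throwaway root directory for the example dataset
root = Path(tempfile.mkdtemp()) / "example_dataset"

# Required top-level metadata file for any BIDS dataset
(root / "sub-01" / "anat").mkdir(parents=True)
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "Example dataset", "BIDSVersion": "1.8.0"}, indent=2)
)

# Placeholder anatomical image following BIDS naming conventions
# (a real dataset would contain actual NIfTI data here)
(root / "sub-01" / "anat" / "sub-01_T1w.nii.gz").touch()

# List the resulting layout relative to the dataset root
layout = sorted(str(p.relative_to(root)) for p in root.rglob("*"))
print(layout)
```

In a real workflow, a dataset like this would then be version-controlled with DataLad/Git (`datalad create`, `datalad save`) and validated with the BIDS validator.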
Course designed for advanced learners interested in understanding the foundations of Machine Learning in Python.
General: The course consists of 15 lectures (ca. 1-2 hours each) and 15 exercise sheets (for ca. 6 hours of programming each).
Institution: High-Performance Computing and Analytics Lab, University of Bonn
This working group is a collaboration between OCNS and INCF. The group focuses on evaluating computational neuroscience tools: finding them, testing them, learning how they work, and informing developers of issues, so that these tools remain in good shape with communities looking after them. Since many members of the WG are themselves tool developers, we will also learn from each other and will work towards improving interoperability between related tools.
As research methods and experimental technologies become ever more sophisticated, the amount of accessible health-related data per individual has become vast, giving rise to a corresponding need for cross-domain data integration, whole-person modelling, and improved precision medicine. This course provides lessons describing state-of-the-art methods and repositories, as well as a tutorial on computational methods for data integration.
This lecture series is presented by NeuroTechEU, an alliance between eight European universities with the goal to build a trans-European network of excellence in brain research and technologies. By following along with this series, participants will learn about the history of cognitive science and the development of the field in a sociocultural context, as well as its trajectory into the future with the advent of artificial intelligence and neural network development.
This course consists of a three-part session from the second day of INCF's Neuroinformatics Assembly 2023. The lessons describe various ongoing efforts within the fields of neuroinformatics and clinical neuroscience to adjust to the increasingly vast volumes of brain data being collected and stored.
In this course, you will learn about working with calcium-imaging data, including image processing to remove background "blur", identifying cells based on threshold spatial contiguity, time-series filtering, and principal component analysis (PCA). The MATLAB code shows data animations, capabilities of the image processing toolbox, and PCA.
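The analysis steps named above can be sketched end to end on synthetic data. This is a minimal illustration in Python/NumPy, not the course's MATLAB code: the movie dimensions, the median-based background subtraction, the mean-plus-two-standard-deviations threshold, and the moving-average filter are all assumptions chosen for the sketch, and PCA is computed via SVD of the mean-centered pixel time series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calcium-imaging movie: 100 frames of 32x32 pixels of noise,
# plus two small "cells" sharing a slow calcium-like transient
n_frames, h, w = 100, 32, 32
movie = rng.normal(0.0, 0.1, (n_frames, h, w))
t = np.arange(n_frames)
transient = np.exp(-((t - 40) ** 2) / 200.0)
movie[:, 5:9, 5:9] += transient[:, None, None]
movie[:, 20:24, 20:24] += 0.8 * transient[:, None, None]

# 1) Remove background "blur": subtract each frame's median intensity
background = np.median(movie, axis=(1, 2), keepdims=True)
movie_bg = movie - background

# 2) Identify cells: threshold the time-averaged image
mean_img = movie_bg.mean(axis=0)
mask = mean_img > mean_img.mean() + 2 * mean_img.std()

# 3) Time-series filtering: 5-frame moving average of the ROI trace
roi_trace = movie_bg[:, mask].mean(axis=1)
smooth_trace = np.convolve(roi_trace, np.ones(5) / 5, mode="same")

# 4) PCA via SVD of the mean-centered (frames x pixels) matrix
X = movie_bg.reshape(n_frames, -1)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)  # variance fraction per component
```

With this setup, the first principal component captures the shared transient of the two cells, while the remaining components absorb pixel noise. A real pipeline would replace the simple contiguity-free threshold in step 2 with spatially contiguous region detection, as the course discusses.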
This course tackles the issue of maintaining ethical research and healthcare practices in the age of increasingly powerful technological tools like machine learning and artificial intelligence. While there is great potential for innovation and improvement in the clinical space thanks to AI development, lecturers in this course advocate for a greater emphasis on human-centric care, calling for algorithm design which takes the full intersectionality of individuals into account.
The emergence of data-intensive science creates a demand for neuroscience educators worldwide to deliver better neuroinformatics education and training in order to raise a generation of modern neuroscientists with FAIR capabilities, awareness of the value of standards and best practices, knowledge in dealing with big datasets, and the ability to integrate knowledge over multiple scales and methods.
Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives.
This course is the opening module for the University of Toronto's Krembil Centre for Neuroinformatics' virtual learning series Solving Problems in Mental Health Using Multi-Scale Computational Neuroscience. Lessons in this course introduce participants to the study of brain disorders, starting from elemental units like genes and neurons, eventually building up to whole-brain modelling and global activity patterns.
The importance of Research Data Management in the conduct of open and reproducible science is better understood and technically supported than ever, and many of the underlying principles apply as much to everyday activities of a single researcher as to large-scale, multi-center open data sharing.
The Virtual Brain EduPack provides didactic use cases for The Virtual Brain (TVB). Typically, a use case consists of a Jupyter notebook and a didactic video. EduPack use cases help the user to reproduce TVB-based publications or to get started quickly with TVB.
This course contains videos, lectures, and hands-on tutorials as part of INCF's Neuroinformatics Assembly 2023 workshop on developing robust and reproducible research workflows to foster greater collaborative efforts in neuroscience.
This course, consisting of one lecture and two workshops, is presented by the Computational Genomics Lab at the Centre for Addiction and Mental Health and University of Toronto. The lecture deals with single-cell and bulk-level transcriptomics, while the two hands-on workshops introduce users to transcriptomic data types (e.g., RNAseq) and how to perform analyses in specific use cases (e.g., cellular changes in major depression).