“Computational Thinking” refers to the mindset and set of tools that computing and ICT specialists use to analyse and describe their work. This course is intended for people outside the ICT field: it helps students understand how computer specialists analyse problems and introduces the basic terminology of the field.
The emergence of data-intensive science creates a demand for neuroscience educators worldwide to deliver better neuroinformatics education and training. The goal is to raise a generation of modern neuroscientists with FAIR capabilities, awareness of the value of standards and best practices, experience in dealing with big datasets, and the ability to integrate knowledge across multiple scales and methods.
This course covers the theory and practice of convolutional and recurrent networks: the properties of natural signals and the convolution operation, and recurrent neural networks, both vanilla and gated (LSTM).
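As a minimal sketch of the convolution operation the course introduces, consider 1-D smoothing with NumPy (the signal and kernel values below are illustrative, not taken from the course material):

```python
import numpy as np

# A short 1-D signal and a small smoothing kernel (illustrative values).
signal = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
kernel = np.array([0.25, 0.5, 0.25])

# Discrete convolution: each output sample is a kernel-weighted sum of
# neighbouring input samples; mode="same" keeps the input length.
smoothed = np.convolve(signal, kernel, mode="same")
# → array([0.25, 1.0, 1.5, 1.0, 0.25])
```

The same weighted-sum operation, generalized to 2-D inputs and learned kernels, is the core of a convolutional layer.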
This course includes both lectures and tutorials around the management and analysis of genomic data in clinical research and care. Participants are led through the basics of genome-wide association studies (GWAS), genotypes, and polygenic risk scores, as well as novel concepts and tools for more sophisticated consideration of population stratification in GWAS.
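As a rough illustration of the polygenic-risk-score idea (a hedged sketch, not the course's own pipeline), a score can be computed as a dosage-weighted sum of per-variant effect sizes; all numbers below are made up:

```python
import numpy as np

# Genotype dosages (0, 1, or 2 copies of the risk allele) for one
# individual at four hypothetical variants.
dosages = np.array([0, 1, 2, 1])

# Per-variant effect sizes (e.g. GWAS log-odds ratios); values are
# invented for illustration only.
effect_sizes = np.array([0.10, 0.05, 0.20, 0.15])

# The polygenic risk score is the dosage-weighted sum of effects:
# 0*0.10 + 1*0.05 + 2*0.20 + 1*0.15 = 0.60
prs = float(np.dot(dosages, effect_sizes))
```

Real PRS pipelines add steps such as variant selection, effect-size shrinkage, and adjustment for population stratification, which is precisely where the course's treatment of stratification comes in.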
The dimensionality and size of datasets in many fields of neuroscience research require massively parallel computing power. Fortunately, the maturity and accessibility of virtualization technologies have made it feasible to run the same analysis environments on platforms ranging from single laptop computers up to high-performance computing networks.
The lecture series focuses on current trends in modern techniques in neuroscience. Inspiring scientists from the NeurotechEU Alliance will give an overview of the latest advances and developments.
This course corresponds to the first session of talks given at INCF's Neuroinformatics Assembly 2023. The session consists of several lectures focusing on using the principles of FAIR (findability, accessibility, interoperability, and reusability) to inform future directions in neuroscience and neuroinformatics. In particular, these talks deal with the development of knowledge graphs and ontologies.
In this course we present the TVB-EBRAINS integrated workflows that have been developed in the Human Brain Project in the third funding phase (“SGA2”) in the Co-Design Project 8 “The Virtual Brain”.
Bayesian inference (using prior knowledge to generate more accurate predictions about future events or outcomes) is increasingly applied in neuroscience and neuroinformatics. In this course, participants are taught how Bayesian statistics can be used to build cognitive models of processes such as learning and perception. The course also offers theoretical and practical instruction on dynamic causal modeling as applied to fMRI and EEG data.
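The Bayesian updating the course builds on can be sketched with Bayes' rule on a toy diagnostic-test example (all probabilities below are illustrative):

```python
# Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D).
prior = 0.01           # P(H): base rate of the condition
likelihood = 0.95      # P(D|H): probability of a positive test if present
false_positive = 0.05  # P(D|~H): positive test if absent

# Total probability of observing a positive result (law of total probability).
evidence = likelihood * prior + false_positive * (1 - prior)

# Posterior belief in the condition after a positive result (~0.16).
posterior = likelihood * prior / evidence
```

Note how a strong test still yields a modest posterior when the prior is low: the prior carries real weight, which is the intuition behind using Bayesian models for perception and learning.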
This course is designed for advanced learners interested in understanding the foundations of Machine Learning in Python.
General: The course consists of 15 lectures (ca. 1-2 hours each) and 15 exercise sheets (for ca. 6 hours of programming each).
Institution: High-Performance Computing and Analytics Lab, University of Bonn
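One of the foundations a course like this typically covers is gradient descent; the sketch below (synthetic data, illustrative hyperparameters, not the course's own exercises) fits a line by minimising mean squared error in NumPy:

```python
import numpy as np

# Synthetic data from y = 3x + 1 plus a little noise.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 1.0 + 0.01 * rng.standard_normal(100)

# Gradient descent on MSE for the model y_hat = w*x + b.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y
    w -= lr * np.mean(err * x)  # dMSE/dw (up to a constant factor)
    b -= lr * np.mean(err)      # dMSE/db
```

After training, `w` and `b` recover the generating parameters (approximately 3 and 1), the same loop structure that underlies training of far larger models.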
As technological improvements continue to facilitate innovations in the mental health space, researchers and clinicians are faced with novel opportunities and challenges regarding study design, diagnoses, treatments, and follow-up care. This course includes a lecture outlining these new developments, as well as a workshop which introduces users to Synapse, an open-source platform for collaborative data analysis.
Sessions from day 1 of the INCF Neuroinformatics Assembly 2022.
Notebook systems are proving invaluable to skill acquisition, research documentation, publication, and reproducibility. This series of presentations introduces the most popular platform for computational notebooks, Project Jupyter, as well as other resources like Binder and NeuroLibre.
This course contains sessions from the first day of INCF's Neuroinformatics Assembly 2022.
In this short course, you will learn about Jupyter Notebooks, an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
This course provides a general overview about brain simulation, including its fundamentals as well as clinical applications in populations with stroke, neurodegeneration, epilepsy, and brain tumors. This course also introduces the mathematical framework of multi-scale brain modeling and its analysis.
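To give a taste of the modeling framework (a hedged sketch with invented parameters, not the actual equations used in the course), a single firing-rate unit can be integrated with the Euler method:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters: time constant, self-coupling weight,
# external input, and integration step.
tau, w, I, dt = 10.0, 2.0, 0.5, 0.1

# Euler integration of dr/dt = (-r + sigmoid(w*r + I)) / tau.
r = 0.0
for _ in range(5000):
    r += dt * (-r + sigmoid(w * r + I)) / tau
```

The rate settles at a fixed point where r = sigmoid(w*r + I); whole-brain simulators couple many such units through a structural connectome, which is where the multi-scale analysis begins.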
Ethical conduct of science, good governance of data, and accelerated translation to the clinic are key to high-calibre open neuroscience. Everyday practitioners of science must be sensitized to a range of ethical considerations in their research, some relating specifically to open data sharing. The lessons included in this course introduce a number of these topics and end with concrete guidance for participant consent and de-identification of data.
This course consists of one lesson and one tutorial, focusing on the neural connectivity measures derived from neuroimaging, specifically from methods like functional magnetic resonance imaging (fMRI) and diffusion-weighted imaging (DWI). Additional tools such as tractography and parcellation are discussed in the context of brain connectivity and mental health. The tutorial leads participants through the computation of brain connectomes from fMRI data.
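A common way to estimate a functional connectome of the kind the tutorial computes (sketched here on synthetic data, not the tutorial's actual dataset) is the matrix of pairwise Pearson correlations between regional time series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD-like data: 200 time points for 4 brain regions,
# standing in for parcellated fMRI time series.
timeseries = rng.standard_normal((200, 4))

# Functional connectome: pairwise Pearson correlations between regions.
# np.corrcoef treats rows as variables, so transpose to regions-by-time.
connectome = np.corrcoef(timeseries.T)  # shape (4, 4), symmetric, ones on diagonal
```

Each entry quantifies how strongly two regions' signals co-fluctuate; structural connectomes from DWI tractography are built differently, from streamline counts between parcels.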