The lecture series focuses on current trends and modern techniques in neuroscience. Inspiring scientists from the NeurotechEU Alliance will give an overview of the latest advances and developments.
This course outlines how versioning code, data, and analysis software is crucial to rigorous and open neuroscience workflows that maximize reproducibility and minimize errors. Version control systems, code-capable notebooks, and virtualization containers (such as Git, Jupyter, and Docker, respectively) have become essential tools in data science.
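As a minimal sketch of the versioning workflow the course introduces, the commands below place an analysis script under Git version control. The project name, file name, and commit message are illustrative; this assumes Git is installed.

```shell
# Create a project directory and initialize an empty Git repository in it.
mkdir -p analysis-project
git -C analysis-project init -q

# Add a toy analysis script (stand-in for real analysis code).
echo "print('hello, analysis')" > analysis-project/analyze.py

# Stage and commit the script; the -c flags supply an identity inline
# so the example works without prior global Git configuration.
git -C analysis-project add analyze.py
git -C analysis-project -c user.email=demo@example.org -c user.name=Demo \
    commit -q -m "Add initial analysis script"

# Show the recorded history, one line per commit.
git -C analysis-project log --oneline
```

Each commit records a snapshot of the code, so any past state of the analysis can be recovered and shared, which is the core of the reproducibility argument made above.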
As technological improvements continue to facilitate innovations in the mental health space, researchers and clinicians are faced with novel opportunities and challenges regarding study design, diagnoses, treatments, and follow-up care. This course includes a lecture outlining these new developments, as well as a workshop which introduces users to Synapse, an open-source platform for collaborative data analysis.
This module covers the concept of associative memories in deep learning. It is a part of the Deep Learning Course at NYU's Center for Data Science. Prerequisites for this module include Introduction to Deep Learning (module 1 of the course) and Parameter Sharing (module 2 of the course).
The workshop will include interactive seminars given by selected experts in the field covering all aspects of (FAIR) small animal MRI data acquisition, analysis, and sharing. The seminars will be followed by hands-on training in which participants work through use-case scenarios using software established by the organizers. This will include an introduction to the basics of using command-line interfaces, Python installation, working with Docker/Singularity containers, DataLad/Git, and BIDS.
This workshop provides basic knowledge on personalized brain network modeling using the open-source simulation platform The Virtual Brain (TVB). Participants will gain theoretical knowledge and apply this knowledge to construct brain models, process multimodal neuroimaging data for reconstructing individual brains, run simulations, and use supporting neuroinformatics tools such as collaboratories, pipelines, workflows, and data repositories.
Sessions from the INCF Neuroinformatics Assembly 2022 day 2.
Notebook systems are proving invaluable to skill acquisition, research documentation, publication, and reproducibility. This series of presentations introduces the most popular platform for computational notebooks, Project Jupyter, as well as other resources like Binder and NeuroLibre.
This module provides an introduction to the motivation of deep learning and its history and inspiration.
This module covers the concepts of gradient descent, stochastic gradient descent, and momentum. It is a part of the Deep Learning Course at NYU's Center for Data Science, a course that covers the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
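The momentum update covered in this module can be sketched in a few lines. The example below minimizes a toy one-dimensional objective, f(w) = (w - 3)^2, with full-batch gradient descent plus momentum; the objective, learning rate, and momentum coefficient are illustrative choices, not values from the course.

```python
def grad(w):
    # Gradient of the toy objective f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

def gd_momentum(w0, lr=0.1, beta=0.9, steps=300):
    """Gradient descent with momentum on the toy objective."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + grad(w)  # velocity accumulates past gradients
        w = w - lr * v          # step along the (smoothed) velocity
    return w

print(gd_momentum(0.0))  # converges toward the minimum at w = 3
```

The velocity term averages recent gradients, which damps oscillations and speeds progress along consistent directions; in stochastic gradient descent the same update is applied to gradients computed on mini-batches rather than the full objective.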
Sessions from the INCF Neuroinformatics Assembly 2022 day 1.
Most who enter the field of computational neuroscience have a prior background in mathematics, physics, computer science, or (neuro)biology. Since computational neuroscience requires some knowledge from all of these fields, including basic knowledge of neurons and familiarity with certain types of equations and mathematical concepts, we recommend one of two "starting tracks," depending on your background, before you begin the lectures listed below:
This course covers the concepts of recurrent and convolutional nets (theory and practice), the properties of natural signals and the convolution, and recurrent neural networks (vanilla and gated, e.g., LSTM).
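The convolution mentioned above can be illustrated in one dimension. The sketch below implements a "valid" 1-D convolution (strictly speaking, cross-correlation, as is conventional in deep learning libraries); the signal and kernel values are illustrative.

```python
def conv1d(signal, kernel):
    """Slide the kernel over the signal, summing elementwise products.

    The same kernel weights are reused at every position, which is the
    weight-sharing property that makes convolutional nets efficient.
    """
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A moving-difference kernel highlights where the signal changes.
print(conv1d([1, 1, 2, 4, 4], [1, -1]))  # → [0, -1, -2, 0]
```

Because the kernel is shared across positions, the output responds to local patterns wherever they occur, which is exactly the property natural signals (images, audio) reward.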
This is a freely available online course on neuroscience for people with a machine learning background. The aim is to bring together these two fields that have a shared goal in understanding intelligent processes. Rather than pushing for “neuroscience-inspired” ideas in machine learning, the idea is to broaden the conceptions of both fields to incorporate elements of the other in the hope that this will lead to new, creative thinking.
This course consists of two workshops which focus on the need for reproducibility in science, particularly under the umbrella of the FAIR scientific principles. The tutorials also provide an introduction to some of the most commonly used open-source scientific tools, including Git, GitHub, Google Colab, Binder, Docker, and the programming languages Python and R.
This course contains videos, lectures, and hands-on tutorials as part of INCF's Neuroinformatics Assembly 2023 workshop on developing robust and reproducible research workflows to foster greater collaborative efforts in neuroscience.
Course designed for advanced learners interested in understanding the foundations of Machine Learning in Python.
General: The course consists of 15 lectures (ca. 1-2 hours each) and 15 exercise sheets (for ca. 6 hours of programming each).
Institution: High-Performance Computing and Analytics Lab, University of Bonn
This course contains sessions from the first day of INCF's Neuroinformatics Assembly 2022.