Gael Varoquaux presents advanced machine learning algorithms for neuroimaging, addressing real-world considerations related to data size and type.
The lesson was presented in the context of the BrainHack School 2020.
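A hypothetical sketch of the kind of analysis such a lecture covers: cross-validated decoding of a condition label from imaging-derived features with scikit-learn. The data, feature counts, and model choice below are illustrative assumptions, not taken from the lecture itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an fMRI feature matrix: 100 scans x 500 voxel features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
y = rng.integers(0, 2, size=100)  # binary condition labels

# Standardize features, then fit an L2-penalized logistic regression;
# 5-fold cross-validation estimates out-of-sample decoding accuracy.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```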
Tutorial on how to simulate the brains of brain tumor patients with TVB (reproducing the publication: Marinazzo et al. 2020, NeuroImage). The tutorial comprises a didactic video, Jupyter notebooks, and a full data set for the construction of virtual brains from patients and healthy controls. Authors: Hannelore Aerts, Michael Schirner, Ben Jeurissen, Dirk Van Roost, Eric Achten, Petra Ritter, Daniele Marinazzo
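A minimal sketch of a region-level simulation with the tvb-library Python package, assuming TVB's bundled default connectome; the tutorial's notebooks instead build patient-specific virtual brains from the released data set.

```python
import numpy as np
from tvb.simulator.lab import connectivity, coupling, integrators, models, monitors, simulator

# Assemble a whole-brain simulator: one neural-mass model per region,
# coupled through a structural connectome.
sim = simulator.Simulator(
    model=models.Generic2dOscillator(),                   # local dynamics
    connectivity=connectivity.Connectivity.from_file(),   # default 76-region connectome
    coupling=coupling.Linear(a=np.array([0.1])),          # global coupling strength
    integrator=integrators.HeunDeterministic(dt=0.1),     # integration step (ms)
    monitors=(monitors.TemporalAverage(period=1.0),),     # downsample to 1 ms
)
sim.configure()

# Run 250 ms of simulated activity; one (time, data) pair per monitor.
(time, data), = sim.run(simulation_length=250.0)
print(data.shape)  # (time points, state variables, regions, modes)
```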
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.
As models in neuroscience have become increasingly complex, it has become more difficult to share all aspects of models and model analysis, hindering model accessibility and reproducibility. In this session, we will discuss existing resources for promoting FAIR data and models in computational neuroscience, their impact on the field, and the remaining barriers. This lecture covers the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.
The course is an introduction to the field of electrophysiology standards, infrastructure, and initiatives. This lecture discusses the FAIR principles as they apply to electrophysiology data and metadata, the building blocks for community tools and standards, platforms and grassroots initiatives, and the challenges therein.
This session provides users with an introduction to tools and resources that facilitate the implementation of FAIR in their research.
This session will include presentations of FAIR-embracing infrastructure developed by members of the INCF Community.
This lecture provides an overview of The Virtual Brain Simulation Platform.
Peer Herholz gives a tour of how popular virtualization tools like Docker and Singularity are playing a crucial role in improving reproducibility and enabling high-performance computing in neuroscience.
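As a loose illustration of the reproducibility argument, here is a sketch using the docker Python SDK (an assumption made to keep all examples in one language; the lecture itself works with the Docker and Singularity command-line tools): every run uses the same pinned image, so the software environment is identical on any machine.

```python
import docker  # pip install docker; requires a running Docker daemon

client = docker.from_env()

# Run an analysis step inside a pinned, versioned image; mounting the data
# directory read-only keeps the inputs untouched. The host path is hypothetical.
output = client.containers.run(
    "python:3.10-slim",
    ["python", "-c", "import sys; print(sys.version)"],
    volumes={"/path/to/data": {"bind": "/data", "mode": "ro"}},
    remove=True,
)
print(output.decode())
```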
An overview of some essential concepts in neuropharmacology (e.g., receptor binding, agonism, antagonism), an introduction to pharmacodynamics and pharmacokinetics, and an overview of the drug discovery process as it relates to diseases of the central nervous system.
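One quantitative relationship underlying receptor binding is the Hill-Langmuir equation for fractional receptor occupancy, occupancy = [L]^n / (Kd^n + [L]^n). A small worked example with illustrative values (not drawn from the lecture itself):

```python
def receptor_occupancy(ligand_conc_nM: float, kd_nM: float, hill_n: float = 1.0) -> float:
    """Fraction of receptors bound at a given free ligand concentration (Hill-Langmuir)."""
    return ligand_conc_nM**hill_n / (kd_nM**hill_n + ligand_conc_nM**hill_n)

# With Kd = 10 nM: at [L] = Kd exactly half the receptors are occupied,
# regardless of the Hill coefficient; occupancy saturates at high [L].
for conc in (1.0, 10.0, 100.0):  # nM, illustrative concentrations
    print(f"[L] = {conc:6.1f} nM -> occupancy = {receptor_occupancy(conc, kd_nM=10.0):.2f}")
```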
Introduction to reproducible research. The lecture provides an overview of the core skills and practical solutions required to practice reproducible research. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go? This lecture covers the biomedical researcher's perspective on FAIR data sharing and the importance of finding better ways to manage large datasets.
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go? This lecture covers multiple aspects of FAIR neuroscience data: what makes it unique, the challenges to making it FAIR, the importance of overcoming these challenges, and how data governance comes into play.
Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and challenges faced by the community to enact these principles. This lecture covers the processes, benefits, and challenges involved in designing, collecting, and sharing FAIR neuroscience datasets.
Over the last three decades, neuroimaging research has seen large strides in the scale, diversity, and complexity of studies, the open availability of data and methodological resources, the quality of instrumentation and multimodal studies, and the number of researchers and consortia. The awareness of rigor and reproducibility has increased with the advent of funding mandates, and with the work done by national and international brain initiatives. This session will focus on the question of FAIRness in neuroimaging research, touching on each of the FAIR elements through brief vignettes of ongoing research and challenges faced by the community to enact these principles. This lecture covers the benefits and difficulties involved when re-using open datasets, and the importance of metadata to the process.
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?
This lecture provides an overview of Addgene, one of the FAIR-embracing tools presented by members of the INCF Community, covering Addgene's mission and available resources.
The International Brain Initiative (IBI) is a consortium of the world’s major large-scale brain initiatives and other organizations with a vested interest in catalyzing and advancing neuroscience research through international collaboration and knowledge sharing. This session will introduce the IBI and the current efforts of the Data Standards and Sharing Working Group with a view to gain input from a wider neuroscience and neuroinformatics community.
This lecture covers the IBI Data Standards and Sharing Working Group, including its history, aims, and projects.
The International Brain Initiative (IBI) is a consortium of the world’s major large-scale brain initiatives and other organizations with a vested interest in catalyzing and advancing neuroscience research through international collaboration and knowledge sharing. This session will introduce the IBI and the current efforts of the Data Standards and Sharing Working Group with a view to gain input from a wider neuroscience and neuroinformatics community. This session covers the framework of the International Brain Lab (IBL) and the data architecture used for this project.
The FOSTER portal has produced a number of guides to help implement Open Science practices in daily workflows, including The Open Science Training Handbook. It provides many basic definitions, concepts, and principles that are key components of open science, as well as general guidance for developing and implementing these practices in one's own research environments.
Topics include: