The simulation of a virtual epileptic patient is presented as an example of advanced brain simulation and as a translational approach for delivering improved results in the clinic. The fundamentals of epilepsy are explained. On this basis, the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. By the end, you will be able to perform simple epilepsy simulations on your own.
The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. In this lesson, you will be introduced to brain simulation in general and to The Virtual Brain in particular. Prof. Ritter will present the newest approaches for clinical applications of The Virtual Brain - that is, for stroke, epilepsy, brain tumors, and Alzheimer’s disease - and show how brain simulation can improve diagnostics, therapy, and the understanding of neurological disease.
The concept of neural masses, an application of mean field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration to a coupled network are explained. Bifurcation analysis is presented as an important technique in the understanding of non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated in the exploration of brain stimulation regimes.
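Bifurcation analysis, mentioned above, studies how the fixed points of a nonlinear system appear or disappear as a parameter varies. As a minimal toy illustration (not the neural mass models from the lecture, and with all names purely illustrative), the following Python sketch numerically scans the one-dimensional system dx/dt = mu + x^2, which undergoes a saddle-node bifurcation at mu = 0:

```python
import numpy as np

def count_fixed_points(mu, x_min=-2.0, x_max=2.0, n=500):
    """Count sign changes of f(x) = mu + x**2 on a grid.

    Each sign change brackets one fixed point of dx/dt = mu + x**2.
    """
    x = np.linspace(x_min, x_max, n)
    f = mu + x**2
    return int(np.sum(f[:-1] * f[1:] < 0))

# Saddle-node bifurcation at mu = 0: two fixed points merge and vanish.
for mu in (-1.0, 1.0):
    print("mu =", mu, "-> fixed points found:", count_fixed_points(mu))
```

For mu < 0 the scan finds two fixed points (one stable, one unstable); for mu > 0 it finds none. This qualitative change in the number of equilibria as a parameter crosses a critical value is exactly what bifurcation analysis tracks in the far richer neural mass networks described above.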
A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
This lecture on functional brain parcellations, along with a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation, was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software, and workflows, Findable, Accessible, Interoperable, and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining, and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry, and publishers through our programs and networks. In this session, we will hear some perspectives on FAIR neuroscience from some of these stakeholders who have been working to develop and use FAIR tools for neuroscience. We will engage in a discussion on questions such as: how is neuroscience doing with respect to FAIR? What have been the successes? What is currently very difficult? Where does neuroscience need to go?
This lecture covers FAIR atlases, from their background, their construction, and how they can be created in line with the FAIR principles.
Serving as a good refresher, Shawn Grooms explains the maths and logic concepts that are important for programmers to understand, including sets, propositional logic, conditional statements, and more.
This compilation is courtesy of freeCodeCamp.
Linear algebra is the branch of mathematics concerning linear equations and linear functions, and their representations through matrices and vector spaces. As such, it underlies a huge variety of analyses in the neurosciences. This lesson provides a useful refresher which will facilitate the use of MATLAB, Octave, and various matrix-manipulation and machine-learning software.
This lesson was created by RootMath.
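As a concrete taste of the matrix manipulation this refresher supports, here is a short sketch in Python with NumPy (one of the tool families mentioned above; the specific system is an arbitrary example): solving a small linear system Ax = b and verifying the result.

```python
import numpy as np

# A small linear system A @ x = b:
#   3x + y = 9
#    x + 2y = 8
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Solve with an LU-based solver rather than explicitly inverting A.
x = np.linalg.solve(A, b)
print(x)  # x is approximately [2, 3]

# Verify: the residual A @ x - b should be numerically zero.
assert np.allclose(A @ x, b)
```

Using `np.linalg.solve` instead of `np.linalg.inv(A) @ b` is the standard idiom: it is faster and numerically more stable, a point this kind of refresher typically emphasizes.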
This lecture covers the ethical implications of the use of pharmaceuticals to enhance brain functions and was part of the Neuro Day Workshop held by the NeuroSchool of Aix Marseille University.
The landscape of scientific research is changing. Today’s researchers need to participate in large-scale collaborations, obtain and manage funding, share data, publish, and undertake knowledge translation activities in order to be successful. In light of these increasing demands, science management is now a vital part of the research environment.
Brought to you by the New Digital Infrastructure Organization.
In the past five years, researchers have seen a growing number of research data management (RDM) policies being implemented by funders, publishers, and institutions. One key element in meeting these requirements, particularly in terms of data discovery, is using metadata, which helps make research data findable, accessible, interoperable and reusable (the FAIR principles). This session discussed the secret life of your dataset metadata: the ways in which, for many years to come, it will work non-stop to foster the visibility, reach, and impact of your work. We explored how metadata will help your dataset travel through the global research infrastructure, and how data repositories and discovery services can use this (meta)data to help launch your dataset into the world.
For more information, visit our website: https://engagedri.ca/
Brought to you by the Canadian Association of Research Libraries.
Data management plans, or DMPs, are one of the foundations of good research data management. This DMP-focused webinar will be of interest to researchers, graduate students, librarians, and research support stakeholders, and will provide foundational information on developing DMPs. Topics covered will include the importance and benefits of DMPs, how they support research excellence, and what makes a ‘good’ DMP, as well as a detailed look at their standard content. Resources to help with the development of DMPs – including bilingual training materials, guidance documents and Exemplar DMPs – will be presented, as well as an update on the activities of the Portage DMP Expert Group, including forthcoming resources. A brief overview of the DMP Assistant platform will be provided, while a second separate session will deliver an in-depth look at the latest version of this platform, including its key features.
Speaker: James Doiron, Research Data Management Services Coordinator, University of Alberta Libraries
Brought to you by the Canadian Association of Research Libraries.
Data management plans, or DMPs, are one of the foundations of good research data management. Hosted by the University of Alberta Library and supported by the Portage Network, the DMP Assistant is a national, open, bilingual data management planning (DMP) tool to help researchers better manage their data throughout the lifespan of a project. The tool develops a DMP by prompting researchers to answer a number of key data management questions, supported by best-practice guidance and examples. Building on the preceding DMP-focused webinar, this session will be of interest to researchers, graduate students, librarians, and research support stakeholders. Participants will take an in-depth look at the newly launched DMP Assistant 2.0, including all of its enhanced key features for both end-users and institutional administrators, as well as a brief look at the future of the platform.
Speaker: Robyn Nicholson, Data Management Planning Coordinator, Portage Network
The Canadian Open Neuroscience Platform (CONP) Portal is a web interface that facilitates open science for the neuroscience community by simplifying global access to and sharing of datasets and tools. The Portal internalizes the typical cycle of a research project, beginning with data acquisition, followed by data processing with published tools, and ultimately the publication of results with a link to the original dataset.
In this video, Samir Das and Tristan Glatard give a short overview of the main features of the CONP Portal.
This lecture covers an introduction to neuron anatomy and signaling, as well as different types of neuron models, including the Hodgkin-Huxley model.
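To make the Hodgkin-Huxley model mentioned above concrete, here is a minimal single-compartment simulation in Python using forward-Euler integration and the classic squid-axon parameters. It is an illustrative sketch under standard textbook conventions, not the code used in the lecture.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid giant axon; mV, ms, uF/cm^2, mS/cm^2).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening (a_) and closing (b_) rates for the m, h, n gates.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations under constant current."""
    V = -65.0
    # Start the gates at their steady-state values for the resting potential.
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(t_max / dt)):
        # Total ionic current: sodium, potassium, and leak.
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        trace.append(V)
    return np.array(trace)

V = simulate()
print("peak membrane potential (mV):", V.max())  # spikes overshoot 0 mV
```

With a suprathreshold current of 10 uA/cm^2, the model fires repetitively and the action potentials overshoot 0 mV, reproducing the qualitative behavior the lecture describes.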