The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. In this lesson, you will be introduced to brain simulation in general and to The Virtual Brain in particular. Prof. Ritter will present the newest approaches for clinical applications of The Virtual Brain (namely stroke, epilepsy, brain tumors and Alzheimer’s disease) and show how brain simulation can improve diagnostics, therapy and understanding of neurological disease.
The concept of neural masses, an application of mean-field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration into a coupled network are explained. Bifurcation analysis is presented as an important technique for understanding non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated through the exploration of brain stimulation regimes.
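As a concrete illustration of these ideas (not material from the lecture itself), the sketch below integrates two coupled units of the FitzHugh-Nagumo equations, used here as a stand-in neural mass, and scans an input parameter to expose a Hopf bifurcation from rest to oscillation. All parameter values are conventional textbook choices.

```python
# Minimal sketch: two coupled FitzHugh-Nagumo units as stand-in neural masses.
# Sweeping the external input I reveals a Hopf bifurcation: below a critical
# value the nodes rest at a fixed point, above it they oscillate.
import numpy as np

def simulate(I_ext, coupling=0.1, dt=0.01, steps=20000):
    """Euler-integrate two diffusively coupled FitzHugh-Nagumo nodes."""
    v = np.zeros(2)            # fast (activity-like) variable per node
    w = np.zeros(2)            # slow recovery variable per node
    trace = np.empty(steps)
    for t in range(steps):
        coup = coupling * (v[::-1] - v)      # each node feels the other's state
        dv = v - v**3 / 3.0 - w + I_ext + coup
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        trace[t] = v[0]
    return trace

# Crude bifurcation scan: oscillation amplitude as a function of the drive.
for I in (0.0, 0.2, 0.4, 0.6):
    x = simulate(I)[10000:]                  # discard the transient
    print(f"I = {I:.1f}  amplitude = {x.max() - x.min():.3f}")
```

Near-zero amplitude indicates a stable fixed point; a large amplitude indicates the limit cycle that appears past the bifurcation.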
The simulation of the virtual epileptic patient is presented as an example of advanced brain simulation as a translational approach to deliver improved results in the clinic. The fundamentals of epilepsy are explained. On this basis, the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. In the end, you will be able to perform simple epilepsy simulations on your own.
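For orientation, here is a minimal sketch of what such an epilepsy simulation can look like in TVB's Python library; it is not the notebook from the lesson. It assumes the tvb-library package and its bundled demo connectome; the chosen region index and excitability values are illustrative, and exact parameter types can differ between TVB versions.

```python
# Sketch of a region-level epilepsy simulation with tvb-library: every region
# runs the Epileptor model, and one region is made more excitable so that
# seizure-like activity can emerge and propagate. Values are illustrative,
# not clinically derived.
import numpy as np
from tvb.simulator.lab import (connectivity, coupling, integrators,
                               models, monitors, simulator)

conn = connectivity.Connectivity.from_file()   # TVB's bundled 76-region demo connectome
n_regions = conn.weights.shape[0]

epi = models.Epileptor()
epi.x0 = np.full(n_regions, -2.4)              # x0 sets epileptogenicity; -2.4 ~ healthy
epi.x0[40] = -1.6                              # hypothetical epileptogenic zone

sim = simulator.Simulator(
    model=epi,
    connectivity=conn,
    coupling=coupling.Difference(a=np.array([1.0])),
    integrator=integrators.HeunDeterministic(dt=0.05),
    monitors=(monitors.TemporalAverage(period=1.0),),
)
sim.configure()

(t, y), = sim.run(simulation_length=2000.0)    # 2000 ms of simulated activity
print(t.shape, y.shape)                        # (time, state variables, regions, modes)
```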
A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
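To give a flavour of the tools the lecture has in mind, here is a small, self-invented example of the core Python data-science stack (NumPy for numerics, pandas for tabular data); it is not taken from the lecture.

```python
# Illustrative only: generate synthetic per-subject scores with NumPy,
# then summarise them per group with pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(["s1", "s2"], 5),
    "score": rng.normal(loc=100, scale=15, size=10),
})
print(df.groupby("subject")["score"].agg(["mean", "std"]))
```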
This lecture on multi-scale entropy by Jil Meier is part of the TVB Node 10 series, a 4-day workshop dedicated to learning about The Virtual Brain, brain imaging, brain simulation, personalised brain models, TVB use cases, etc. TVB is a full brain simulation platform.
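For readers new to the topic, the following is a minimal sketch of the multiscale entropy procedure (coarse-grain the signal at increasing scales, then compute sample entropy at each scale, in the spirit of Costa et al., 2002); it is an illustration, not material from the lecture.

```python
# Multiscale entropy sketch: sample entropy of coarse-grained copies of a signal.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m + 1 points (tolerance r * std)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        n = len(templates)
        return ((d <= tol).sum() - n) / 2        # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
signal = rng.normal(size=1000)                   # white noise as a toy signal
for scale in (1, 2, 4, 8):
    print(scale, round(sample_entropy(coarse_grain(signal, scale)), 3))
```

For white noise the entropy falls with scale, one of the classic results the multiscale analysis is designed to reveal.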
This lecture on modeling epilepsy using TVB by Julie Courtiol is part of the TVB Node 10 series, a 4-day workshop dedicated to learning about The Virtual Brain, brain imaging, brain simulation, personalised brain models, TVB use cases, etc. TVB is a full brain simulation platform.
This module explores sensation in the brain: what organs are involved, sensory pathways, processing centers, and theories of integration. We cover sensory transduction, vision, audition, olfaction, gustation, and somatosensation.
This module covers how the brain interacts with the world through motor movements. Motor movements underlie so much of our functioning: our speech, the opening and closing of our eyes, and the beating of our hearts. We’ll learn about the areas of the brain involved in movement and some of its pathways.
This module explains how neurons come together to create the networks that give rise to our thoughts. The totality of our neurons and their connections is called our connectome. Learn how this connectome changes as we learn and how it computes information. We will also learn about physiological phenomena of the brain such as synchrony, which gives rise to brain waves.
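As a toy illustration of synchrony (not part of the module itself), the Kuramoto model shows how coupled oscillators with different natural frequencies can lock together once coupling is strong enough, a standard cartoon for rhythm generation in interacting neural populations.

```python
# Kuramoto sketch: the order parameter r measures phase synchrony
# (0 = incoherent, 1 = fully synchronous).
import numpy as np

def kuramoto_order(coupling, n=100, dt=0.01, steps=5000, seed=0):
    """Integrate n mean-field-coupled phase oscillators; return final r."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)          # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    for _ in range(steps):
        mean_field = np.exp(1j * theta).mean()
        r, psi = np.abs(mean_field), np.angle(mean_field)
        # each phase is pulled toward the population mean phase
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

for K in (0.0, 0.2, 0.5, 1.0):
    print(f"K = {K:.1f}  r = {kuramoto_order(K):.2f}")
```

Below a critical coupling the population stays incoherent; above it, a coherent rhythm emerges, analogous to a brain wave arising from many weakly coupled units.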
This lecture covers the ethical implications of the use of pharmaceuticals to enhance brain functions and was part of the Neuro Day Workshop held by the NeuroSchool of Aix Marseille University.
Computational models provide a framework for integrating data across spatial scales and for exploring hypotheses about the biological mechanisms underlying neuronal and network dynamics. However, as models increase in complexity, additional barriers emerge to the creation, exchange, and re-use of models. Successful projects have created standards for describing complex models in neuroscience and provide open-source tools to address these issues. This lecture provides an overview of these projects and makes a case for the expanded use of these resources in support of reproducibility and the validation of models against experimental data.
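As one example of such a standard in action (chosen here for illustration, not necessarily the lecture's example), the simulator-independent PyNN API lets the same model description run on different simulation backends. The network and parameters below are invented for demonstration.

```python
# A small spiking network described once in PyNN; in principle the same
# script runs on NEST, NEURON, or Brian2 by swapping the backend import.
import pyNN.nest as sim    # requires a NEST installation; see pyNN.neuron etc.

sim.setup(timestep=0.1)    # ms

# Two populations of integrate-and-fire neurons with conductance synapses.
exc = sim.Population(80, sim.IF_cond_exp(tau_m=20.0), label="excitatory")
inh = sim.Population(20, sim.IF_cond_exp(tau_m=20.0), label="inhibitory")

# Random background drive and sparse recurrent connectivity.
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=100.0))
sim.Projection(noise, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.02, delay=1.0))
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.01, delay=1.0))

exc.record("spikes")
sim.run(1000.0)            # ms
print(exc.get_data().segments[0].spiketrains[:3])
sim.end()
```

Keeping the model description separate from the simulator in this way is exactly what eases exchange, re-use, and cross-simulator validation.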
Next generation science with Jupyter. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Introduction to reproducible research. The lecture provides an overview of the core skills and practical solutions required to practice reproducible research. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Neuroethics has been described as containing at least two components: the neuroscience of ethics and the ethics of neuroscience. The first involves neuroscientific theories, research, and neuroimaging focused on how the brain arrives at moral decisions and actions, which challenge existing descriptive theories of how humans develop moral thinking and make ethical decisions. The second, the ethics of neuroscience, involves applying normative theories about what is right, good and fair to ethical questions raised by neuroscientific research and new technologies, such as how to balance the public benefit of “big data” neuroscience against the protection of individual privacy and norms of informed consent.
As an ICT flagship project, the HBP crucially relies on ICT and will contribute important input to the development of new computing principles and artefacts. Individuals working on the HBP should therefore be aware of the long history of ethical issues discussed in computing. The discourse on ethics and computing can be traced back to Norbert Wiener and the very beginning of digital computing. From the 1970s and 80s onwards, it developed into an active discussion involving academics from various disciplines, professional bodies and industry.
Like any transformative technology, intelligent robotics has the potential for huge benefit, but it is not without ethical or societal risk. In this lecture, I will explore two questions. Firstly, the increasingly urgent question of the ethical use of robots: are there particular applications of robots that should be proscribed, in eldercare, surveillance, or war fighting, for example? When intelligent autonomous robots make mistakes, as they inevitably will, who should be held to account? Secondly, I will consider the longer-term question of whether intelligent robots themselves could or should be ethical. Seventy years ago, Isaac Asimov created his fictional Three Laws of Robotics. Is there now a realistic prospect that we could build a robot that is Three Laws Safe?
In the face of perceived public concerns about technological innovations, leading national and international bodies increasingly argue that there must be ‘dialogue’ between policy makers, scientific researchers, civil society organizations and members of the public, to shape the pathways of technology development in a way that meets societal needs and gains public trust. This is not new, of course, and such concerns go back at least to the debates over the development of nuclear technologies and campaigns for social responsibility in science. Major funding bodies in the UK, Europe and elsewhere are now addressing this issue by insisting on Responsible Research and Innovation (RRI) in the development of emerging technology. Biotechnologies such as synthetic biology and neurotechnologies have become a particular focus of RRI, partly because of the belief that these are risky technologies involving tinkering with the very building blocks of life, and perhaps even with human nature. With my fellow researchers, I have been involved in trying to develop Responsible Research and Innovation in these technologies for several years.
In this lecture, I consider some of the key social and ethical issues raised by the ‘big brain projects’ currently under way in Europe, the USA, China, Japan and many other regions. I will draw upon our own experience in the ‘Foresight Lab’ of the HBP to discuss the ways in which these can usefully be approached from the perspective of responsible research and innovation and the AREA approach (anticipation, reflection, engagement and action). These issues include data protection, privacy and data governance; the search for ‘neural signatures’ of psychiatric and neurological disorders; ‘dual use’, or the military use of developments initially intended for clinical and civilian purposes; brain-computer interfaces and neural prosthetics; and the use of animals in brain research. Following a brief discussion of the challenges of translation from the lab to the real world, I will conclude by arguing that success in contemporary scientific research and innovation is best assured by openness, collaboration and sharing with fellow researchers; robust systems of data governance involving lay persons; frankness about the realities of scientific research and innovation with fellow citizens; realism about the complexities of links between researchers, publics and private enterprise; and understanding and engaging with the realities of science today in the real world.
The UK Royal Society, in its 2012 study ‘Neuroscience, conflict and security’, had as its first recommendation: “There needs to be fresh effort by the appropriate professional bodies to inculcate the awareness of the dual-use challenge (i.e., knowledge and technologies used for beneficial purposes can also be misused for harmful purposes) among neuroscientists at an early stage of their training.” There can be little doubt that the need to raise awareness of this challenge remains among practicing neuroscientists today. This lecture aims to give an introduction and overview of the dual-use challenge as it applies to neuroscience today and will apply in coming decades.