This lecture highlights our current understanding of and recent developments in neurodegenerative disease research, as well as the future of diagnostics and treatment for these diseases.
This lecture provides an overview of depression (epidemiology and course of the disorder), clinical presentation, somatic co-morbidity, and treatment options.
An overview of some of the essential concepts in neuropharmacology (e.g., receptor binding, agonism, antagonism), an introduction to pharmacodynamics and pharmacokinetics, and an overview of the drug discovery process as it relates to diseases of the central nervous system.
Audio slides presentation to accompany the paper titled: An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data. Authors: M. Schirner, S. Rothmeier, V. Jirsa, A.R. McIntosh, P. Ritter.
This lecture covers an introduction to neuron anatomy and signaling, and different types of models, including the Hodgkin-Huxley model.
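For readers who want to experiment before the lecture, the Hodgkin-Huxley model mentioned above can be integrated numerically in a few lines. The sketch below uses the standard 1952 squid-axon parameters and simple forward Euler integration; the step size, stimulus current, and initial conditions are illustrative choices, not values prescribed by the lecture.

```python
import math

# Hodgkin-Huxley membrane model with classic squid-axon parameters.
# Units: mV (voltage), ms (time), uA/cm^2 (current), mS/cm^2 (conductance).
C_m = 1.0                       # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate functions for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration; returns the membrane-voltage trace."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
        I_K  = g_K * n**4 * (V - E_K)         # potassium current
        I_L  = g_L * (V - E_L)                # leak current
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        V += dt * dV
        trace.append(V)
    return trace

trace = simulate()
```

With a sustained 10 uA/cm^2 stimulus the model fires repetitively, so the voltage trace shows action potentials overshooting 0 mV. Plotting `trace` against time reproduces the familiar spike waveform discussed in the lecture.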
The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. In this lesson, you get introduced to brain simulation in general and to The Virtual Brain in particular. Prof. Ritter will present the newest approaches for clinical applications of The Virtual Brain - that is, for stroke, epilepsy, brain tumors and Alzheimer's disease - and show how brain simulation can improve diagnostics, therapy and understanding of neurological disease.
The concept of neural masses, an application of mean field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration to a coupled network are explained. Bifurcation analysis is presented as an important technique in the understanding of non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated in the exploration of brain stimulation regimes.
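As a warm-up for the coupled-network mathematics described above, the toy sketch below couples a small population of phase oscillators (the Kuramoto model) - a deliberately simplified stand-in for a network of neural masses, not the specific models used in the lecture. It illustrates the core idea that coupling strength controls a qualitative transition (here, from incoherence to synchrony), the kind of behavior bifurcation analysis makes precise. All parameter values are illustrative.

```python
import math
import random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]: 1 = fully synchronized phases."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

def simulate(n=20, K=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate n all-to-all coupled phase oscillators; return phases."""
    rng = random.Random(seed)
    omega = [rng.gauss(1.0, 0.1) for _ in range(n)]          # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]  # random phases
    for _ in range(steps):
        new_theta = []
        for i in range(n):
            # Mean-field sine coupling pulls each oscillator toward the others.
            coupling = (K / n) * sum(math.sin(theta[j] - theta[i])
                                     for j in range(n))
            new_theta.append(theta[i] + dt * (omega[i] + coupling))
        theta = new_theta
    return theta

r = order_parameter(simulate(K=2.0))   # strong coupling: r approaches 1
```

Sweeping `K` from 0 upward and plotting `r` against it traces out the synchronization transition - a one-parameter bifurcation diagram in miniature, conceptually the same exercise performed with neural mass models in the lecture.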
The simulation of the virtual epileptic patient is presented as an example of advanced brain simulation as a translational approach to deliver improved results in clinics. The fundamentals of epilepsy are explained. On this basis, the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. In the end, you will be able to perform simple epilepsy simulations on your own.
Along the example of a patient with bi-temporal epilepsy, we show step by step how to develop a Virtual Epileptic Patient (VEP) brain model and integrate patient-specific information such as brain connectivity, epileptogenic zone and MRI lesions. The patient's brain network model is then evaluated via simulation, data fitting and mathematical analysis. This lecture demonstrates how to develop novel personalized strategies towards therapy and intervention using TVB.
This lecture focuses on higher-level simulation scenarios using stimulation protocols. We demonstrate how to build stimulation patterns in TVB and use them in a simulation to induce activity that dissipates into experimentally known resting-state networks in the human and mouse brain, as well as to obtain EEG recordings that reproduce empirical findings of other researchers.
This lecture presents the Graphical (GUI) and Command Line (CLI) User Interfaces of TVB. Alongside the speakers, explore and interact with the tools needed to generate, manipulate and visualize connectivity and network dynamics. Speakers: Paula Popa & Mihai Andrei
This lecture briefly introduces The Virtual Brain (TVB), a multi-scale, multi-modal neuroinformatics platform for full brain network simulations using biologically realistic connectivity, as well as its potential neuroscience applications: for example with epilepsy.
This lecture introduces the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components.
Neuroethics has been described as containing at least two components - the neuroscience of ethics and the ethics of neuroscience. The first involves neuroscientific theories, research, and neuro-imaging focused on how the brain arrives at moral decisions and actions, which challenge existing descriptive theories of how humans develop moral thinking and make ethical decisions. The second, ethics of neuroscience, involves applying normative theories about what is right, good and fair to ethical questions raised by neuroscientific research and new technologies, such as how to balance the public benefit of “big data” neuroscience while protecting individual privacy and norms of informed consent.
The HBP as an ICT flagship project crucially relies on ICT and will contribute important input into the development of new computing principles and artefacts. Individuals working on the HBP should therefore be aware of the long history of ethical issues discussed in computing. The discourse on ethics and computing can be traced back to Norbert Wiener and the very beginning of digital computing. From the 1970s and 80s it has developed into an active discussion involving academics from various disciplines, professional bodies and industry.
Like any transformative technology, intelligent robotics has the potential for huge benefit, but is not without ethical or societal risk. In this lecture, I will explore two questions. Firstly, the increasingly urgent question of the ethical use of robots: are there particular applications of robots that should be proscribed, in eldercare, or surveillance, or war fighting for example? When intelligent autonomous robots make mistakes, as they inevitably will, who should be held to account? Secondly, I will consider the longer-term question of whether intelligent robots themselves could or should be ethical. Seventy years ago Isaac Asimov created his fictional Three Laws of Robotics. Is there now a realistic prospect that we could build a robot that is Three Laws Safe?
In the face of perceived public concerns about technological innovations, leading national and international bodies increasingly argue that there must be ‘dialogue' between policy makers, scientific researchers, civil society organizations and members of the public, to shape the pathways of technology development in a way that meets societal needs and gains public trust. This is not new, of course, and such concerns go back at least to the debates over the development of nuclear technologies and campaigns for social responsibility in science. Major funding bodies in the UK, Europe and elsewhere are now addressing this issue by insisting on Responsible Research and Innovation (RRI) in the development of emerging technology. Biotechnologies such as synthetic biology and neurotechnologies have become a particular focus of RRI, partly because of the belief that these are risky technologies involving tinkering with the very building blocks of life, and perhaps even with human nature. With my fellow researchers, I have been involved in trying to develop Responsible Research and Innovation in these technologies for several years.
In this lecture, I consider some of the key social and ethical issues raised by the ‘big brain projects’ currently under way in Europe, the USA, China, Japan and many other regions. I will draw upon our own experience in the ‘Foresight Lab’ of the HBP to discuss the ways in which these can usefully be approached from the perspective of responsible research and innovation and the AREA approach - anticipation, reflection, engagement and action. These include data protection, privacy and data governance; the search for ‘neural signatures’ of psychiatric and neurological disorders; ‘dual use’ or the military use of developments initially intended for clinical and civilian purposes; brain-computer interfaces and neural prosthetics; and the use of animals in brain research. Following a brief discussion of the challenges of translation from the lab to the real world, I will conclude by arguing that success in contemporary scientific research and innovation is best assured by openness, collaboration, and sharing with fellow researchers; robust systems of data governance involving lay persons; frankness about the realities of scientific research and innovation with fellow citizens; realism about the complexities of links between researchers, publics and private enterprise; and understanding and engaging with the realities of science today in the real world.