The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. In this lesson, you will be introduced to brain simulation in general and to The Virtual Brain in particular. Prof. Ritter will present the newest approaches for clinical applications of The Virtual Brain - that is, for stroke, epilepsy, brain tumors, and Alzheimer’s disease - and show how brain simulation can improve diagnostics, therapy, and the understanding of neurological disease.
The concept of neural masses, an application of mean field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration into a coupled network are explained. Bifurcation analysis is presented as an important technique for understanding non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated in the exploration of brain stimulation regimes.
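The workflow described above - node-level mean-field dynamics coupled through a connectivity matrix and advanced with a numerical integrator - can be sketched in a few lines. The model below is a generic Wilson-Cowan-style rate equation, not The Virtual Brain's actual implementation; the parameter names (`tau`, `k`, the constant drive `I`) and the simple Euler scheme are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Sigmoidal firing-rate function mapping membrane state to activity."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_coupled_masses(weights, steps=2000, dt=0.01, tau=1.0, k=0.5, seed=0):
    """Euler-integrate a network of Wilson-Cowan-style neural masses.

    Each node i follows  tau * dx_i/dt = -x_i + S(k * sum_j w_ij * x_j + I),
    a generic mean-field rate equation (illustrative, not TVB's exact model).
    """
    rng = np.random.default_rng(seed)
    n = weights.shape[0]
    x = rng.uniform(0.0, 0.1, size=n)    # small random initial state
    I = 0.5                              # constant external drive (assumed)
    trace = np.empty((steps, n))
    for t in range(steps):
        coupling = weights @ x           # network input to each node
        dx = (-x + sigmoid(k * coupling + I)) / tau
        x = x + dt * dx
        trace[t] = x
    return trace

# Two reciprocally coupled neural masses
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
traj = simulate_coupled_masses(W)
```

Sweeping a parameter such as `k` over a range and recording the fixed points the system settles into is the basic numerical version of the bifurcation analysis mentioned above.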
The simulation of the virtual epileptic patient is presented as an example of advanced brain simulation as a translational approach to deliver improved results in the clinic. The fundamentals of epilepsy are explained. On this basis, the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. In the end, you will be able to perform simple epilepsy simulations on your own.
A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
NWB: An ecosystem for neurophysiology data standardization
This lecture covers the ethical implications of the use of pharmaceuticals to enhance brain functions and was part of the Neuro Day Workshop held by the NeuroSchool of Aix Marseille University.
Introduction to the FAIR Principles and examples of applications of the FAIR Principles in neuroscience. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Introduction to reproducible research. The lecture provides an overview of the core skills and practical solutions required to practice reproducible research. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Neuroethics has been described as containing at least two components - the neuroscience of ethics and the ethics of neuroscience. The first involves neuroscientific theories, research, and neuroimaging focused on how the brain arrives at moral decisions and actions, which challenge existing descriptive theories of how humans develop moral thinking and make ethical decisions. The second, the ethics of neuroscience, involves applying normative theories about what is right, good, and fair to ethical questions raised by neuroscientific research and new technologies, such as how to balance the public benefit of “big data” neuroscience against the protection of individual privacy and norms of informed consent.
The HBP, as an ICT flagship project, crucially relies on ICT and will provide important input to the development of new computing principles and artefacts. Individuals working on the HBP should therefore be aware of the long history of ethical issues discussed in computing. The discourse on ethics and computing can be traced back to Norbert Wiener and the very beginning of digital computing. From the 1970s and 80s it developed into an active discussion involving academics from various disciplines, professional bodies, and industry.
Like any transformative technology, intelligent robotics has the potential for huge benefit, but is not without ethical or societal risk. In this lecture, I will explore two questions. First, the increasingly urgent question of the ethical use of robots: are there particular applications of robots that should be proscribed - in eldercare, surveillance, or warfighting, for example? When intelligent autonomous robots make mistakes, as they inevitably will, who should be held to account? Second, I will consider the longer-term question of whether intelligent robots themselves could or should be ethical. Seventy years ago, Isaac Asimov created his fictional Three Laws of Robotics. Is there now a realistic prospect that we could build a robot that is Three Laws Safe?
In the face of perceived public concerns about technological innovations, leading national and international bodies increasingly argue that there must be ‘dialogue’ between policy makers, scientific researchers, civil society organizations and members of the public, to shape the pathways of technology development in a way that meets societal needs and gains public trust. This is not new, of course, and such concerns go back at least to the debates over the development of nuclear technologies and campaigns for social responsibility in science. Major funding bodies in the UK, Europe and elsewhere are now addressing this issue by insisting on Responsible Research and Innovation (RRI) in the development of emerging technology. Biotechnologies such as synthetic biology and neurotechnologies have become a particular focus of RRI, partly because of the belief that these are risky technologies involving tinkering with the very building blocks of life, and perhaps even with human nature. With my fellow researchers, I have been involved in trying to develop Responsible Research and Innovation in these technologies for several years.
In this lecture, I consider some of the key social and ethical issues raised by the ‘big brain projects’ currently under way in Europe, the USA, China, Japan and many other regions. I will draw upon our own experience in the ‘Foresight Lab’ of the HBP to discuss the ways in which these can usefully be approached from the perspective of responsible research and innovation and the AREA approach - anticipation, reflection, engagement and action. These issues include data protection, privacy and data governance; the search for ‘neural signatures’ of psychiatric and neurological disorders; ‘dual use’, or the military use of developments initially intended for clinical and civilian purposes; brain-computer interfaces and neural prosthetics; and the use of animals in brain research. Following a brief discussion of the challenges of translation from the lab to the real world, I will conclude by arguing that success in contemporary scientific research and innovation is best assured by openness, collaboration, and sharing with fellow researchers; robust systems of data governance involving lay persons; frankness about the realities of scientific research and innovation with fellow citizens; realism about the complexities of the links between researchers, publics and private enterprise; and understanding and engaging with the realities of science today in the real world.
The UK Royal Society in its 2012 study of Neuroscience, conflict and security had as its first recommendation that: “There needs to be fresh effort by the appropriate professional bodies to inculcate the awareness of the dual-use challenge (i.e., knowledge and technologies used for beneficial purposes can also be misused for harmful purposes) among neuroscientists at an early stage of their training.” There can be little doubt that the need to raise awareness of this challenge remains among practicing neuroscientists today. This lecture aims to give an introduction and overview of the dual-use challenge as it applies to neuroscience today and will apply in coming decades.
What is ethics in biomedical research? Here, the ethics we discuss concerns how we think we can use animals in biomedical research and what we gain from the design of such experiments. We will talk about “a common set of values” and how engagement with the 3Rs (replacement, reduction, refinement) can improve experimental procedures and the quality of experimental outcomes, results, and the scientific papers of the future.
Artificial intelligence (AI) is increasingly affecting almost all areas of life, from jobs, healthcare and entertainment to public safety and defense. While advances in AI are associated with new opportunities for economic growth and well-being, they at the same time raise major ethical concerns about the impact of AI on social equality, transparency and accountability. In recent years, these issues have acquired a prominent role on the agendas of policy-makers around the world. Today, the need to facilitate the beneficial development of AI and to regulate it in the public interest is regularly addressed in speeches of political leaders and in policy documents prepared by national governments, international organizations, experts, consulting companies and stakeholders.
A high-level overview of the ethical issues related to data use in such a big, complex, and multi-national research initiative as the HBP.
Cognitive functions underlie everything we feel, think, and do. It has often been assumed that the cognitive capacities of an individual, whether human or animal, are fixed, either at birth or at maturation. Yet recent studies have demonstrated that cognitive functions can be modified by a wide variety of factors, many of which are controllable. Some of these, including sleep and meditation, are not currently ethically controversial. But others, especially those which make use of advanced technology or unfamiliar drugs, have been challenged on ethical grounds.
Press headlines frequently refer to robots that think like humans, or even have feelings, but is there any basis in truth for such headlines, or are they simply sensationalist hype? Computer scientist E. W. Dijkstra famously wrote, “the question of whether machines can think is about as relevant as the question of whether submarines can swim”, but the question of robot thought is one that cannot so easily be dismissed. In this talk I will attempt to answer the question “how intelligent are present-day intelligent robots?” and describe efforts to design robots that are not only more intelligent but also have a sense of self. But if we should be successful in designing such robots, would they think like animals, or even humans? And what are the realistic prospects for future (sentient) robots as smart as humans?
This lecture provides guidance on the ethical considerations the clinical neuroimaging community faces when applying the FAIR principles to their research. This lecture was part of the FAIR approaches for neuroimaging research session at the 2020 INCF Assembly.