This course explores ethical and social issues that have arisen, and continue to arise, from rapid research and development in neuroscience, medicine and ICT. Lectures focus on key ethical issues within the HBP – such as the ethics of robotics, dual use, ethical issues in ICT, big data and individual privacy, and the use of animals in research.
Research, ethics, and societal impact
Neuroethics has been described as containing at least two components: the neuroscience of ethics and the ethics of neuroscience. The first involves neuroscientific theories, research, and neuroimaging focused on how the brain arrives at moral decisions and actions, which challenge existing descriptive theories of how humans develop moral thinking and make ethical decisions. The second, the ethics of neuroscience, involves applying normative theories about what is right, good and fair to ethical questions raised by neuroscientific research and new technologies, such as how to balance the public benefit of “big data” neuroscience against the protection of individual privacy and norms of informed consent.
The HBP, as an ICT flagship project, crucially relies on ICT and will make important contributions to the development of new computing principles and artefacts. Individuals working on the HBP should therefore be aware of the long history of ethical issues discussed in computing. The discourse on ethics and computing can be traced back to Norbert Wiener and the very beginning of digital computing. From the 1970s and 80s it developed into an active discussion involving academics from various disciplines, professional bodies and industry.
Like any transformative technology, intelligent robotics has the potential for huge benefit, but is not without ethical or societal risk. In this lecture, I will explore two questions. Firstly, the increasingly urgent question of the ethical use of robots: are there particular applications of robots that should be proscribed, in eldercare, or surveillance, or war fighting for example? When intelligent autonomous robots make mistakes, as they inevitably will, who should be held to account? Secondly, I will consider the longer-term question of whether intelligent robots themselves could or should be ethical. Seventy years ago Isaac Asimov created his fictional Three Laws of Robotics. Is there now a realistic prospect that we could build a robot that is Three Laws Safe?
In the face of perceived public concerns about technological innovations, leading national and international bodies increasingly argue that there must be ‘dialogue' between policy makers, scientific researchers, civil society organizations and members of the public, to shape the pathways of technology development in a way that meets societal needs and gains public trust. This is not new, of course, and such concerns go back at least to the debates over the development of nuclear technologies and campaigns for social responsibility in science. Major funding bodies in the UK, Europe and elsewhere are now addressing this issue by insisting on Responsible Research and Innovation (RRI) in the development of emerging technology. Biotechnologies such as synthetic biology and neurotechnologies have become a particular focus of RRI, partly because of the belief that these are risky technologies involving tinkering with the very building blocks of life, and perhaps even with human nature. With my fellow researchers, I have been involved in trying to develop Responsible Research and Innovation in these technologies for several years.
In this lecture, I consider some of the key social and ethical issues raised by the ‘big brain projects’ currently under way in Europe, the USA, China, Japan and many other regions. I will draw upon our own experience in the ‘Foresight Lab’ of the HBP to discuss the ways in which these can usefully be approached from the perspective of responsible research and innovation and the AREA approach - anticipation, reflection, engagement and action. These issues include data protection, privacy and data governance; the search for ‘neural signatures’ of psychiatric and neurological disorders; ‘dual use’, or the military use of developments initially intended for clinical and civilian purposes; brain-computer interfaces and neural prosthetics; and the use of animals in brain research. Following a brief discussion of the challenges of translation from the lab to the real world, I will conclude by arguing that success in contemporary scientific research and innovation is best assured by openness, collaboration and sharing with fellow researchers; robust systems of data governance involving lay persons; frankness with fellow citizens about the realities of scientific research and innovation; realism about the complexities of the links between researchers, publics and private enterprise; and understanding and engaging with the realities of science today in the real world.
The UK Royal Society in its 2012 study of Neuroscience, conflict and security had as its first recommendation that: “There needs to be fresh effort by the appropriate professional bodies to inculcate the awareness of the dual-use challenge (i.e., knowledge and technologies used for beneficial purposes can also be misused for harmful purposes) among neuroscientists at an early stage of their training.” There can be little doubt that the need to raise awareness of this challenge remains among practicing neuroscientists today. This lecture aims to give an introduction and overview of the dual-use challenge as it applies to neuroscience today and will apply in coming decades.
What is ethics in biomedical research? Here, the ethics we discuss concerns how and why we use animals in biomedical research, and what we gain from the way experiments are designed. We will talk about “a common set of values” and how engagement with the 3Rs (replacement, reduction and refinement) can improve experimental procedures and contribute to better experimental outcomes, results and scientific papers in the future.
Artificial intelligence (AI) is increasingly affecting almost all areas of life, from jobs, healthcare and entertainment to public safety and defense. While advances in AI are associated with new opportunities for economic growth and well-being, they also raise major ethical concerns about the impact of AI on social equality, transparency and accountability. In recent years, these issues have acquired a prominent role on the agendas of policy-makers around the world. Today the need to facilitate beneficial development of AI and to regulate it in the public interest is regularly addressed in speeches by political leaders and in policy documents prepared by national governments, international organizations, experts, consulting companies and stakeholders.
A high-level overview of the ethical issues related to data use in such a big, complex and multi-national research initiative as the HBP.
Cognitive functions underlie everything we feel, think, and do. It has often been assumed that the cognitive capacities of an individual, whether human or animal, are fixed, either at birth or at maturation. Yet recent studies have demonstrated that cognitive functions can be modified by a wide variety of factors, many of which are controllable. Some of these, including sleep and meditation, are not currently ethically controversial. But others, especially those which make use of advanced technology or unfamiliar drugs, have been challenged on ethical grounds.
Press headlines frequently refer to robots that think like humans, or even have feelings, but is there any basis of truth in such headlines, or are they simply sensationalist hype? Computer scientist EW Dijkstra famously wrote, “the question of whether machines can think is about as relevant as the question of whether submarines can swim”, but the question of robot thought is one that cannot so easily be dismissed. In this talk I will attempt to answer the question “how intelligent are present day intelligent robots?” and describe efforts to design robots that are not only more intelligent but also have a sense of self. But if we should be successful in designing such robots, would they think like animals, or even humans? And what are the realistic prospects for future (sentient) robots as smart as humans?
Responsible Research and Innovation (RRI) is an important ethical, legal, and political theme for the European Commission. Although variously defined, it is generally understood as an interactive process that engages social actors, researchers, and innovators who must be mutually responsive and work towards the ethical permissibility of the relevant research and its products. The framework of RRI calls for contextually addressing not just research and innovation impact but also the background research process, especially the societal visions underlying it and the norms and priorities that shape scientific agendas.