This lecture covers modeling the neuron in silicon, and modeling vision, audition, and sensory fusion using a deep network.
Presentation of past and present neurocomputing approaches and hybrid analog/digital circuits that directly emulate the properties of neurons and synapses.
Presentation of the Brian neural simulator, where models are defined directly by their mathematical equations and code is automatically generated for each specific target.
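The idea of defining a model directly by its mathematical equations can be illustrated with the leaky integrate-and-fire neuron, the standard introductory model in simulators such as Brian. The sketch below is plain Python with illustrative parameter values (it does not use Brian's actual API, and the function name and defaults are my own): it integrates tau * dv/dt = I - v with the forward-Euler method and emits a spike whenever the membrane variable crosses threshold.

```python
# Minimal sketch of an equation-defined neuron model (illustrative
# parameters; a simulator like Brian generates this integration code
# automatically from the model equations).
# Leaky integrate-and-fire: tau * dv/dt = I - v; spike when v > v_th.

def simulate_lif(i_input=1.5, tau=10.0, v_th=1.0, v_reset=0.0,
                 dt=0.1, t_max=100.0):
    """Forward-Euler integration; returns the list of spike times."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (i_input - v) / tau   # dv/dt = (I - v) / tau
        if v > v_th:                    # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                 # reset membrane after spike
        t += dt
    return spikes

spikes = simulate_lif()
print(f"{len(spikes)} spikes in 100 time units")
```

With a suprathreshold input (steady-state v would exceed v_th) the neuron fires regularly; with a subthreshold input it never spikes. Brian accepts such equations as strings and generates optimized code for each target backend, rather than requiring a hand-written loop like this one.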
The lecture covers a brief introduction to neuromorphic engineering, some of the neuromorphic networks that the speaker has developed, and their potential applications, particularly in machine learning.
This lecture covers structured data, databases, the federation of neuroscience-relevant databases, and ontologies.
The "connectome" is a term, coined in the past decade, that has been used to describe more than one phenomenon in neuroscience. This lecture explains the basics of structural connections at the micro-, meso- and macroscopic scales.
Neuroethics has been described as containing at least two components: the neuroscience of ethics and the ethics of neuroscience. The first involves neuroscientific theories, research, and neuroimaging focused on how the brain arrives at moral decisions and actions, findings that challenge existing descriptive theories of how humans develop moral thinking and make ethical decisions. The second, the ethics of neuroscience, involves applying normative theories about what is right, good and fair to ethical questions raised by neuroscientific research and new technologies, such as how to balance the public benefit of “big data” neuroscience with protecting individual privacy and norms of informed consent.
The HBP as an ICT flagship project crucially relies on ICT and will contribute important input into the development of new computing principles and artefacts. Individuals working on the HBP should therefore be aware of the long history of ethical issues discussed in computing. The discourse on ethics and computing can be traced back to Norbert Wiener and the very beginning of digital computing. From the 1970s and 80s it has developed into an active discussion involving academics from various disciplines, professional bodies and industry.
Like any transformative technology, intelligent robotics has the potential for huge benefit, but is not without ethical or societal risk. In this lecture, I will explore two questions. Firstly, the increasingly urgent question of the ethical use of robots: are there particular applications of robots that should be proscribed, in eldercare, or surveillance, or war fighting for example? When intelligent autonomous robots make mistakes, as they inevitably will, who should be held to account? Secondly, I will consider the longer-term question of whether intelligent robots themselves could or should be ethical. Seventy years ago Isaac Asimov created his fictional Three Laws of Robotics. Is there now a realistic prospect that we could build a robot that is Three Laws Safe?
In the face of perceived public concerns about technological innovations, leading national and international bodies increasingly argue that there must be ‘dialogue' between policy makers, scientific researchers, civil society organizations and members of the public, to shape the pathways of technology development in a way that meets societal needs and gains public trust. This is not new, of course, and such concerns go back at least to the debates over the development of nuclear technologies and campaigns for social responsibility in science. Major funding bodies in the UK, Europe and elsewhere are now addressing this issue by insisting on Responsible Research and Innovation (RRI) in the development of emerging technology. Biotechnologies such as synthetic biology and neurotechnologies have become a particular focus of RRI, partly because of the belief that these are risky technologies involving tinkering with the very building blocks of life, and perhaps even with human nature. With my fellow researchers, I have been involved in trying to develop Responsible Research and Innovation in these technologies for several years.
In this lecture, I consider some of the key social and ethical issues raised by the ‘big brain projects’ currently under way in Europe, the USA, China, Japan and many other regions. I will draw upon our own experience in the ‘Foresight Lab’ of the HBP to discuss the ways in which these can usefully be approached from the perspective of responsible research and innovation and the AREA approach - anticipation, reflection, engagement and action. These issues include data protection, privacy and data governance; the search for ‘neural signatures’ of psychiatric and neurological disorders; ‘dual use’, or the military use of developments initially intended for clinical and civilian purposes; brain-computer interfaces and neural prosthetics; and the use of animals in brain research. Following a brief discussion of the challenges of translation from the lab to the real world, I will conclude by arguing that success in contemporary scientific research and innovation is best assured by openness, collaboration and sharing with fellow researchers; robust systems of data governance involving lay persons; frankness with fellow citizens about the realities of scientific research and innovation; realism about the complexities of the links between researchers, publics and private enterprise; and understanding and engaging with the realities of science today in the real world.
The UK Royal Society in its 2012 study of Neuroscience, conflict and security had as its first recommendation that: “There needs to be fresh effort by the appropriate professional bodies to inculcate the awareness of the dual-use challenge (i.e., knowledge and technologies used for beneficial purposes can also be misused for harmful purposes) among neuroscientists at an early stage of their training.” There can be little doubt that the need to raise awareness of this challenge remains among practicing neuroscientists today. This lecture aims to give an introduction and overview of the dual-use challenge as it applies to neuroscience today and will apply in coming decades.
What is ethics in biomedical research? Here, the ethics in question concerns how animals may be used in biomedical research and what is gained from the design of experiments. We will discuss “a common set of values” and how engagement with the 3Rs can improve experimental procedures and lead to better experimental outcomes, results, and scientific publications in the future.
In the European Union (EU), animals are recognized as having an intrinsic value that must be respected. Since 1986, the EU has provided specific legislation to protect animals used for scientific purposes. The Directive was extensively updated in 2010 with the aim of strengthening the legislation, improving the welfare of those animals still needed in scientific procedures, and firmly adopting the principle of the 3Rs (Replacement, Reduction and Refinement). Directive 2010/63 is widely recognized as the world’s most stringent and progressive legal framework for protecting animals used in scientific procedures. According to the Eurobarometer of March 2016 on Europeans’ attitudes towards animal welfare, animal welfare and animal protection are clearly very important issues for European citizens.
Research integrity has become an increasingly important aspect of modern research. Problems such as the reproducibility crisis and fierce pressure on academics to succeed motivate organisations at all levels to engage with initiatives that support good research conduct. But what is research integrity? How does it differ from ethics?
Cognitive functions underlie everything we feel, think, and do. It has often been assumed that the cognitive capacities of an individual, whether human or animal, are fixed, either at birth or at maturation. Yet recent studies have demonstrated that cognitive functions can be modified by a wide variety of factors, many of which are controllable. Some of these, including sleep and meditation, are not currently ethically controversial. But others, especially those which make use of advanced technology or unfamiliar drugs, have been challenged on ethical grounds.
Press headlines frequently refer to robots that think like humans, or even have feelings, but is there any basis in truth for such headlines, or are they simply sensationalist hype? Computer scientist E. W. Dijkstra famously wrote, “the question of whether machines can think is about as relevant as the question of whether submarines can swim”, but the question of robot thought is one that cannot so easily be dismissed. In this talk I will attempt to answer the question “how intelligent are present-day intelligent robots?” and describe efforts to design robots that are not only more intelligent but also have a sense of self. But if we should be successful in designing such robots, would they think like animals, or even humans? And what are the realistic prospects for future (sentient) robots as smart as humans?