This lecture introduces neuroscience concepts and methods such as fMRI, visual responses in BOLD data, and the eccentricity of visual receptive fields.
In this tutorial, users learn how to compute and visualize a t-test on experimental condition differences.
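The core computation in such a tutorial can be sketched as follows (a minimal illustration, not the tutorial's own code: the condition data here are simulated, and the effect size and sample sizes are made up):

```python
# Sketch: an independent-samples t-test between two hypothetical
# experimental conditions, with a simple visualization.
import numpy as np
from scipy import stats
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
cond_a = rng.normal(loc=1.0, scale=1.0, size=30)  # simulated responses, condition A
cond_b = rng.normal(loc=1.6, scale=1.0, size=30)  # simulated responses, condition B

# Two-sample t-test on the condition difference
t_stat, p_value = stats.ttest_ind(cond_a, cond_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Visualize the two distributions side by side
fig, ax = plt.subplots()
ax.boxplot([cond_a, cond_b])
ax.set_xticklabels(["Condition A", "Condition B"])
ax.set_ylabel("Response")
ax.set_title(f"t = {t_stat:.2f}, p = {p_value:.4f}")
fig.savefig("ttest_conditions.png")
```

Real data would replace the two simulated arrays; everything downstream of them stays the same.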
This lesson continues with the second workshop on reproducible science, focusing on additional open-source tools for researchers and data scientists: the R programming language for data science, along with associated tools such as RStudio and R Markdown. Users are also introduced to Python and IPython notebooks and Google Colab, and are given hands-on tutorials on creating a Binder environment, as well as building containers in Docker and Singularity.
This is a hands-on tutorial on PLINK, the open source whole genome association analysis toolset. The aims of this tutorial are to teach users how to perform basic quality control on genetic datasets, as well as to identify and understand GWAS summary statistics.
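A basic quality-control pass of the kind such a tutorial covers typically looks like the following PLINK invocation (a sketch only: the file names and thresholds are illustrative choices, not taken from the tutorial, and `plink` is assumed to be on the PATH):

```shell
# Sketch of a basic PLINK quality-control pass (illustrative values).
#   --maf 0.01   drop variants with minor-allele frequency < 1%
#   --geno 0.05  drop variants missing in > 5% of samples
#   --mind 0.10  drop samples missing > 10% of their genotypes
#   --hwe 1e-6   drop variants failing the Hardy-Weinberg test
plink --bfile mystudy --maf 0.01 --geno 0.05 --mind 0.10 --hwe 1e-6 \
      --make-bed --out mystudy_qc

# Basic case/control association test producing GWAS summary statistics
plink --bfile mystudy_qc --assoc --out mystudy_gwas
```

The resulting `.assoc` output contains per-variant summary statistics (chromosome, position, allele frequencies, test statistic, p-value) of the kind the tutorial teaches users to identify and interpret.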
This video demonstrates how to run a correlation analysis between the gray matter volumes of two different structures using the output of the brainlife app-freesurfer-stats.
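The analysis itself reduces to correlating two per-subject volume vectors. A minimal sketch (the structure names and volume values below are invented for illustration; in the video they would come from the app-freesurfer-stats output):

```python
# Sketch: Pearson correlation between the gray matter volumes of two
# structures across subjects (values in mm^3, invented for illustration).
import numpy as np
from scipy import stats

structure_1 = np.array([4100, 4350, 3980, 4500, 4220, 4010, 4440, 4150])
structure_2 = np.array([1650, 1720, 1580, 1800, 1690, 1600, 1770, 1660])

r, p = stats.pearsonr(structure_1, structure_2)
print(f"r = {r:.2f}, p = {p:.4f}")
```

Each array holds one value per subject for one structure; the correlation then asks whether subjects with a larger first structure also tend to have a larger second structure.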
Whereas the previous lesson of this course described how researchers acquire neural data, this lesson discusses how to interpret and analyse those data.
In this lesson, you will learn about one particular aspect of decision making: reaction times. In other words, how long does it take to make a decision based on a stream of information arriving continuously over time?
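One standard way to formalize this question is an evidence-accumulation (drift-diffusion-style) model: noisy evidence accumulates over time, and the decision is made when the total crosses a threshold, so the crossing time is the reaction time. A minimal sketch, with all parameter values chosen purely for illustration:

```python
# Sketch: noisy evidence accumulates each time step; a decision is made
# when the running total crosses a threshold, and the number of steps is
# the reaction time. All parameters are illustrative.
import numpy as np

def simulate_trial(drift=0.1, noise=1.0, threshold=10.0, dt=1.0, rng=None):
    """Accumulate noisy evidence until a bound is hit; return (choice, rt)."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        # mean drift toward the correct choice plus moment-to-moment noise
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += 1
    return (1 if evidence > 0 else -1), t

rng = np.random.default_rng(0)
trials = [simulate_trial(rng=rng) for _ in range(200)]
choices, rts = zip(*trials)
print(f"mean RT: {np.mean(rts):.1f} steps, "
      f"P(choice = +1): {np.mean([c == 1 for c in choices]):.2f}")
```

Raising the threshold makes decisions slower but more accurate, while raising the drift rate (stronger evidence) makes them faster and more accurate, which is the speed-accuracy trade-off at the heart of reaction-time analyses.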
This talk discusses data sharing in the context of dementia. It explains why data sharing in dementia is important and how data are usually shared in the field, and illustrates this with two examples: the Netherlands Consortium of Dementia Cohorts and the European Platform for Neurodegenerative Diseases.
The Medical Informatics Platform (MIP) Dementia has been installed in several memory clinics across Europe, allowing them to federate their real-world databases. Open-access research databases such as ADNI (Alzheimer's Disease Neuroimaging Initiative) have also been integrated, reaching a cumulative case load of more than 5,000 patients (major cognitive disorder due to Alzheimer's disease, other major cognitive disorder, minor cognitive disorder, and controls). The statistical and machine learning tools implemented in the MIP have allowed researchers to easily conduct federated analyses among Italian memory clinics (Redolfi et al. 2020) and also across borders between the French (Lille), Swiss (Lausanne), and Italian (Brescia) datasets.
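The essence of such a federated analysis is that each site computes only summary statistics locally, and a pooled estimate is assembled from those summaries, so raw patient data never leave a site. A toy sketch of that idea (the site names echo the text, but the data and the cognitive-score variable are entirely made up and are not from the MIP):

```python
# Sketch: federated computation of a pooled mean. Each site shares only
# (n, mean, variance); raw data stay local. All values are invented.
import numpy as np

def local_summary(data):
    """Computed at each site; only these three numbers are shared."""
    return len(data), float(np.mean(data)), float(np.var(data, ddof=1))

rng = np.random.default_rng(0)
sites = {
    "Lille":    rng.normal(70, 8, size=120),  # hypothetical cognitive scores
    "Lausanne": rng.normal(72, 7, size=90),
    "Brescia":  rng.normal(69, 9, size=150),
}

summaries = {name: local_summary(d) for name, d in sites.items()}

# The coordinator combines per-site summaries into a pooled estimate
total_n = sum(n for n, _, _ in summaries.values())
pooled_mean = sum(n * m for n, m, _ in summaries.values()) / total_n
print(f"pooled mean over {total_n} cases: {pooled_mean:.2f}")
```

The pooled mean computed this way is identical to the mean one would get from the concatenated raw data, which is what makes summary-statistics federation attractive for privacy-sensitive clinical databases.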
The number of patients with dementia is expected to increase as the population ages, which will create challenges for the diagnosis and care of patients with dementia. Meeting these needs, such as early diagnosis and the development of prognostic biomarkers, requires large datasets, such as federated datasets on dementia. The EAN Dementia and cognitive disorders scientific panel can play an important role as a coordinator, connecting panel members who wish to participate in, for example, consortia.
This lesson gives an in-depth introduction of ethics in the field of artificial intelligence, particularly in the context of its impact on humans and public interest. As the healthcare sector becomes increasingly affected by the implementation of ever stronger AI algorithms, this lecture covers key interests which must be protected going forward, including privacy, consent, human autonomy, inclusiveness, and equity.
This lesson describes a definitional framework for fairness and health equity in the age of the algorithm. While acknowledging the impressive capability of machine learning to positively affect health equity, this talk outlines potential (and actual) pitfalls which come with such powerful tools, ultimately making the case for collaborative, interdisciplinary, and transparent science as a way to operationalize fairness in health equity.
This lesson is the first part of a three-part series on the development of neuroinformatic infrastructure to ensure compliance with European data privacy standards and laws.
This is the second of three lectures around current challenges and opportunities facing neuroinformatic infrastructure for handling sensitive data.
This lesson contains the first part of the lecture Data Science and Reproducibility. You will learn about the development of data science and what the term currently encompasses, as well as how neuroscience and data science intersect.
This lecture gives a tour of what neuroethics is and how it applies to neuroscience and neurotechnology, while also addressing justice concerns within both fields.
This lecture presents selected theories of ethics as applied to questions raised by the Human Brain Project.
The HBP, as an ICT flagship project, crucially relies on ICT and will contribute important input to the development of new computing principles and artefacts. Individuals working on the HBP should therefore be aware of the long history of ethical issues discussed in computing. This lesson provides an overview of the most widely discussed ethical issues in computing and demonstrates that privacy and data protection are by no means the only issues worth worrying about.
This lecture explores two questions regarding the ethics of robot development and use. Firstly, the increasingly urgent question of the ethical use of robots: are there particular applications of robots that should be proscribed, in eldercare, or surveillance, or combat? Secondly, the talk deals with the longer-term question of whether intelligent robots themselves could or should be ethical.
In this lesson, attendees will learn about the challenges involved in working with life scientists to enhance their capacity for understanding, and taking responsibility for, the social implications of their research.