This video gives a brief introduction to Neuro4ML's lessons on neuromorphic computing - the use of specialized hardware that either directly mimics brain function or is inspired by some aspect of the way the brain computes.
In this lesson, you will learn in more detail about neuromorphic computing, that is, non-standard computational architectures that mimic some aspect of the way the brain works.
This video provides a very quick introduction to some neuromorphic sensing devices and how they enable unique, low-power applications.
This lecture covers modeling the neuron in silicon, modeling vision and audition, and sensory fusion using a deep network.
This lesson presents simulation software for spatial model neurons and their networks, designed primarily for GPUs.
This lesson gives an overview of past and present neurocomputing approaches and hybrid analog/digital circuits that directly emulate the properties of neurons and synapses.
This lesson presents the Brian neural simulator, in which models are defined directly by their mathematical equations and code is automatically generated for each specific target.
The lecture covers a brief introduction to neuromorphic engineering, some of the neuromorphic networks that the speaker has developed, and their potential applications, particularly in machine learning.
This is the first of two workshops on reproducibility in science, during which participants are introduced to the concepts of FAIR and open science. After discussing the definition of and need for FAIR science, participants are walked through tutorials on installing and using GitHub and Docker, two powerful open-source tools for version control and containerization of code and software, respectively.
In this lesson, while learning about the need for increased large-scale collaborative science that is transparent in nature, users are also given a tutorial on using Synapse to facilitate reusable and reproducible research.
This lesson contains the first part of the lecture Data Science and Reproducibility. You will learn about the development of data science and what the term currently encompasses, as well as how neuroscience and data science intersect.
In this second part of the lecture Data Science and Reproducibility, you will learn how to apply the awareness of the intersection between neuroscience and data science (discussed in part one) to an understanding of the current reproducibility crisis in biomedical science and neuroscience.
The lecture provides an overview of the core skills and practical solutions required to practice reproducible research.
This lecture provides an introduction to reproducibility issues within the fields of neuroimaging and fMRI, as well as an overview of tools and resources being developed to alleviate the problem.
This lecture provides a historical perspective on reproducibility in science, as well as the current limitations of neuroimaging studies to date. This lecture also lays out a case for the use of meta-analyses, outlining available resources to conduct such analyses.
This workshop will introduce reproducible workflows and a range of tools along the themes of organisation, documentation, analysis, and dissemination.
This talk discusses data sharing in the context of dementia. It explains why data sharing in dementia research is important and how data are usually shared in the field, and illustrates two examples: the Netherlands Consortium of Dementia Cohorts and the European Platform for Neurodegenerative Diseases.
The Medical Informatics Platform (MIP) Dementia has been installed in several memory clinics across Europe, allowing them to federate their real-world databases. Open-access research databases such as ADNI (Alzheimer's Disease Neuroimaging Initiative) have also been integrated, reaching a cumulative case load of more than 5,000 patients (major cognitive disorder due to Alzheimer's disease, other major cognitive disorder, minor cognitive disorder, and controls). The statistical and machine learning tools implemented in the MIP have allowed researchers to easily conduct federated analyses among Italian memory clinics (Redolfi et al. 2020) and also across borders between the French (Lille), Swiss (Lausanne), and Italian (Brescia) datasets.
The number of patients with dementia is estimated to increase given the aging population. This will lead to a number of future challenges in the diagnosis and care of patients with dementia. Meeting these needs, such as early diagnosis and the development of prognostic biomarkers, will require large datasets, such as the federated datasets on dementia. The EAN Dementia and Cognitive Disorders scientific panel can play an important role as a coordinator, connecting panel members who wish to participate in, for example, consortia.