This tutorial demonstrates how to work with neuronal data using MATLAB, including action potentials and spike counts, orientation tuning curves in visual cortex, and spatial maps of firing rates.
This lesson instructs users on how to import electrophysiological neural data into MATLAB, as well as how to convert spikes to a data matrix.
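The central step in that workflow, turning lists of spike times into a trials × time-bins count matrix, can be sketched briefly. The lesson itself works in MATLAB; the minimal Python equivalent below uses invented spike times and an assumed 50 ms bin width purely for illustration.

```python
import numpy as np

# Hypothetical spike times (seconds) for three trials of a 1 s recording.
spike_times = [
    np.array([0.012, 0.105, 0.340, 0.782]),
    np.array([0.054, 0.221, 0.410]),
    np.array([0.033, 0.150, 0.298, 0.601, 0.944]),
]

bin_width = 0.050                                   # assumed 50 ms bins
edges = np.arange(0.0, 1.0 + bin_width, bin_width)  # bin edges over the trial

# One row per trial, one column per time bin: the spike-count data matrix.
counts = np.vstack([np.histogram(st, bins=edges)[0] for st in spike_times])
print(counts.shape)  # (3, 20)
```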
This talk gives an overview of the Human Brain Project, a 10-year endeavour putting in place a cutting-edge research infrastructure that will allow scientific and industrial researchers to advance our knowledge in the fields of neuroscience, computing, and brain-related medicine.
This lecture gives an introduction to the European Academy of Neurology, its recent achievements and ambitions.
This lecture discusses the importance of and need for data sharing in clinical neuroscience.
This lecture gives insights into the Medical Informatics Platform's current and future data privacy model.
This lecture gives an overview of the European Health Dataspace.
This lecture presents the Medical Informatics Platform's data federation in epilepsy.
This lesson is a general overview of overarching concepts in neuroinformatics research, with a particular focus on clinical approaches to defining, measuring, studying, diagnosing, and treating various brain disorders. Also described are the complex, multi-level nature of brain disorders and the data associated with them, from genes and individual cells up to cortical microcircuits and whole-brain network dynamics. Given the heterogeneity of brain disorders and their underlying mechanisms, this lesson lays out a case for multiscale neuroscience data integration.
This lesson describes the fundamentals of genomics, from the central dogma to the design and implementation of genome-wide association studies (GWAS), to the computation, analysis, and interpretation of polygenic risk scores.
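In its simplest form, a polygenic risk score is a weighted sum of an individual's risk-allele dosages, with weights taken from GWAS effect sizes. Here is a minimal sketch with invented effect sizes and genotypes (not values from the lesson):

```python
import numpy as np

# Hypothetical GWAS effect sizes (e.g. log odds ratios) for five variants.
betas = np.array([0.12, -0.05, 0.30, 0.08, -0.22])

# Effect-allele dosages (0, 1, or 2 copies) for three individuals.
dosages = np.array([
    [0, 1, 2, 0, 1],
    [1, 1, 0, 2, 0],
    [2, 0, 1, 1, 1],
])

# Polygenic risk score: dosage-weighted sum of effect sizes per individual.
prs = dosages @ betas
print(prs)  # one score per individual
```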
This lesson is an overview of transcriptomics, from fundamental concepts of the central dogma and RNA sequencing at the single-cell level, to how gene expression underlies diversity in cell phenotypes.
This lesson explains the fundamental principles of neuronal communication, such as neuronal spiking, membrane potentials, and cellular excitability, and how these electrophysiological features of the brain may be modelled and simulated digitally.
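One of the simplest digital models of the kind the lesson describes is the leaky integrate-and-fire neuron, in which the membrane potential decays toward rest, integrates input current, and is reset after crossing a spike threshold. The sketch below uses textbook-style parameter values chosen for illustration, not values from the lesson.

```python
import numpy as np

# Leaky integrate-and-fire: tau_m * dV/dt = -(V - E_L) + R * I,
# with a spike and reset whenever V reaches threshold.
dt, T = 1e-4, 0.5                # time step and duration (s)
tau_m, E_L = 20e-3, -70e-3       # membrane time constant (s), resting potential (V)
V_th, V_reset = -50e-3, -65e-3   # spike threshold and reset potential (V)
R, I = 100e6, 0.21e-9            # membrane resistance (ohm), input current (A)

V, spikes = E_L, []
for step in range(int(T / dt)):
    V += dt * (-(V - E_L) + R * I) / tau_m   # Euler integration step
    if V >= V_th:                            # threshold crossing: spike
        spikes.append(step * dt)
        V = V_reset                          # reset after the spike

print(f"{len(spikes)} spikes in {T} s")
```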
This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health.
This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.
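The core computation is the same throughout: the posterior is proportional to likelihood times prior. A toy two-hypothesis example (with invented probabilities) makes the update concrete:

```python
import numpy as np

# Two hypotheses about a stimulus, e.g. motion to the left vs. the right.
prior = np.array([0.5, 0.5])          # belief before seeing the evidence

# Probability of the observed sensory evidence under each hypothesis
# (made-up values for illustration).
likelihood = np.array([0.8, 0.3])

# Bayes' rule: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()
print(posterior)  # approx. [0.73, 0.27]
```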
This lesson corresponds to slides 1-64 in the PDF below.
This lecture covers next-generation science with Jupyter. It was part of the 2019 Neurohackademy, a two-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
This is the Introductory Module to the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.
This module covers the concepts of gradient descent and the backpropagation algorithm and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.
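The gradient descent half of the module reduces to one update rule: step each parameter against the gradient of the loss. A minimal sketch on least-squares linear regression with synthetic data (the learning rate and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # synthetic inputs
y = X @ np.array([2.0, -3.0]) + 0.5      # targets from known weights and bias

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    err = X @ w + b - y                  # residuals
    grad_w = X.T @ err / len(y)          # gradient of the squared-error loss w.r.t. w
    grad_b = err.mean()                  # gradient w.r.t. b
    w -= lr * grad_w                     # descent step
    b -= lr * grad_b

print(w, b)  # approaches [2, -3] and 0.5
```

Backpropagation generalizes the same idea: the chain rule propagates such gradients backward through a multi-layer network.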
This lecture on modules and architectures is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Data Science or a graduate-level machine learning course.
This lecture covers parameter sharing in recurrent and convolutional nets and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning and either Introduction to Data Science or a graduate-level machine learning course.
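Parameter sharing is easiest to see in one dimension: a convolutional layer slides a single small kernel across the input, so its parameter count is fixed regardless of input length, whereas a fully connected layer needs a separate weight for every input-output pair. A toy sketch (following the cross-correlation convention used by deep learning libraries):

```python
import numpy as np

x = np.arange(10, dtype=float)        # toy 1-D input of length 10
kernel = np.array([1.0, 0.0, -1.0])   # 3 shared weights, reused at every position

# "Valid" convolution: the same 3 parameters produce all 8 outputs.
out = np.array([x[i:i + 3] @ kernel for i in range(len(x) - len(kernel) + 1)])
print(out)

# A dense map from 10 inputs to 8 outputs would instead need 10 * 8 = 80 weights.
```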
This lecture covers convolutional nets in practice and is part of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include Introduction to Deep Learning and either Introduction to Data Science or a graduate-level machine learning course.