This lesson is a general overview of overarching concepts in neuroinformatics research, with a particular focus on clinical approaches to defining, measuring, studying, diagnosing, and treating various brain disorders. Also described are the complex, multi-level nature of brain disorders and the data associated with them, from genes and individual cells up to cortical microcircuits and whole-brain network dynamics. Given the heterogeneity of brain disorders and their underlying mechanisms, this lesson lays out a case for multiscale neuroscience data integration.
This lesson describes the fundamentals of genomics, from the central dogma to the design and implementation of genome-wide association studies (GWAS), and on to the computation, analysis, and interpretation of polygenic risk scores.
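To make that last step concrete, here is a minimal sketch of how a polygenic risk score can be computed as a weighted sum of risk-allele dosages; the variant IDs, effect sizes, and genotypes below are hypothetical stand-ins, not values from the lesson.

```python
# Minimal polygenic risk score (PRS) sketch: the score is the sum over
# variants of (GWAS effect size x risk-allele dosage).
# All variant IDs, effect sizes, and dosages here are hypothetical.

# GWAS summary statistics: per-variant effect sizes (e.g., log odds ratios)
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

# One individual's risk-allele dosages (0, 1, or 2 copies per variant)
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

prs = sum(effect_sizes[snp] * dosages[snp] for snp in effect_sizes)
print(f"Polygenic risk score: {prs:.3f}")  # 0.190
```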
This lesson is an overview of transcriptomics, from fundamental concepts of the central dogma and RNA sequencing at the single-cell level, to how gene expression underlies diversity in cell phenotypes.
This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions like learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.
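As a toy illustration of the Bayesian view of perception described here, the sketch below combines a prior belief with a likelihood to form a posterior via Bayes' rule; the hypotheses and probabilities are invented for illustration.

```python
# Bayes' rule: P(H | data) = P(data | H) * P(H) / P(data)
# Hypotheses and numbers are invented for illustration.
priors = {"object_near": 0.3, "object_far": 0.7}       # P(H)
likelihoods = {"object_near": 0.8, "object_far": 0.2}  # P(sensory data | H)

evidence = sum(likelihoods[h] * priors[h] for h in priors)  # P(data)
posterior = {h: likelihoods[h] * priors[h] / evidence for h in priors}
print(posterior)  # object_near ~0.63, object_far ~0.37: belief shifts toward "near"
```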
This lesson corresponds to slides 1-64 in the PDF below.
This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into ontologies and knowledge graphs.
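For readers new to the idea, a knowledge graph can be pictured as a set of (subject, predicate, object) triples queried by pattern; the toy sketch below uses invented entities and relations and glosses over real triple stores and query languages such as SPARQL.

```python
# Toy knowledge graph: facts stored as (subject, predicate, object) triples.
# Entities and relations are invented examples.
triples = [
    ("hippocampus", "is_part_of", "temporal_lobe"),
    ("pyramidal_cell", "is_located_in", "hippocampus"),
    ("pyramidal_cell", "is_a", "neuron"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(predicate="is_a"))  # [('pyramidal_cell', 'is_a', 'neuron')]
```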
This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.
In this final lecture of the INCF Short Course: Introduction to Neuroinformatics, you will hear about new advances in the application of machine learning methods to clinical neuroscience data. In particular, this talk discusses the performance of SynthSeg, an image segmentation tool for automated analysis of highly heterogeneous brain MRI clinical scans.
This lesson characterizes different types of learning in a neuroscientific and cellular context, and various models employed by researchers to investigate the mechanisms involved.
In this lesson, you will learn about different approaches to modeling learning in neural networks, particularly focusing on how system parameters such as firing rates and synaptic weights impact a network's behavior.
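As a rough sketch of the kind of model discussed here, the code below integrates a small firing-rate network whose steady-state rates are shaped by the synaptic weight matrix; the network size, weights, and time constants are arbitrary choices, not values from the lesson.

```python
# Firing-rate network sketch: rates relax toward a steady state set by
# recurrent input W @ r plus external input. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N = 5                                  # number of units
W = 0.1 * rng.standard_normal((N, N))  # synaptic weight matrix
I_ext = np.ones(N)                     # constant external input
tau, dt = 10.0, 1.0                    # time constant and step (ms)

r = np.zeros(N)                        # firing rates
for _ in range(200):
    # Euler step of dr/dt = (-r + relu(W r + I_ext)) / tau
    r += dt / tau * (-r + np.maximum(0.0, W @ r + I_ext))
print(np.round(r, 3))
```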
This lesson describes spike timing-dependent plasticity (STDP), a biological process that adjusts the strength of connections between neurons in the brain, and how one can implement or mimic this process in a computational model. You will also find links for practical exercises at the bottom of this page.
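A common starting point for implementing STDP is the pair-based rule with exponential windows, sketched below; the amplitudes and time constants are illustrative, and the practical exercises linked on the page may use different conventions.

```python
# Pair-based STDP sketch: the weight change depends on the timing
# difference between pre- and postsynaptic spikes.
# Amplitudes and time constants are illustrative.
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:  # pre before post -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # post before pre -> depression

for delta_t in (-40, -10, 10, 40):
    print(f"t_post - t_pre = {delta_t:+d} ms -> dw = {stdp_dw(delta_t):+.4f}")
```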
In this lesson, you will learn about some of the many methods to train spiking neural networks (SNNs) that either make no use of gradients or use gradients only in a limited or constrained way.
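One family of gradient-free approaches treats training as black-box search. The sketch below shows the bare accept/reject loop on a trivial non-spiking objective; in an SNN setting the same loop would perturb synaptic weights and score the network by, e.g., spike-count error. Everything here is an invented toy, not a method taken from the lesson.

```python
# Gradient-free training as black-box search: propose a random weight
# perturbation, keep it only if the loss improves. Toy non-spiking task.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(3)                   # "synaptic weights"
x, target = np.array([1.0, 0.5, -0.5]), 1.0  # toy input and target output

def loss(w):
    return (w @ x - target) ** 2

for _ in range(1000):
    candidate = w + 0.1 * rng.standard_normal(3)  # random perturbation
    if loss(candidate) < loss(w):                 # greedy accept/reject
        w = candidate
print(f"final loss: {loss(w):.6f}")
```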
In this lesson, you will learn how to train spiking neural networks (SNNs) with a surrogate gradient method.
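The core trick can be shown in a few lines of PyTorch: the forward pass keeps the hard spike threshold, while the backward pass substitutes a smooth surrogate derivative so gradients can flow. The fast-sigmoid surrogate and its scale below are common choices, not necessarily the ones used in the lesson.

```python
# Surrogate gradient sketch: hard threshold forward, smooth derivative backward.
import torch

class SurrogateSpike(torch.autograd.Function):
    scale = 10.0  # steepness of the surrogate derivative

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # non-differentiable Heaviside spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate: 1 / (1 + scale * |v|)^2
        return grad_output / (1.0 + SurrogateSpike.scale * v.abs()) ** 2

v = torch.randn(4, requires_grad=True)  # membrane potentials minus threshold
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)  # nonzero gradients despite the hard threshold in the forward pass
```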
This lecture and tutorial focus on measuring human functional brain networks, as well as how to account for the inherent variability within those networks.
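In its simplest form, measuring a functional network amounts to correlating regional time series, as in the sketch below; the synthetic data stand in for, e.g., regional BOLD signals, and real pipelines add preprocessing and thresholding steps.

```python
# Functional connectivity sketch: pairwise correlations between regional
# time series form the network's connectivity matrix. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 4, 200
ts = rng.standard_normal((n_regions, n_timepoints))
ts[1] += 0.8 * ts[0]  # make regions 0 and 1 co-fluctuate

fc = np.corrcoef(ts)  # functional connectivity matrix (n_regions x n_regions)
print(np.round(fc, 2))
```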
This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.
This lesson provides a brief introduction to the Computational Modeling of Neuronal Plasticity.
In this lesson, you will be introduced to a type of neuronal model known as the leaky integrate-and-fire (LIF) model.
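A minimal LIF implementation looks like the sketch below: the membrane potential decays toward rest, integrates input current, and is reset after crossing threshold. The parameter values are illustrative, not taken from the lesson.

```python
# Leaky integrate-and-fire (LIF) sketch. Parameter values are illustrative.
tau_m, R = 10.0, 1.0                         # time constant (ms), resistance
V_rest, V_th, V_reset = -70.0, -54.0, -70.0  # potentials (mV)
dt, T, I = 0.1, 100.0, 20.0                  # step, duration (ms), input current

V, spike_times = V_rest, []
for step in range(round(T / dt)):
    # Euler step of tau_m * dV/dt = -(V - V_rest) + R * I
    V += dt / tau_m * (-(V - V_rest) + R * I)
    if V >= V_th:                            # threshold crossing -> spike
        spike_times.append(step * dt)
        V = V_reset                          # reset after the spike
print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.1f} ms")
```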
This lesson goes over various potential inputs to neuronal synapses, the loci of neural communication.
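One widely used input model, sketched below, is an exponentially decaying synaptic conductance that jumps at each presynaptic spike and drives a current g * (E_syn - V); the spike times and parameters are invented for illustration.

```python
# Exponential synaptic conductance sketch: g jumps on each presynaptic
# spike and decays between spikes; the synaptic current is g * (E_syn - V).
# Spike times and parameters are invented for illustration.
dt, T = 0.1, 50.0                # time step and duration (ms)
tau_syn, g_step = 5.0, 0.5       # decay constant (ms), per-spike increment
E_syn, V = 0.0, -70.0            # excitatory reversal potential, held V (mV)
pre_spikes = {10.0, 20.0, 22.0}  # presynaptic spike times (ms)

g, current = 0.0, []
for step in range(round(T / dt)):
    if round(step * dt, 1) in pre_spikes:
        g += g_step              # conductance jumps on each spike
    g += dt * (-g / tau_syn)     # exponential decay between spikes
    current.append(g * (E_syn - V))
print(f"peak synaptic current: {max(current):.2f}")
```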
This lesson explains how and why integration time steps are implemented as part of a neuronal model.
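The sketch below shows why the step size matters: Euler integration of the simple membrane equation dV/dt = -V / tau drifts away from the exact solution when dt is large; the numbers are arbitrary.

```python
# Effect of the integration time step: Euler integration of dV/dt = -V / tau
# compared against the exact exponential solution. Values are arbitrary.
import numpy as np

tau, T, V0 = 10.0, 50.0, 1.0

def euler(dt):
    V = V0
    for _ in range(round(T / dt)):
        V += dt * (-V / tau)  # one forward Euler step
    return V

exact = V0 * np.exp(-T / tau)
for dt in (5.0, 0.1):
    print(f"dt = {dt:>4}: Euler = {euler(dt):.5f}, exact = {exact:.5f}")
# A large step (dt = 5 ms) badly undershoots; dt = 0.1 ms is close to exact.
```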
In this lesson, you will learn about neural spike trains, which can be characterized as having a Poisson distribution.
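Such spike trains are easy to generate in simulation: in each small time bin a spike occurs with probability rate * dt, as in the sketch below (rate and duration are arbitrary).

```python
# Poisson spike train sketch: per-bin spike probability is rate * dt.
# Rate, bin size, and duration are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
rate, dt, T = 20.0, 0.001, 10.0                 # Hz, s, s

spikes = rng.random(round(T / dt)) < rate * dt  # Bernoulli approximation
print(f"expected ~{rate * T:.0f} spikes, got {spikes.sum()}")
```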
This lesson covers spike-rate adaptation, the process by which a neuron's firing rate decays to a low, steady-state frequency during the sustained encoding of a stimulus.
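The sketch below adds a simple adaptation variable to an LIF neuron: each spike increments a slowly decaying current that subtracts from the drive, so interspike intervals lengthen from an initial burst toward a lower steady rate. All parameters are illustrative.

```python
# Spike-rate adaptation sketch: an adaptation current a grows with each
# spike and decays slowly, lowering the firing rate over time.
# Parameters are illustrative.
import numpy as np

tau_m, tau_a = 10.0, 100.0                   # membrane / adaptation tau (ms)
V_rest, V_th, V_reset = -70.0, -54.0, -70.0  # potentials (mV)
dt, T, I, b = 0.1, 500.0, 30.0, 2.0          # step, duration, drive, a-increment

V, a, spike_times = V_rest, 0.0, []
for step in range(round(T / dt)):
    V += dt / tau_m * (-(V - V_rest) + I - a)  # adaptation opposes the drive
    a += dt * (-a / tau_a)                     # slow decay of adaptation
    if V >= V_th:
        spike_times.append(step * dt)
        V, a = V_reset, a + b                  # reset and increment adaptation

isis = np.diff(spike_times)
print(f"first ISI = {isis[0]:.1f} ms, last ISI = {isis[-1]:.1f} ms")  # ISIs lengthen
```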