This lesson describes spike-timing-dependent plasticity (STDP), a biological process that adjusts the strength of connections between neurons in the brain, and how this process can be implemented or mimicked in a computational model. You will also find links to practical exercises at the bottom of this page.
This lesson provides a brief introduction to the Computational Modeling of Neuronal Plasticity.
In this lesson, you will be introduced to a type of neuronal model known as the leaky integrate-and-fire (LIF) model.
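As a rough illustration of the kind of model this lesson introduces, here is a minimal LIF sketch integrated with the forward Euler method; the parameter values (tau_m, V_th, I_ext, and so on) are illustrative assumptions, not the lesson's own settings.

```python
# Minimal leaky integrate-and-fire (LIF) sketch; all values are assumed.
tau_m   = 20e-3    # membrane time constant (s)
V_rest  = -70e-3   # resting potential (V)
V_reset = -75e-3   # reset potential after a spike (V)
V_th    = -54e-3   # spike threshold (V)
R_m     = 10e6     # membrane resistance (ohm)
I_ext   = 2e-9     # constant input current (A)
dt, T   = 1e-4, 0.5

V, spike_times = V_rest, []
for i in range(int(T / dt)):
    # Forward Euler step of: tau_m * dV/dt = -(V - V_rest) + R_m * I_ext
    V += dt * (-(V - V_rest) + R_m * I_ext) / tau_m
    if V >= V_th:              # threshold crossing -> spike and reset
        spike_times.append(i * dt)
        V = V_reset

print(f"{len(spike_times)} spikes in {T} s of simulated time")
```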
This lesson goes over various potential inputs to neuronal synapses, loci of neural communication.
This lesson explains how and why integration time steps are implemented as part of a neuronal model.
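A small sketch of the idea, under assumed parameter values: shrinking the Euler integration time step brings the simulated membrane decay closer to the exact exponential solution.

```python
import math

# Compare forward-Euler decay toward rest for two time steps (values assumed).
tau_m, V_rest, V0, T = 20e-3, -70e-3, -55e-3, 0.1

def euler_final_v(dt):
    V = V0
    for _ in range(int(round(T / dt))):
        V += dt * (-(V - V_rest) / tau_m)   # dV/dt = -(V - V_rest) / tau_m
    return V

exact = V_rest + (V0 - V_rest) * math.exp(-T / tau_m)
for dt in (1e-3, 1e-4):
    print(f"dt = {dt:g} s -> error vs exact solution: {abs(euler_final_v(dt) - exact):.2e} V")
```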
In this lesson, you will learn about neural spike trains, which can be characterized as Poisson processes.
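A common way to generate such a spike train in simulation is to draw one Bernoulli sample per time bin with spike probability approximately rate * dt; the rate and bin width below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson spike train: P(spike in a bin of width dt) ~ rate * dt, valid when
# rate * dt << 1. Parameter values are illustrative assumptions.
rate, dt, T = 20.0, 1e-3, 10.0            # 20 Hz, 1 ms bins, 10 s
spikes = rng.random(int(T / dt)) < rate * dt

print(f"expected ~{rate * T:.0f} spikes, got {spikes.sum()}")
```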
This lesson covers spike-rate adaptation, the process by which a neuron's firing rate decays to a low, steady-state frequency during the sustained encoding of a stimulus.
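One way to capture this in an LIF model is a spike-triggered adaptation conductance that decays between spikes, as in the sketch below; the parameter values and the size of the spike-triggered increment are assumptions for illustration.

```python
import numpy as np

# LIF neuron with a spike-rate-adaptation conductance g_sra: each spike
# increments g_sra, which decays between spikes and slows later firing,
# so interspike intervals lengthen toward a steady state. Values assumed.
tau_m, tau_sra = 20e-3, 100e-3
V_rest, V_reset, V_th, E_K = -70e-3, -80e-3, -54e-3, -80e-3
R_m, I_ext, dt, T = 10e6, 2.1e-9, 1e-4, 1.0

V, g_sra, spikes = V_rest, 0.0, []
for i in range(int(T / dt)):
    dV = (-(V - V_rest) - R_m * g_sra * (V - E_K) + R_m * I_ext) / tau_m
    V += dV * dt
    g_sra -= g_sra / tau_sra * dt          # adaptation conductance decays
    if V >= V_th:
        spikes.append(i * dt)
        V = V_reset
        g_sra += 2e-9                      # spike-triggered increment (assumed)

isis = np.diff(spikes)
print(f"first ISI {isis[0]*1e3:.1f} ms, last ISI {isis[-1]*1e3:.1f} ms")
```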
This lesson provides a brief explanation of how to implement a neuron's refractory period in a computational model.
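A simple way to add an absolute refractory period to an LIF update loop is to hold the neuron at its reset potential for a fixed time after each spike, as sketched below with assumed parameter values.

```python
# After each spike, clamp the neuron at the reset potential for t_ref seconds
# by counting down a refractory timer. All parameter values are assumptions.
tau_m, V_rest, V_reset, V_th = 20e-3, -70e-3, -80e-3, -54e-3
R_m, I_ext, dt, T, t_ref = 10e6, 3e-9, 1e-4, 0.5, 5e-3

V, refrac_left, n_spikes = V_rest, 0.0, 0
for _ in range(int(T / dt)):
    if refrac_left > 0:
        refrac_left -= dt            # still refractory: hold at reset
        V = V_reset
        continue
    V += dt * (-(V - V_rest) + R_m * I_ext) / tau_m
    if V >= V_th:
        n_spikes += 1
        V = V_reset
        refrac_left = t_ref          # start the refractory countdown

print(f"{n_spikes} spikes with a {t_ref*1e3:.0f} ms refractory period")
```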
In this lesson, you will learn a computational description of spike-timing-dependent plasticity (STDP), the process that tunes the strength of connections between neurons.
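A minimal sketch of a pair-based STDP rule: the weight change depends on the sign and size of the interval between pre- and postsynaptic spikes, with exponentially decaying windows. The amplitudes and time constants below are assumptions, not the lesson's values.

```python
import math

# Pair-based STDP: potentiation when pre precedes post, depression otherwise.
# Amplitudes and time constants are illustrative assumptions.
A_plus, A_minus = 0.01, 0.012
tau_plus = tau_minus = 20e-3

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair, delta_t = t_post - t_pre."""
    if delta_t > 0:                  # pre before post -> potentiation
        return A_plus * math.exp(-delta_t / tau_plus)
    return -A_minus * math.exp(delta_t / tau_minus)   # post before pre -> depression

for dt_pair in (5e-3, 20e-3, -5e-3, -20e-3):
    print(f"t_post - t_pre = {dt_pair*1e3:+.0f} ms -> dw = {stdp_dw(dt_pair):+.4f}")
```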
This lesson reviews theoretical and mathematical descriptions of correlated spike trains.
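One standard construction for correlated Poisson trains, sketched below under assumed parameters, is to thin a common "mother" train: each child keeps each mother spike with probability c, which gives every child the target rate and a pairwise spike-count correlation of roughly c.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated Poisson spike trains by thinning a shared mother train.
# Rate, correlation, bin width, and duration are illustrative assumptions.
rate, c, dt, T = 20.0, 0.3, 1e-3, 50.0
bins = int(T / dt)

mother = rng.random(bins) < (rate / c) * dt      # mother train at rate / c
train_a = mother & (rng.random(bins) < c)        # each child keeps a spike with prob. c
train_b = mother & (rng.random(bins) < c)

r = np.corrcoef(train_a.astype(float), train_b.astype(float))[0, 1]
print(f"child rates ~ {train_a.mean()/dt:.1f} Hz and {train_b.mean()/dt:.1f} Hz")
print(f"spike-count correlation per bin: {r:.3f} (target ~ {c})")
```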
This lesson investigates the effect of correlated spike trains on spike-timing-dependent plasticity (STDP).
This lesson goes over synaptic normalisation, the homeostatic process by which a neuron scales its synaptic input weights up or down so that their total strength stays roughly constant.
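A minimal sketch of multiplicative normalisation, under the assumption of a fixed target for the summed weight: after a plasticity update, all weights are rescaled by a common factor so their sum returns to that target.

```python
import numpy as np

# Multiplicative synaptic normalisation; the target total and the example
# weight vector are illustrative assumptions.
w_total = 1.0                             # target summed weight onto the neuron
w = np.array([0.10, 0.35, 0.20, 0.55])    # weights after a plasticity update

w *= w_total / w.sum()                    # rescale by a common factor
print(w.round(3), w.sum())
```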
In this lesson, you will learn about the intrinsic plasticity of single neurons.
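As a very rough illustration of the homeostatic flavour of intrinsic plasticity, the sketch below slowly adjusts a neuron's spike threshold toward a target firing rate; the update rule, learning rate, and target are assumptions, not the lesson's model.

```python
# Toy intrinsic-plasticity rule: firing too fast raises the threshold,
# firing too slowly lowers it. All values are illustrative assumptions.
target_rate = 5.0      # Hz
eta = 1e-4             # threshold learning rate (V per Hz, assumed)

def update_threshold(V_th, measured_rate):
    return V_th + eta * (measured_rate - target_rate)

V_th = -54e-3
for rate in (12.0, 9.0, 7.0, 5.5):
    V_th = update_threshold(V_th, rate)
    print(f"measured {rate:.1f} Hz -> V_th = {V_th*1e3:.2f} mV")
```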
This lesson covers short-term facilitation, a process whereby a neuron's synaptic transmission is enhanced for a short (sub-second) period.
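A compact sketch of facilitation in the style of a release-probability variable that jumps at each presynaptic spike and decays back between spikes, so closely spaced spikes transmit more strongly; the baseline and time constant are assumptions.

```python
import math

# Short-term facilitation sketch; U and tau_f are illustrative assumptions.
U, tau_f = 0.2, 0.5            # baseline efficacy, facilitation decay (s)

def facilitated_amplitudes(spike_times):
    u, last_t, amps = U, None, []
    for t in spike_times:
        if last_t is not None:
            u = U + (u - U) * math.exp(-(t - last_t) / tau_f)  # decay toward U
        amps.append(u)          # efficacy used for this spike's transmission
        u += U * (1 - u)        # spike-triggered facilitation jump
        last_t = t
    return amps

print(facilitated_amplitudes([0.0, 0.02, 0.04, 0.06]))   # 50 Hz burst grows
```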
This lesson describes short-term depression, a reduction of synaptic information transfer between neurons.
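The mirror-image sketch for depression, again under assumed parameter values: a resource variable is depleted by each spike and recovers between spikes, so rapid firing transiently weakens transmission.

```python
import math

# Short-term depression sketch; U and tau_d are illustrative assumptions.
U, tau_d = 0.5, 0.8            # fraction released per spike, recovery time (s)

def depressed_amplitudes(spike_times):
    x, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            x = 1 - (1 - x) * math.exp(-(t - last_t) / tau_d)  # recovery toward 1
        amps.append(U * x)      # synaptic efficacy at this spike
        x -= U * x              # resource depleted by the spike
        last_t = t
    return amps

print(depressed_amplitudes([0.0, 0.02, 0.04, 0.06]))   # 50 Hz burst weakens
```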
This lesson briefly wraps up the course on Computational Modeling of Neuronal Plasticity.
This lightning talk describes an automated pipeline for positron emission tomography (PET) data.
This session introduces the PET-to-BIDS (PET2BIDS) library, a toolkit designed to simplify the conversion and preparation of PET imaging datasets into BIDS-compliant formats. It supports multiple data types and formats (e.g., DICOM, ECAT7+, nifti, JSON), integrates seamlessly with Excel-based metadata, and provides automated routines for metadata updates, blood data conversion, and JSON synchronization. PET2BIDS improves human readability by mapping complex reconstruction names into standardized, descriptive labels and offers extensive documentation, examples, and video tutorials to make adoption easier for researchers.
This session dives into practical PET tooling on BIDS data—showing how to run motion correction, register PET↔MRI, extract time–activity curves, and generate standardized PET-BIDS derivatives with clear QC reports. It introduces modular BIDS Apps (head-motion correction, TAC extraction), a full pipeline (PETPrep), and a PET/MRI defacer, with guidance on parameters, outputs, provenance, and why Dockerized containers are the reliable way to run them at scale.