This lesson discusses the need for transparent, large-scale collaborative science and gives users a tutorial on using Synapse to facilitate reusable and reproducible research.
This lecture discusses what defines an integrative approach to research and methods, including the study designs and models that are appropriate for bridging data domains, a necessity for whole-person modelling.
Similarity Network Fusion (SNF) is a computational method for integrating data across different kinds of measurements, designed to exploit both the shared and the complementary information in different data types. This workshop walks participants through running SNF on EEG and genomic data using RStudio.
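For readers who want a feel for the method before the workshop, here is a minimal sketch of SNF's cross-diffusion step written in Python with NumPy (the workshop itself uses RStudio). The toy "EEG" and "genomic" matrices, the Gaussian-kernel affinities, and all parameter values are illustrative assumptions, not the workshop's data or code.

```python
# Minimal sketch of Similarity Network Fusion (SNF), assuming toy data;
# the workshop itself runs SNF in RStudio on real EEG and genomic data.
import numpy as np

def affinity(X, sigma=0.5):
    """Gaussian-kernel affinity matrix from a samples-by-features matrix."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2 * (d2.mean() + 1e-12)))
    np.fill_diagonal(W, 0)
    return W

def normalize(W):
    """Row-normalize so each row sums to 1 (full transition matrix P)."""
    return W / W.sum(axis=1, keepdims=True)

def knn_kernel(W, k=5):
    """Sparse local kernel S: keep only each sample's k strongest affinities."""
    S = np.zeros_like(W)
    for i, row in enumerate(W):
        nn = np.argsort(row)[-k:]
        S[i, nn] = row[nn]
    return normalize(S)

def snf(views, k=5, iterations=20):
    """Cross-diffusion: each view's network is updated through the others."""
    P = [normalize(W) for W in views]
    S = [knn_kernel(W, k) for W in views]
    for _ in range(iterations):
        P_new = []
        for v in range(len(views)):
            others = np.mean([P[u] for u in range(len(views)) if u != v], axis=0)
            P_new.append(S[v] @ others @ S[v].T)
        P = [normalize(p) for p in P_new]
    return np.mean(P, axis=0)  # fused similarity network

# Toy example with made-up "EEG" and "genomic" feature matrices (50 subjects).
rng = np.random.default_rng(0)
eeg, genomic = rng.normal(size=(50, 20)), rng.normal(size=(50, 100))
fused = snf([affinity(eeg), affinity(genomic)])
print(fused.shape)  # (50, 50)
```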
This lesson provides an introduction to the International Neuroinformatics Coordinating Facility (INCF), its mission towards FAIR neuroscience, and future directions.
This brief video provides an introduction to the third session of INCF's Neuroinformatics Assembly 2023, focusing on how to streamline cross-platform data integration in a neuroscientific context.
This final lesson of the course consists of the panel discussion for the Streamlining Cross-Platform Data Integration session, held during the first day of INCF's Neuroinformatics Assembly 2023.
This lightning talk describes the heterogeneity of the MR field in terms of scanner types, data formats, protocols, and software/hardware versions, as well as the challenges and opportunities in unifying these datasets under a common interface, MRdataset.
This session covers the framework of the International Brain Lab (IBL) and the data architecture used for this project.
This lesson breaks down the principles of Bayesian inference and how they relate to cognitive processes and functions such as learning and perception. It then explains how cognitive models can be built using Bayesian statistics to investigate how our brains interface with their environment.
This lesson corresponds to slides 1-64 in the PDF below.
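As a compact reference for the core idea in this lesson, Bayes' rule applied to perception can be written as below; the symbols s (stimulus) and u (sensory input) are illustrative notation, not necessarily the lecture's own.

```latex
\[
\underbrace{p(s \mid u)}_{\text{posterior belief}}
\;=\;
\frac{\overbrace{p(u \mid s)}^{\text{likelihood}}\;
      \overbrace{p(s)}^{\text{prior}}}
     {p(u)}
\qquad\text{where}\qquad
p(u) = \int p(u \mid s)\, p(s)\, \mathrm{d}s .
\]
```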
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
This lesson corresponds to slides 65-90 of the PDF below.
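To give a concrete flavour of the filters covered in this tutorial, below is a minimal Python sketch of a two-level, binary HGF-style belief update driven by precision-weighted prediction errors. The fixed volatility parameter omega, the simulated input sequence, and the reduction to two levels are assumptions for illustration, not the tutorial's own model or data.

```python
# Minimal sketch of a two-level binary HGF-style belief update, assuming a
# fixed volatility parameter omega; illustrative only, not the tutorial's code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hgf_binary(inputs, omega=-2.0, mu2=0.0, sigma2=1.0):
    """Track the trajectory of the belief mu2 (log-odds that the input is 1)."""
    trajectory = []
    for u in inputs:
        muhat1 = sigmoid(mu2)                    # prediction at the input level
        delta1 = u - muhat1                      # prediction error
        pihat2 = 1.0 / (sigma2 + np.exp(omega))  # predicted precision at level 2
        pi2 = pihat2 + muhat1 * (1.0 - muhat1)   # posterior precision
        mu2 = mu2 + delta1 / pi2                 # precision-weighted update
        sigma2 = 1.0 / pi2
        trajectory.append(mu2)
    return np.array(trajectory)

# Simulated binary observations whose underlying probability switches mid-way.
rng = np.random.default_rng(1)
inputs = np.r_[rng.binomial(1, 0.8, 100), rng.binomial(1, 0.2, 100)]
beliefs = sigmoid(hgf_binary(inputs))            # belief that the next input is 1
print(beliefs[:5], beliefs[-5:])
```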
This lecture covers key post-war developments in the science of the mind, focusing first on the cognitive revolution and concluding with living machines.
This lecture provides an overview of depression (epidemiology and course of the disorder), clinical presentation, somatic co-morbidity, and treatment options.
This lesson is part 1 of 2 of a tutorial on statistical models for neural data.
What is the difference between attention and consciousness? This lecture describes the scientific meaning of consciousness, journeys on the search for neural correlates of visual consciousness, and explores the possibility of consciousness in other beings and even non-biological structures.
This lesson provides an overview of how to construct computational pipelines for neurophysiological data using DataJoint.
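As a taste of what the lesson covers, below is a minimal sketch of a DataJoint pipeline in Python; the schema name, table definitions, and the placeholder spike-rate computation are illustrative assumptions rather than the lesson's actual pipeline, and a database connection must be configured before the schema can be created.

```python
# Minimal sketch of a DataJoint pipeline; schema and table names are made up
# for illustration, and database credentials must be configured beforehand.
import datajoint as dj
import numpy as np

schema = dj.schema('tutorial_pipeline')

@schema
class Session(dj.Manual):
    definition = """
    # one experimental recording session
    session_id   : int
    ---
    session_date : date
    """

@schema
class SpikeTrain(dj.Imported):
    definition = """
    -> Session
    ---
    spike_times : longblob   # spike times in seconds
    """
    def make(self, key):
        # A real pipeline would read from the acquisition system;
        # here we insert synthetic spike times as a placeholder.
        self.insert1(dict(key, spike_times=np.sort(np.random.rand(100) * 60)))

@schema
class FiringRate(dj.Computed):
    definition = """
    -> SpikeTrain
    ---
    mean_rate : float   # spikes per second
    """
    def make(self, key):
        spikes = (SpikeTrain & key).fetch1('spike_times')
        self.insert1(dict(key, mean_rate=len(spikes) / 60.0))

# Typical usage: insert sessions manually, then populate downstream tables.
# Session.insert1({'session_id': 1, 'session_date': '2023-01-01'})
# SpikeTrain.populate(); FiringRate.populate()
```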
This lesson delves into the structure of one of the brain's most elemental computational units, the neuron, and how that structure influences computational neural network models.
Following the previous lesson on neuronal structure, this lesson discusses neuronal function, particularly focusing on spike triggering and propagation.
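For a concrete illustration of the spike-triggering mechanism discussed here, a minimal leaky integrate-and-fire (LIF) simulation is sketched below in Python; all parameter values (membrane time constant, threshold, input current) are illustrative assumptions and are not taken from the lesson.

```python
# Minimal leaky integrate-and-fire (LIF) sketch; parameter values are
# illustrative assumptions, not taken from the lesson.
import numpy as np

def simulate_lif(I, dt=1e-4, tau=20e-3, v_rest=-65e-3, v_reset=-70e-3,
                 v_thresh=-50e-3, R=100e6):
    """Integrate dv/dt = (v_rest - v + R*I) / tau and emit spikes at threshold."""
    v = v_rest
    spikes, trace = [], []
    for t, i_t in enumerate(I):
        v += dt * (v_rest - v + R * i_t) / tau
        if v >= v_thresh:          # spike is triggered when threshold is crossed
            spikes.append(t * dt)
            v = v_reset            # membrane potential resets after the spike
        trace.append(v)
    return np.array(trace), spikes

# Constant 200 pA input for 0.5 s produces a regular spike train.
current = np.full(5000, 200e-12)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 0.5 s")
```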
This lesson goes over the basic mechanisms of neural synapses, the junctions between neurons across which signals are transmitted.
While the previous lesson in the Neuro4ML course dealt with the mechanisms involved in individual synapses, this lesson discusses how synapses and their neurons' firing patterns may change over time.
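To make the idea of activity-dependent change concrete, below is a minimal sketch of pair-based spike-timing-dependent plasticity (STDP) in Python; the time constants, amplitudes, and example spike times are assumptions for illustration and do not come from the lesson.

```python
# Minimal pair-based STDP sketch: the weight change depends on the relative
# timing of pre- and postsynaptic spikes. Parameters are illustrative assumptions.
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Weight change for one pre/post spike pair (times in seconds)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau)
    else:         # post before pre -> depression
        return -a_minus * np.exp(dt / tau)

def total_weight_change(pre_spikes, post_spikes, **kwargs):
    """Sum the pairwise contributions over all pre/post spike pairs."""
    return sum(stdp_dw(tp, to, **kwargs) for tp in pre_spikes for to in post_spikes)

# Pre spikes that consistently precede post spikes by 5 ms strengthen the synapse.
pre = np.arange(0.0, 0.5, 0.05)
post = pre + 0.005
print(total_weight_change(pre, post))  # positive -> net potentiation
```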
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes the properties that emerge when these units are connected in larger networks.