This lecture and tutorial focuses on measuring human functional brain networks, as well as how to account for inherent variability within those networks.
This lecture presents an overview of functional brain parcellations, as well as a set of tutorials on bootstrap aggregation of stable clusters (BASC) for fMRI brain parcellation.
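The core idea behind BASC can be summarized in a few lines: resample the time series with a bootstrap, cluster each replicate, record how often every pair of regions lands in the same cluster, and then cluster that stability matrix itself. The sketch below illustrates this on invented toy data using scikit-learn's KMeans; it is a schematic of the idea, not the tutorial's actual code, and all parameter values are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data (invented): 200 "voxels" with 50 time points each.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 50))

n_boot, k = 20, 4
coassign = np.zeros((len(data), len(data)))
for _ in range(n_boot):
    # Resample time points with replacement. (The real method uses a
    # circular block bootstrap; plain resampling keeps the sketch short.)
    idx = rng.integers(0, data.shape[1], data.shape[1])
    labels = KMeans(n_clusters=k, n_init=5).fit_predict(data[:, idx])
    coassign += labels[:, None] == labels[None, :]

# Stability matrix: fraction of bootstraps in which each pair co-clusters.
stability = coassign / n_boot

# Stable clusters: cluster the stability matrix itself.
final_labels = KMeans(n_clusters=k, n_init=5).fit_predict(stability)
```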
Neuronify is an educational tool meant to build intuition for how neurons and neural networks behave. You can use it to combine neurons with different connections, just like the ones we have in our brain, and explore how changes in single cells lead to behavioral changes in important networks. Neuronify is based on an integrate-and-fire model of neurons, one of the simplest neuron models in existence. It focuses on the spike timing of a neuron and ignores the details of the action potential dynamics. These neurons are modeled as simple RC circuits: when the membrane potential rises above a certain threshold, a spike is generated and the voltage is reset to the resting potential. The spike then signals other neurons through the cell's synapses.
Neuronify aims to provide a low entry point to simulation-based neuroscience.
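The integrate-and-fire dynamics Neuronify is built on fit in a few lines of code. The sketch below simulates a single leaky integrate-and-fire neuron driven by a constant current; it is an illustrative re-implementation of the textbook model, not Neuronify's own code, and the parameter values are arbitrary.

```python
# Leaky integrate-and-fire neuron as an RC circuit (illustrative values).
dt = 0.1          # time step (ms)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
r_m = 10.0        # membrane resistance (MOhm)
i_ext = 2.0       # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(500 / dt)):            # simulate 500 ms
    # RC dynamics: the potential leaks toward rest while the input charges it.
    v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
    if v >= v_thresh:                        # threshold crossed: fire
        spike_times.append(step * dt)        # record the spike time
        v = v_rest                           # reset to resting potential

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```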
Maximize Your Research With Cloud Workspaces is a talk aimed at researchers looking for innovative ways to set up and execute their life science data analyses in a collaborative, extensible, open-source cloud environment. The panel discussion features MetaCell and scientists from leading universities, who share their experiences of advanced analysis and collaborative learning in the cloud.
In this lecture, you will learn about virtual research environments (VREs), that is, computing platforms and the software stacks behind them, as well as their technical limitations and the security measures that should be considered during implementation.
This lecture provides a detailed description of how to process workflows in the virtual research environment (VRE), including approaches to standardization, metadata, containerization, and the construction and maintenance of scientific pipelines.
This lesson provides an overview of how to conceptualize, design, implement, and maintain neuroscientific pipelines via the cloud-based computational reproducibility platform Code Ocean.
In this workshop talk, you will receive a tour of the Code Ocean ScienceOps Platform, a centralized cloud workspace for all teams.
This lecture covers a wide range of aspects regarding neuroinformatics and data governance, describing both their historical developments and current trajectories. Particular tools, platforms, and standards to make your research more FAIR are also discussed.
This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.
This lecture discusses how FAIR practices affect personalized data models, including the relevant workflows, current challenges, and how these practices can be improved.
In this talk, you will learn how brainlife.io works, and how it can be applied to neuroscience data.
As a part of NeuroHackademy 2020, this lecture delves into cloud computing, focusing on Amazon Web Services.
This talk describes the NIH-funded SPARC Data Structure, and how this project navigates ontology development while keeping in mind the FAIR science principles.
This lesson provides an overview of the current status in the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology.
This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into a description of ontologies and knowledge graphs.
This lecture covers structured data, databases, federating neuroscience-relevant databases, and ontologies.
This lecture covers FAIR atlases, including their background and construction, and explains how new atlases can be created in line with the FAIR principles.
This lecture focuses on ontologies for clinical neurosciences.
This lecture covers the description and characterization of an input-output relationship in an information-theoretic context.
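In this setting the central quantity is the mutual information I(X;Y) between input X and output Y. As a minimal illustration (a toy binary channel made up for this sketch, not material from the lecture), the snippet below computes I(X;Y) directly from a joint probability table.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)    # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = joint > 0                           # skip zero entries to avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy input-output table: a noisy binary channel (invented values).
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(f"I(X;Y) = {mutual_information(p):.3f} bits")
```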