This lesson discusses FAIR principles and methods currently in development for assessing FAIRness.
This lecture covers a wide range of topics in neuroinformatics and data governance, describing both their historical development and current trajectories. Particular tools, platforms, and standards for making your research more FAIR are also discussed.
This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.
This lecture discusses how FAIR practices affect personalized data models, including workflows, challenges, and how to improve these practices.
In this talk, you will learn how brainlife.io works, and how it can be applied to neuroscience data.
As a part of NeuroHackademy 2020, this lecture delves into cloud computing, focusing on Amazon Web Services.
This talk presents an overview of CBRAIN, a web-based platform that allows neuroscientists to perform computationally intensive data analyses by connecting them to high-performance computing facilities across Canada and around the world.
This lecture describes how to build research workflows, including a demonstration of using DataJoint Elements to build data pipelines.
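For a flavor of what such a pipeline looks like, here is a minimal sketch using the core DataJoint Python API; the schema name, tables, and attributes are invented for illustration (they are not taken from the lecture), and the snippet assumes a database connection already configured via dj.config.

```python
import datajoint as dj

# Create a schema on the configured database (connection settings assumed).
schema = dj.Schema('tutorial_pipeline')

@schema
class Session(dj.Manual):
    definition = """
    # An experimental recording session (hypothetical table)
    session_id   : int
    ---
    session_date : date
    subject      : varchar(32)
    """

@schema
class SpikeCount(dj.Computed):
    definition = """
    # Number of spikes detected in a session (hypothetical table)
    -> Session
    ---
    n_spikes : int
    """

    def make(self, key):
        # A real pipeline would load and process raw data here;
        # this sketch just inserts a placeholder value.
        self.insert1(dict(key, n_spikes=0))

# Running populate() computes SpikeCount for all unprocessed sessions:
# SpikeCount.populate()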
This lesson gives an introductory presentation on how data science can help with scientific reproducibility.
This lecture covers how to make modeling workflows FAIR by working through a practical example, dissecting the steps within the workflow, and detailing the tools and resources used at each step.
This lecture covers post-war developments in the science of the mind, beginning with the cognitive revolution and concluding with living machines.
This lesson aims to define computational neuroscience in general terms, while providing specific examples of highly successful computational neuroscience projects.
This lesson delves into the structure of one of the brain's most elemental computational units, the neuron, and how that structure influences computational neural network models.
In this lesson, you will learn how machine learning researchers and neuroscientists construct abstract computational models based on various neurophysiological signalling properties.
This lesson describes spike timing-dependent plasticity (STDP), a biological process that adjusts the strength of connections between neurons in the brain, and how one can implement or mimic this process in a computational model. You will also find links for practical exercises at the bottom of this page.
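As a rough illustration of the pair-based STDP rule this lesson describes, here is a minimal sketch; the amplitude and time-constant values are illustrative assumptions, not values from the lesson.

```python
import numpy as np

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP rule with exponential windows (illustrative parameters).

    delta_t = t_post - t_pre, in ms. Pre-before-post (delta_t > 0)
    potentiates the synapse; post-before-pre (delta_t < 0) depresses it.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau_plus),    # potentiation branch
        -a_minus * np.exp(delta_t / tau_minus),  # depression branch
    )

# Example: a pre-spike 5 ms before a post-spike strengthens the weight,
# while the reverse ordering weakens it.
print(stdp_weight_change(5.0))   # positive weight change
print(stdp_weight_change(-5.0))  # negative weight change
```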
In this lesson, you will learn about some of the many methods for training spiking neural networks (SNNs) that either make no use of gradients at all or use gradients only in a limited or constrained way.
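To make the gradient-free idea concrete, below is a minimal sketch of one such method, random weight perturbation (simple hill climbing), applied to a toy spiking unit; the network, task, and all hyperparameters are assumptions for illustration, not methods taken from the lesson.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_snn(w, inputs, v_thresh=1.0, steps=50):
    """Count output spikes of a single leaky spiking unit with weights w."""
    v, n_spikes = 0.0, 0
    for _ in range(steps):
        v = 0.9 * v + w @ inputs  # leaky integration of weighted input
        if v >= v_thresh:
            n_spikes += 1
            v = 0.0               # reset after a spike
    return n_spikes

def loss(w, inputs, target_spikes=10):
    # Squared error between produced and desired spike count (toy task).
    return (run_snn(w, inputs) - target_spikes) ** 2

inputs = rng.random(5) * 0.1
w = rng.normal(0, 0.1, 5)
best = loss(w, inputs)

for _ in range(200):
    candidate = w + rng.normal(0, 0.05, 5)  # random perturbation, no gradients
    c_loss = loss(candidate, inputs)
    if c_loss <= best:                      # keep only non-worsening moves
        w, best = candidate, c_loss

print("final loss:", best)
```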
In this lesson, you will learn how to train spiking neural networks (SNNs) with a surrogate gradient method.
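Here is a minimal sketch of the surrogate gradient idea, assuming a PyTorch setting: the forward pass uses a hard, non-differentiable spike threshold, while the backward pass substitutes a smooth surrogate derivative (a fast-sigmoid shape with an assumed steepness). The specific surrogate and training setup used in the lesson may differ.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate gradient
    in the backward pass, so backpropagation can flow through spikes."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0  # surrogate steepness (assumed value)
        surrogate = 1.0 / (beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

# One thresholding step: gradients reach v via the surrogate, so a full
# SNN built from such steps can be trained with ordinary backpropagation.
v = torch.randn(4, requires_grad=True)
spikes = spike_fn(v - 1.0)  # firing threshold of 1.0 (assumed)
spikes.sum().backward()
print(v.grad)
```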
In this lesson, you will hear about some of the open issues in the field of neuroscience, as well as a discussion of whether neuroscience works, and how we can know.
This lecture covers an introduction to neuron anatomy and signaling, as well as different types of neuron models, including the Hodgkin-Huxley model.
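As a companion to the Hodgkin-Huxley material, here is a minimal forward-Euler sketch of the model using the standard squid-axon parameters; the injected current, step size, and duration are assumed values for illustration.

```python
import numpy as np

# Hodgkin-Huxley membrane equation, forward Euler integration.
# Standard squid-axon constants (mV, ms, uA/cm^2, mS/cm^2, uF/cm^2).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent gating rate functions (standard formulation).
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                        # step and duration in ms (assumed)
V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting-state initial values
I_ext = 10.0                              # constant injected current (assumed)

for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
    I_K  = g_K * n**4 * (V - E_K)         # potassium current
    I_L  = g_L * (V - E_L)                # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)

print(f"final membrane potential: {V:.2f} mV")
```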
This lesson gives an introduction to simple spiking neuron models.
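As a taste of the simplest such model, here is a minimal leaky integrate-and-fire (LIF) sketch; all parameter values are illustrative defaults rather than values from the lesson.

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, R=1.0):
    """Leaky integrate-and-fire neuron driven by an input current array I.

    Dynamics: tau * dV/dt = -(V - v_rest) + R * I,
    with a spike and reset whenever V crosses v_thresh.
    """
    v = v_rest
    spikes, trace = [], []
    for t, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * i_t)  # leaky integration
        if v >= v_thresh:
            spikes.append(t * dt)  # record spike time in ms
            v = v_reset            # reset the membrane after a spike
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold current produces regular spiking.
trace, spike_times = simulate_lif(np.full(1000, 20.0))
print(f"{len(spike_times)} spikes in 100 ms")
```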