This talk enumerates the challenges of data accessibility and reusability inherent in the current scientific publication system and discusses novel approaches to addressing them, such as the EBRAINS Live Papers platform.
This brief video gives an introduction to the eighth session of INCF's Neuroinformatics Assembly 2023, focusing on FAIR data and the role of academic journals.
This talk gives an overview of the perspectives and FAIR-aligned policies of the Public Library of Science (PLOS), a nonprofit, open-access publisher that empowers researchers to accelerate progress in science.
This talk highlights a set of platform technologies, software, and data collections that close the loop and shorten the feedback cycle in research.
This lecture provides an introduction to the course "Cognitive Science & Psychology: Mind, Brain, and Behavior".
This lesson covers the history of neuroscience and machine learning, and the story of how these two seemingly disparate fields are increasingly merging.
In this lesson, you will learn how machine learners and neuroscientists construct abstract computational models based on various neurophysiological signalling properties.
In this lesson, you will learn about some typical neuronal models employed by machine learners and computational neuroscientists, meant to imitate the biophysical properties of real neurons.
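As a concrete illustration of the kind of simplified model such lessons typically cover, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron. The function name and all parameter values are illustrative assumptions, not material taken from the lesson itself.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch (illustrative only).
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_reset=-70e-3, v_threshold=-50e-3, r_m=10e6):
    """Simulate a single LIF neuron driven by an input current (in amperes)."""
    v = v_rest
    voltages, spikes = [], []
    for i_t in input_current:
        # Leaky integration: dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_t) / tau
        if v >= v_threshold:          # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset               # reset the membrane potential
        else:
            spikes.append(0)
        voltages.append(v)
    return np.array(voltages), np.array(spikes)

# Example: constant 2 nA input for 100 ms
v_trace, spike_train = simulate_lif(np.full(100, 2e-9))
```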
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes the properties of those units when they are part of larger networks.
This lesson presents examples of how machine learners and computational neuroscientists design and build neural network models inspired by biological brain systems.
In this lesson, you will learn about different approaches to modeling learning in neural networks, particularly focusing on how system parameters such as firing rates and synaptic weights impact a network.
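One simple way to make the role of firing rates and synaptic weights concrete is a rate-based Hebbian update, sketched below. This is an assumed illustration of the general idea, not code from the lesson; the normalization step and learning rate are arbitrary choices.

```python
# Rate-based Hebbian learning sketch: weights grow when pre- and postsynaptic
# firing rates are correlated, with normalization to keep weights bounded.
import numpy as np

def hebbian_update(weights, pre_rates, post_rates, lr=0.01):
    """One Hebbian step: dW is proportional to post (outer) pre, then normalized."""
    weights = weights + lr * np.outer(post_rates, pre_rates)
    # Normalize each postsynaptic neuron's incoming weights (Oja-like stabilization)
    norms = np.linalg.norm(weights, axis=1, keepdims=True)
    return weights / np.maximum(norms, 1e-12)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 5))   # 5 presynaptic -> 3 postsynaptic neurons
pre = rng.random(5)                       # presynaptic firing rates
post = W @ pre                            # postsynaptic rates (linear rate model)
W = hebbian_update(W, pre, post)
```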
In this lesson, you will learn more about some of the issues inherent in modeling neural spikes, approaches to ameliorate these problems, and the pros and cons of these approaches.
In this lesson, you will learn about some of the many methods for training spiking neural networks (SNNs) that either avoid gradients entirely or use them only in a limited or constrained way.
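As one example of a gradient-free approach of the kind such lessons survey, the sketch below implements a pair-based spike-timing-dependent plasticity (STDP) rule, where the weight change depends only on relative spike times. The parameter values are illustrative assumptions.

```python
# Pair-based STDP sketch: a gradient-free way to adjust SNN weights.
import numpy as np

def stdp_weight_change(pre_spike_times, post_spike_times,
                       a_plus=0.01, a_minus=0.012, tau=20.0):
    """Sum pairwise STDP contributions (spike times in ms)."""
    dw = 0.0
    for t_pre in pre_spike_times:
        for t_post in post_spike_times:
            dt = t_post - t_pre
            if dt > 0:    # pre before post -> potentiation
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:  # post before pre -> depression
                dw -= a_minus * np.exp(dt / tau)
    return dw

print(stdp_weight_change([10.0, 30.0], [12.0, 25.0]))
```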
In this lesson, you will learn how to train spiking neural networks (SNNs) with a surrogate gradient method.
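The core trick behind surrogate gradient training can be sketched in a few lines: the spike is a hard threshold in the forward pass, but its (zero or undefined) derivative is replaced by a smooth surrogate in the backward pass. The sketch below uses a fast-sigmoid surrogate in PyTorch; the class name, scale value, and toy example are assumptions for illustration, not the lesson's own code.

```python
# Surrogate gradient sketch: hard threshold forward, smooth gradient backward.
import torch

class SurrogateSpike(torch.autograd.Function):
    scale = 10.0  # steepness of the surrogate; an illustrative choice

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Heaviside step: spike if the membrane potential crosses 0
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative instead of the true one
        surrogate = 1.0 / (SurrogateSpike.scale * membrane_potential.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply
v = torch.randn(4, requires_grad=True)   # toy membrane potentials
loss = spike_fn(v).sum()
loss.backward()                           # gradients flow through the surrogate
print(v.grad)
```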
This lesson explores how researchers try to understand neural networks, particularly by observing neural activity.
In this lesson, you will learn about the motivation behind manipulating neural activity and the forms such manipulations may take in various experimental designs.
This video briefly goes over the exercises accompanying Week 6 of the Neuroscience for Machine Learners (Neuro4ML) course, Understanding Neural Networks.
This lecture focuses on the structured validation process within computational neuroscience, including the tools, services, and methods involved in simulation and analysis.
This module explains how neurons come together to create the networks that give rise to our thoughts. The totality of our neurons and their connections is called our connectome. Learn how this connectome changes as we learn and how it computes information.
This lecture covers the NIDM data format within BIDS, which makes your datasets more searchable, and explains how to optimize your dataset searches.