This session provides users with an introduction to tools and resources that facilitate the implementation of FAIR in their research.
This video gives a short introduction to the EBRAINS data sharing platform, why it was developed, and how it contributes to open data sharing.
This video explains what metadata is, why it is important, and how you can organize your metadata to increase the FAIRness of your data on EBRAINS.
This video introduces the importance of writing a Data Descriptor to accompany your dataset on EBRAINS. It gives concrete examples of what information to include and highlights how this makes your data more FAIR.
KnowledgeSpace (KS) is a data discoverability portal and neuroscience encyclopedia developed to make it easier for the neuroscience community to find publicly available datasets that adhere to the FAIR Principles. It provides an integrated view of neuroscience concepts found in Wikipedia and NeuroLex, linked with PubMed and 17 of the world's leading neuroscience repositories. In short, KS provides a single point of entry where researchers can search for a neuroscience concept of interest and receive results that include: i. a description of the term found in Wikipedia/NeuroLex, ii. links to publicly available datasets related to the concept of interest, and iii. up-to-date references supporting the concept of interest found in PubMed. APIs are available so that developers of other neuroscience research infrastructures can integrate KS components into their infrastructures. If your repository or your favorite repository is not indexed in KS, please contact us.
In this lesson, users will learn about the importance of proper citation of software resources and tools used in neuroscientific research.
Since their introduction in 2016, the FAIR data principles have gained increasing recognition and adoption in global neuroscience. FAIR defines a set of high-level principles and practices for making digital objects, including data, software and workflows, Findable, Accessible, Interoperable and Reusable. But FAIR is not a specification; it leaves many of the specifics up to individual scientific disciplines to define. INCF has been leading the way in promoting, defining and implementing FAIR data practices for neuroscience. We have been bringing together researchers, infrastructure providers, industry and publishers through our programs and networks.
This lecture provides an introduction to the course "Cognitive Science & Psychology: Mind, Brain, and Behavior".
This lesson covers the history of neuroscience and machine learning, and the story of how these two seemingly disparate fields are increasingly merging.
In this lesson, you will learn how machine learners and neuroscientists construct abstract computational models based on various neurophysiological signalling properties.
In this lesson, you will learn about some typical neuronal models employed by machine learners and computational neuroscientists, meant to imitate the biophysical properties of real neurons.
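To make this concrete, here is a minimal sketch of one such model, a leaky integrate-and-fire (LIF) neuron simulated with simple Euler integration. The parameter values and names below are illustrative assumptions, not taken from the lesson itself.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# All parameter values are illustrative defaults, not taken from the lesson.
tau_m = 20e-3      # membrane time constant (s)
v_rest = -65e-3    # resting potential (V)
v_thresh = -50e-3  # spike threshold (V)
v_reset = -70e-3   # reset potential (V)
r_m = 1e7          # membrane resistance (ohm)
dt = 1e-4          # simulation time step (s)

def simulate_lif(i_input, t_max=0.5):
    """Simulate one LIF neuron driven by a constant input current i_input (A)."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        # Euler step of dV/dt = (-(V - V_rest) + R_m * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_input) / tau_m
        if v >= v_thresh:               # threshold crossing emits a spike...
            spike_times.append(step * dt)
            v = v_reset                 # ...and resets the membrane potential
    return spike_times

print(len(simulate_lif(2e-9)), "spikes in 0.5 s")   # stronger input -> more spikes
```

As in the biophysics it abstracts, a stronger input current drives the membrane to threshold more often and so produces a higher firing rate.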
Whereas the previous two lessons described the biophysical and signalling properties of individual neurons, this lesson describes properties of those units when part of larger networks.
This lesson goes over some examples of how machine learners and computational neuroscientists go about designing and building neural network models inspired by biological brain systems.
In this lesson, you will learn about different approaches to modeling learning in neural networks, particularly focusing on how system parameters such as firing rates and synaptic weights impact a network.
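As a hedged illustration of how firing rates and synaptic weights interact during learning, the sketch below applies a generic rate-based Hebbian update with weight decay; the rule and parameter values are illustrative, not the specific model covered in the lesson.

```python
import numpy as np

def hebbian_update(weights, pre_rates, post_rates, lr=0.01, decay=0.001):
    """One Hebbian step: dW is proportional to post (outer) pre, with decay to keep weights bounded."""
    dw = lr * np.outer(post_rates, pre_rates) - decay * weights
    return weights + dw

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(3, 5))   # weights from 5 presynaptic to 3 postsynaptic units
pre = rng.random(5)                     # presynaptic firing rates (arbitrary units)
for _ in range(100):
    post = np.maximum(w @ pre, 0.0)     # rectified linear rate response of the postsynaptic units
    w = hebbian_update(w, pre, post)
print(w.round(3))
```

The decay term is one simple way to keep a purely correlational rule from growing weights without bound; the lesson discusses such stability considerations in more depth.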
In this lesson, you will learn more about some of the issues inherent in modeling neural spikes, approaches to ameliorate these problems, and the pros and cons of these approaches.
In this lesson, you will learn about some of the many methods to train spiking neural networks (SNNs) that either avoid gradients entirely or use them only in a limited or constrained way.
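One widely used gradient-free approach is spike-timing-dependent plasticity (STDP), in which weight changes depend only on the relative timing of pre- and postsynaptic spikes. The sketch below implements the standard pair-based STDP window; the constants are illustrative assumptions, not values from the lesson.

```python
import numpy as np

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP window; delta_t = t_post - t_pre, in seconds."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau),    # pre fires before post -> potentiation
        -a_minus * np.exp(delta_t / tau),   # post fires before pre -> depression
    )

# Weight changes for a few pre/post spike-time differences
print(stdp_weight_change([-0.04, -0.01, 0.005, 0.03]).round(4))
```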
In this lesson, you will learn how to train spiking neural networks (SNNs) with a surrogate gradient method.
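The core trick is to keep the non-differentiable spike (a Heaviside step) in the forward pass while substituting a smooth surrogate for its derivative in the backward pass. Below is a hedged sketch of that idea in PyTorch; it is a generic illustration, not the course's own implementation, and the fast-sigmoid surrogate and parameter values are assumptions.

```python
import torch

BETA = 10.0  # steepness of the surrogate derivative (illustrative value)

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()          # binary, non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # Replace the Heaviside derivative with a smooth, bounded surrogate.
        surrogate = 1.0 / (BETA * v_minus_thresh.abs() + 1.0) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply

# Toy usage: a single leaky integrator layer unrolled over time and trained end to end.
torch.manual_seed(0)
w = torch.randn(5, 2, requires_grad=True)   # input weights (5 inputs, 2 spiking units)
x = torch.rand(20, 5)                       # 20 time steps of input
v = torch.zeros(2)
alpha, thresh = 0.9, 1.0                    # leak factor and firing threshold
spikes = []
for t in range(x.shape[0]):
    v = alpha * v + x[t] @ w                # leaky membrane update
    s = spike(v - thresh)                   # spike through the surrogate-gradient function
    v = v * (1.0 - s)                       # reset the units that fired
    spikes.append(s)
loss = torch.stack(spikes).sum()            # stand-in for a task loss
loss.backward()                             # gradients flow through the surrogate
print(w.grad.shape)
```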
This lesson explores how researchers try to understand neural networks, particularly by observing neural activity.
In this lesson, you will learn about the motivation behind manipulating neural activity and the forms such manipulations may take in various experimental designs.
This video briefly goes over the exercises accompanying Week 6 of the Neuroscience for Machine Learners (Neuro4ML) course, Understanding Neural Networks.