This talk describes the NIH-funded SPARC Data Structure and how the project navigates ontology development while keeping the FAIR principles in mind.
This lesson provides an overview of the current state of the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology.
This lesson continues from part one of the lecture Ontologies, Databases, and Standards, diving deeper into a description of ontologies and knowledge graphs.
This lecture covers structured data, databases, federating neuroscience-relevant databases, and ontologies.
This lecture covers FAIR atlases, including their background and construction, as well as how they can be created in line with the FAIR principles.
This lecture focuses on ontologies for clinical neurosciences.
This lecture covers using the NIDM data format within BIDS to make your datasets more searchable, as well as how to optimize your dataset searches.
This lecture covers positron emission tomography (PET) imaging and the Brain Imaging Data Structure (BIDS), and how they work together within the PET-BIDS standard to make neuroscience more open and FAIR.
This lecture discusses the FAIR principles as they apply to electrophysiology data and metadata, the building blocks for community tools and standards, platforms and grassroots initiatives, and the challenges therein.
This lecture discusses how to standardize electrophysiology data organization to move towards being more FAIR.
The International Brain Initiative (IBI) is a consortium of the world's major large-scale brain initiatives and other organizations with a vested interest in catalyzing and advancing neuroscience research through international collaboration and knowledge sharing. This workshop introduces the IBI and the efforts of its Data Standards and Sharing Working Group, features keynote lectures on the impact of data standards and sharing on large-scale brain projects, and closes with a discussion on prospects and needs for neural data sharing.
This is the introductory module of the Deep Learning Course at NYU's Center for Data Science (CDS), a course covering the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition.
This module covers the concepts of gradient descent and the backpropagation algorithm and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers concepts associated with neural nets, including rotation and squashing, and is a part of the Deep Learning Course at New York University's Center for Data Science (CDS).
This lesson provides a detailed description of some of the modules and architectures involved in the development of neural networks.
This lecture covers neural net training (tools, classification with neural nets, and PyTorch implementation) and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers the concept of parameter sharing: recurrent and convolutional nets and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers the concept of convolutional nets in practice and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture discusses the properties of natural signals and convolutional nets in practice, and is a part of the Deep Learning Course at NYU's Center for Data Science.
This lecture covers the concept of recurrent neural networks: vanilla and gated (LSTM) and is a part of the Deep Learning Course at NYU's Center for Data Science.