In this lecture, you will learn about current methods, approaches, and challenges in studying human neuroanatomy, particularly through the lens of neuroimaging data such as fMRI and diffusion tensor imaging (DTI).
This lesson provides an overview of the current state of the field of neuroscientific ontologies, presenting examples of data organization and standards, particularly from neuroimaging and electrophysiology.
In this final lecture of the INCF Short Course: Introduction to Neuroinformatics, you will hear about new advances in the application of machine learning methods to clinical neuroscience data. In particular, this talk discusses the performance of SynthSeg, an image segmentation tool for automated analysis of highly heterogeneous brain MRI clinical scans.
This lecture provides an introduction to the Brain Imaging Data Structure (BIDS), a standard for organizing human neuroimaging datasets.
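As a point of reference (not part of the lecture itself), the sketch below shows how a BIDS-organized dataset can be queried with the pybids Python package; the dataset path and entity values are hypothetical.

```python
from bids import BIDSLayout  # pip install pybids

# Hypothetical BIDS dataset layout:
# my_dataset/
# ├── dataset_description.json
# ├── participants.tsv
# └── sub-01/
#     ├── anat/sub-01_T1w.nii.gz
#     └── func/sub-01_task-rest_bold.nii.gz
layout = BIDSLayout("my_dataset")

# Query files by BIDS entities instead of hand-parsing filenames
t1w_files = layout.get(subject="01", suffix="T1w", extension=".nii.gz")
bold_files = layout.get(task="rest", suffix="bold", return_type="filename")
print(t1w_files, bold_files)
```

Because every BIDS dataset follows the same naming and folder conventions, the same query code works unchanged across datasets and tools.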
In this lesson, you will learn about the Python project Nipype, an open-source, community-developed initiative under the umbrella of NiPy. Nipype provides a uniform interface to existing neuroimaging software and facilitates interaction between these packages within a single workflow.
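To illustrate the uniform-interface idea, here is a minimal sketch of a two-step Nipype workflow wrapping FSL tools; the input file name is hypothetical, and FSL must be installed for the nodes to actually run.

```python
from nipype import Node, Workflow
from nipype.interfaces.fsl import BET, FAST

# Wrap two FSL tools in Nipype nodes that expose a common interface
skullstrip = Node(BET(in_file="sub-01_T1w.nii.gz", mask=True), name="skullstrip")
segment = Node(FAST(), name="segment")

# Connect the skull-stripped output to the segmentation input and run the pipeline
wf = Workflow(name="preproc", base_dir="work")
wf.connect(skullstrip, "out_file", segment, "in_files")
wf.run()
```

Swapping in an interface from another package (e.g., SPM or ANTs) changes only the node definitions, not the workflow logic.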
This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivations and processes involved in moving your research computing to the cloud.
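For orientation, the sketch below launches a single compute instance with the AWS boto3 Python SDK; the region, AMI ID, instance type, and key pair name are placeholders, and configured AWS credentials are assumed.

```python
import boto3  # pip install boto3; assumes AWS credentials are already configured

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch one small instance (placeholder AMI ID and key pair name)
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: pick a real AMI for your region
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
)
print("Launched:", instances[0].id)
```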
This lecture gives an overview of how to prepare and preprocess neuroimaging (EEG/MEG) data for use in TVB.
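As a generic illustration of this kind of preprocessing (a sketch using MNE-Python rather than TVB's own tooling; the file name is hypothetical):

```python
import mne  # pip install mne

# Load a raw MEG/EEG recording (hypothetical file name)
raw = mne.io.read_raw_fif("sub-01_task-rest_meg.fif", preload=True)

# Basic cleaning: band-pass filter plus notch filters for line noise
raw.filter(l_freq=1.0, h_freq=40.0)
raw.notch_filter(freqs=[50, 100])

# Cut the continuous recording into fixed-length epochs for later analysis
epochs = mne.make_fixed_length_epochs(raw, duration=2.0, preload=True)
print(epochs)
```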
This book was written to introduce researchers and students in a variety of research fields to the intersection of data science and neuroimaging. It reflects our own experience of doing research at this intersection, as well as our experience working with students and collaborators who come from a variety of backgrounds and have a variety of reasons for wanting to use data science approaches in their work. The tools and ideas that we chose to write about are all ones that we have used in some way in our own research, and many of them we use on a daily basis. This was important to us for a few reasons: first, we want to teach people things that we ourselves find useful. Second, it allowed us to write the book with a focus on solving specific analysis tasks; in many of the chapters you will see that we walk you through ideas while implementing them in code and with data. We believe that this is a good way to learn about data analysis, because it provides a connecting thread from scientific questions through the data and its representation to implementing specific answers to those questions. Finally, we find these ideas compelling and fruitful, which is why we were drawn to them in the first place. We hope that our enthusiasm about the ideas and tools described in this book will be infectious enough to convince readers of their value.
This Jupyter Book is a series of interactive tutorials about quantitative T1 mapping, powered by qMRLab. Most figures are generated with Plot.ly; you can explore them by hovering your mouse over the data, zooming in (click and drag) and out (double click), moving the sliders, and changing the drop-down options. To view the code used to generate the figures in each tutorial, hover your cursor over the top left corner of the frame that contains the tutorial and click the checkbox “All cells” in the popup that appears.
Jupyter Lab notebooks of these tutorials are also available through MyBinder, and inline code modification inside the Jupyter Book is provided by Thebelab. With either option, you can modify the code, change the figures, and regenerate the HTML used to create the tutorial. The Jupyter Book also uses a Script of Scripts (SoS) kernel, which allows the data to be processed using qMRLab in MATLAB/Octave and the figures to be plotted with Plot.ly in Python, all within the same Jupyter notebook.
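For context on what is being fitted in these tutorials, a commonly used model in inversion recovery T1 mapping expresses the measured signal as a function of the inversion time TI; the general two-parameter form below follows standard usage rather than the notation of any particular tutorial:

$$
S(\mathrm{TI}) = a + b\, e^{-\mathrm{TI}/T_1},
$$

where $a$ and $b$ absorb proton density, receive gain, and inversion efficiency, and $T_1$ is the longitudinal relaxation time being mapped.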
This lesson presents simulation software for spatial model neurons and their networks, designed primarily for GPUs.
The lecture covers a brief introduction to neuromorphic engineering, some of the neuromorphic networks that the speaker has developed, and their potential applications, particularly in machine learning.
This lecture highlights the importance of correct annotation and assignment of location, as well as the use of updated atlas resources, to avoid errors in navigation and data interpretation.
We are at the exciting technological stage where it has become feasible to represent the anatomy of an entire human brain at the cellular level. This lecture discusses how neuroanatomy in the 21st Century has become an effort towards the virtualization and standardization of brain tissue.
This lecture covers essential features of digital brain models for neuroinformatics, particularly NeuroMaps.
This presentation covers the neuroinformatics tools and techniques used in the Allen Institute's mouse, developing mouse, and mouse connectivity atlases, and their relationship to neuroanatomy.
This lecture discusses the importance of, and need for, data sharing in clinical neuroscience.
This lecture gives insights into the Medical Informatics Platform's current and future data privacy model.
This lecture gives an overview of the European Health Data Space.
This is a tutorial on designing a Bayesian inference model to map belief trajectories, with emphasis on gaining familiarity with Hierarchical Gaussian Filters (HGFs).
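For orientation, one common parameterization of the three-level binary HGF generative model is a hierarchy of Gaussian random walks whose step size is controlled by the level above (the notation here follows the usual presentation in the HGF literature and is not necessarily that of the tutorial):

$$
\begin{aligned}
x_1^{(k)} &\sim \mathrm{Bernoulli}\!\left(s\!\left(x_2^{(k)}\right)\right), \qquad s(x) = \frac{1}{1 + e^{-x}},\\
x_2^{(k)} &\sim \mathcal{N}\!\left(x_2^{(k-1)},\; \exp\!\left(\kappa\, x_3^{(k)} + \omega_2\right)\right),\\
x_3^{(k)} &\sim \mathcal{N}\!\left(x_3^{(k-1)},\; \exp(\omega_3)\right),
\end{aligned}
$$

where, at trial $k$, $x_1$ is the binary outcome, $x_2$ its log-odds tendency, and $x_3$ the volatility of $x_2$; inverting this model trial by trial yields the belief trajectories the tutorial asks you to map.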
This lesson corresponds to slides 65-90 of the PDF below.
This tutorial covers the fundamentals of collaborating with Git and GitHub.