Hardware for computing for non-ICT specialists
Computer arithmetic is necessarily performed using approximations to the real numbers it is intended to represent, and consequently the discrepancies between the exact solution and the computed approximation can diverge, i.e. become increasingly different. This lecture examines how this happens, presents techniques for reducing these effects, and discusses systems which are chaotic.
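A minimal Python sketch of the effect, relying only on the fact that 0.1 has no exact binary floating-point representation, so repeated addition drifts away from the mathematically exact result:

```python
# Summing 0.1 a million times: each addition is rounded, and the
# rounding errors accumulate instead of cancelling exactly.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)                 # slightly different from 100000.0
print(abs(total - 100_000))  # the accumulated discrepancy
```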
This lecture addresses what it means for a problem to have a computable solution, presents methods for combining computability results to analyse more complicated problems, and finally looks in detail at one particular problem which has no computable solution: the halting problem.
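A sketch of the classical diagonal argument behind the halting problem; the halts function here is the hypothetical decider assumed for contradiction, not a routine that can actually be written:

```python
def halts(f, x):
    # Hypothetical total decider: returns True iff f(x) eventually halts.
    # No such function can be implemented; this stub only makes the
    # sketch loadable.
    raise NotImplementedError("assumed for contradiction")

def paradox(f):
    # Behaves opposite to whatever halts predicts about f run on itself.
    if halts(f, f):
        while True:
            pass
    else:
        return

# If halts existed, consider paradox(paradox):
#   halts(paradox, paradox) == True  -> paradox loops forever (does not halt)
#   halts(paradox, paradox) == False -> paradox returns (does halt)
# Either answer contradicts halts, so no computable halts can exist.
```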
This lecture focuses on computational complexity, which lies at the heart of computer science thinking. In short, it is a way to quickly gauge an approximation to the computational resources required to perform a task. Methods to analyse a computer program and to perform the approximation are presented. Speaker: David Lester.
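As a small illustration of this kind of approximation (not taken from the lecture), counting the dominant operations of two programs that solve the same task shows how their resource requirements scale differently:

```python
def has_duplicates_quadratic(xs):
    # Compares every pair of elements: about n*(n-1)/2 comparisons, O(n^2) time.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicates_linear(xs):
    # One pass with a set: O(n) expected time, at the cost of O(n) extra memory.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```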
JupyterHub is a simple, highly extensible, multi-user system for managing per-user Jupyter Notebook servers, designed for research groups or classes. This lecture covers deploying JupyterHub on a single server, as well as deploying with Docker using GitHub for authentication.
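A minimal configuration sketch along the lines of the deployment described above; the authenticator and spawner settings are standard JupyterHub, OAuthenticator and DockerSpawner options, but the callback URL, client credentials and container image are placeholders:

```python
# jupyterhub_config.py -- sketch of a Dockerized JupyterHub with GitHub login.
c = get_config()  # noqa: F821  (provided when JupyterHub loads this file)

# Authenticate users with their GitHub accounts.
c.JupyterHub.authenticator_class = "oauthenticator.GitHubOAuthenticator"
c.GitHubOAuthenticator.oauth_callback_url = "https://hub.example.org/hub/oauth_callback"
c.GitHubOAuthenticator.client_id = "YOUR_GITHUB_CLIENT_ID"
c.GitHubOAuthenticator.client_secret = "YOUR_GITHUB_CLIENT_SECRET"

# Run each user's notebook server in its own Docker container.
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "jupyter/scipy-notebook:latest"
```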
The Virtual Brain is an open-source, multi-scale, multi-modal brain simulation platform. In this lesson, you will be introduced to brain simulation in general and to The Virtual Brain in particular. Prof. Ritter will present the newest approaches for clinical applications of The Virtual Brain - that is, for stroke, epilepsy, brain tumors and Alzheimer's disease - and show how brain simulation can improve diagnostics, therapy and understanding of neurological disease.
The concept of neural masses, an application of mean field theory, is introduced as a possible surrogate for electrophysiological signals in brain simulation. The mathematics of neural mass models and their integration to a coupled network are explained. Bifurcation analysis is presented as an important technique in the understanding of non-linear systems and as a fundamental method in the design of brain simulations. Finally, the application of the described mathematics is demonstrated in the exploration of brain stimulation regimes.
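For orientation, a toy Wilson-Cowan-style neural mass (mean excitatory and inhibitory population activity) integrated with Euler's method; the parameter values are arbitrary choices for the sketch, not taken from the lecture or from any particular simulator:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(c_ee=12.0, c_ei=10.0, c_ie=10.0, c_ii=2.0, p=1.0,
             dt=0.01, steps=5000):
    # E, I: mean activity of the excitatory and inhibitory populations.
    E, I = 0.1, 0.1
    trace = np.empty(steps)
    for t in range(steps):
        dE = -E + sigmoid(c_ee * E - c_ei * I + p)
        dI = -I + sigmoid(c_ie * E - c_ii * I)
        E += dt * dE
        I += dt * dI
        trace[t] = E
    return trace

activity = simulate()
print(activity[-5:])  # varying the drive p shifts the dynamical regime
```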
The simulation of the virtual epileptic patient is presented as an example of advanced brain simulation as a translational approach to deliver improved results in the clinic. The fundamentals of epilepsy are explained. On this basis, the concept of epilepsy simulation is developed. Using an IPython notebook, the detailed process of this approach is explained step by step. In the end, you will be able to perform simple epilepsy simulations on your own.
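A rough sketch of the kind of TVB scripting used in such a notebook: a region-level simulation with the Epileptor model on the default connectome. Parameter values are illustrative only, and the exact API details (e.g. whether parameters are scalars or arrays) vary between TVB versions:

```python
import numpy as np
from tvb.simulator.lab import *

sim = simulator.Simulator(
    model=models.Epileptor(),
    connectivity=connectivity.Connectivity.from_file(),   # default connectome
    coupling=coupling.Difference(a=np.array([1.0])),
    integrator=integrators.HeunDeterministic(dt=0.05),
    monitors=(monitors.TemporalAverage(period=1.0),),
)
sim.configure()

# Run a short simulation; one (time, data) pair is returned per monitor.
(time, data), = sim.run(simulation_length=1000.0)
print(data.shape)
```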
The practical usage of The Virtual Brain in its graphical user interface and via Python scripts is introduced. In the graphical user interface, you are guided through its data repository, simulator, phase plane exploration tool, connectivity editor, stimulus generator and the provided analyses. The IPython notebooks implemented in TVB are presented and, since they are public, can be used for further exploration of The Virtual Brain.
A brief overview of the Python programming language, with an emphasis on tools relevant to data scientists. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute.
Research Resource Identifiers (RRIDs) are persistent identifiers assigned to help researchers cite key resources (antibodies, model organisms and software projects) in the biomedical literature to improve the transparency of research methods.
The Brain Imaging Data Structure (BIDS) is a standard prescribing a formal way to name and organize MRI data and metadata in a file system. By using consistent paths and file names, it simplifies communication and collaboration between users and makes data validation and software development easier.
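As an illustration, a BIDS-style layout (shown in the comments) and a query against a hypothetical dataset at /data/ds001 using the pybids package; the dataset path and entities are placeholders:

```python
# Typical BIDS naming:
#   ds001/
#     dataset_description.json
#     participants.tsv
#     sub-01/
#       anat/sub-01_T1w.nii.gz
#       func/sub-01_task-rest_bold.nii.gz
#       func/sub-01_task-rest_bold.json

from bids import BIDSLayout

layout = BIDSLayout("/data/ds001")
t1w_files = layout.get(subject="01", suffix="T1w",
                       extension=".nii.gz", return_type="filename")
print(t1w_files)
```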
Neurodata Without Borders (NWB) is a data standard for neurophysiology that gives neuroscientists a common format for sharing, archiving and using neurophysiology data, and for building common analysis tools.
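A minimal sketch of writing an NWB file with the pynwb reference API; the identifiers, the signal and its sampling rate are placeholders:

```python
from datetime import datetime
from dateutil.tz import tzlocal
import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Create the file-level container with the required session metadata.
nwbfile = NWBFile(
    session_description="example recording session",
    identifier="example-session-001",
    session_start_time=datetime.now(tzlocal()),
)

# Attach a raw acquisition trace as a TimeSeries.
voltage = TimeSeries(
    name="membrane_potential",
    data=np.random.randn(1000),
    unit="mV",
    rate=1000.0,
)
nwbfile.add_acquisition(voltage)

with NWBHDF5IO("example.nwb", "w") as io:
    io.write(nwbfile)
```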
The Neuroimaging Data Model (NIDM) is a collection of specification documents that define extensions to the W3C PROV standard for the domain of human brain mapping. NIDM uses provenance information as a means to link components from different stages of the scientific research process, from dataset descriptors and computational workflows to derived data and publications.
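To illustrate the underlying PROV idea, a small example with the prov Python package linking a derived result to the data and activity that produced it; the namespace and identifiers are placeholders, not actual NIDM terms:

```python
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

# Entities (data) and the activity (analysis) that connects them.
raw = doc.entity("ex:raw-bold-data")
stats = doc.entity("ex:statistical-map")
analysis = doc.activity("ex:first-level-analysis")

doc.used(analysis, raw)              # the analysis used the raw data
doc.wasGeneratedBy(stats, analysis)  # the map was generated by the analysis

print(doc.get_provn())               # serialize in PROV-N notation
```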
The Neuroscience Information Exchange (NIX) format data model allows storing fully annotated scientific datasets, i.e. the data together with rich metadata and their relations, in a consistent, comprehensive format. Its aim is to achieve standardization by providing a common data structure and APIs for a multitude of data types and use cases, focused on, but not limited to, neuroscience. In contrast to most other approaches, NIX achieves this flexibility with a minimal set of data model elements.
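A sketch of storing an annotated signal with the nixio package; the names, types and values are placeholders, and details of the metadata API may differ between nixio versions:

```python
import numpy as np
import nixio

nf = nixio.File.open("example.nix", nixio.FileMode.Overwrite)
block = nf.create_block("session-1", "recording")

# The data itself lives in a DataArray.
signal = block.create_data_array("lfp", "neo.analogsignal",
                                 data=np.random.randn(1000))
signal.unit = "mV"

# Rich metadata live in sections/properties and are linked to the data.
sec = nf.create_section("subject", "metadata")
sec["species"] = "Mus musculus"
signal.metadata = sec

nf.close()
```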
Computational models provide a framework for integrating data across spatial scales and for exploring hypotheses about the biological mechanisms underlying neuronal and network dynamics. However, as models increase in complexity, additional barriers emerge to the creation, exchange, and re-use of models. Successful projects have created standards for describing complex models in neuroscience and provide open-source tools to address these issues. This lecture provides an overview of these projects and makes a case for the expanded use of such resources in support of reproducibility and validation of models against experimental data.
KnowledgeSpace is a community-based encyclopedia that links brain research concepts to data, models, and literature. It provides users with access to anatomy, gene expression, model, morphology, and physiology data from over 15 different neuroscience data and model repositories, such as the Allen Institute for Brain Science and the Human Brain Project.