This tutorial illustrates several ways to approach predictive modeling and machine learning with MATLAB.
This brief tutorial goes over how you can work with big data as easily as you would with data of any other size.
In this tutorial, you will learn how to deploy your models outside of your local MATLAB environment, enabling wider sharing and collaboration.
This lesson provides a brief overview of the Python programming language, with an emphasis on tools relevant to data scientists.
This tutorial provides instruction on how to interact with and leverage Python packages for the manipulation, management, analysis, and visualization of neuroscientific data.
This tutorial teaches users how to use Pandas objects to help store and manipulate various datasets in Python.
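As a hedged preview of the kind of Pandas workflow such a lesson typically covers, here is a minimal sketch; the column names and values are invented purely for illustration and are not taken from the tutorial.

```python
import pandas as pd

# Hypothetical example data: trial-level reaction times for two subjects
df = pd.DataFrame({
    "subject": ["s01", "s01", "s02", "s02"],
    "condition": ["A", "B", "A", "B"],
    "reaction_time_ms": [512, 634, 498, 601],
})

# Select, filter, and summarize with core DataFrame operations
fast_trials = df[df["reaction_time_ms"] < 600]
mean_rt = df.groupby("condition")["reaction_time_ms"].mean()
print(fast_trials)
print(mean_rt)
```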
In this lesson, users can follow along as a spaghetti script written in MATLAB is turned into understandable, reusable code living happily in a well-organized GitHub repository.
This lesson gives a quick walkthrough of the Tidyverse, an "opinionated" collection of R packages designed for data science, including the use of readr, dplyr, tidyr, and ggplot2.
This lesson gives a general introduction to the essentials of navigating through a Bash terminal environment. The lesson is based on the Software Carpentry "Introduction to the Shell" and was given in the context of the BrainHack School 2020.
This lesson covers Python applications to data analysis, demonstrating why it has become ubiquitous in data science and neuroscience. The lesson was given in the context of the BrainHack School 2020.
This lesson describes spike-timing-dependent plasticity (STDP), a biological process that adjusts the strength of connections between neurons in the brain, and how one can implement or mimic this process in a computational model. You will also find links for practical exercises at the bottom of this page.
This lesson provides a brief introduction to the Computational Modeling of Neuronal Plasticity.
In this lesson, you will be introduced to a type of neuronal model known as the leaky integrate-and-fire (LIF) model.
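For readers who want a concrete preview, the following is a minimal sketch of an LIF neuron integrated with forward Euler; all parameter values are arbitrary illustrations, not those used in the lesson.

```python
import numpy as np

# Illustrative LIF parameters (arbitrary values for demonstration)
tau_m = 20e-3      # membrane time constant (s)
V_rest = -70e-3    # resting potential (V)
V_th = -54e-3      # spike threshold (V)
V_reset = -80e-3   # reset potential (V)
R_m = 10e6         # membrane resistance (ohm)
I_ext = 2e-9       # constant input current (A)

dt = 0.1e-3        # integration time step (s)
T = 0.5            # total simulation time (s)

V = V_rest
spike_times = []
for i in range(int(T / dt)):
    # dV/dt = (-(V - V_rest) + R_m * I_ext) / tau_m
    V += dt * (-(V - V_rest) + R_m * I_ext) / tau_m
    if V >= V_th:
        spike_times.append(i * dt)
        V = V_reset

print(f"Number of spikes: {len(spike_times)}")
```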
This lesson goes over various potential inputs to neuronal synapses, the loci of neural communication.
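As a hedged sketch of one common way such synaptic input is modeled (an exponentially decaying synaptic conductance driven by presynaptic spike times), the snippet below uses invented spike times and an illustrative time constant rather than values from the lesson.

```python
import numpy as np

tau_syn = 5e-3                                # synaptic time constant (s), illustrative
dt, T = 0.1e-3, 0.2                           # time step and duration (s)
pre_spike_times = [0.02, 0.05, 0.055, 0.12]   # hypothetical presynaptic spike times (s)

n = int(T / dt)
g_syn = np.zeros(n)
spike_bins = {int(round(t / dt)) for t in pre_spike_times}
for i in range(1, n):
    # Conductance decays exponentially and jumps at each presynaptic spike
    g_syn[i] = g_syn[i - 1] * (1 - dt / tau_syn)
    if i in spike_bins:
        g_syn[i] += 1.0   # unit conductance increment per spike

print(f"Peak synaptic conductance: {g_syn.max():.2f} (arbitrary units)")
```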
This lesson describes how and why integration time steps are implemented as part of a neuronal model.
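As a hedged illustration of why the choice of time step matters, here is a comparison of forward Euler solutions of a simple membrane-like exponential decay at two step sizes; the equation and values are generic, not taken from the lesson.

```python
import numpy as np

# Exponential decay dV/dt = -V / tau, with analytic solution V(t) = V0 * exp(-t / tau)
tau = 10e-3   # time constant (s), illustrative value
V0 = 1.0
T = 50e-3     # total integration time (s)

def euler_decay(dt):
    """Integrate the decay with forward Euler using step size dt."""
    V = V0
    for _ in range(int(T / dt)):
        V += dt * (-V / tau)
    return V

exact = V0 * np.exp(-T / tau)
for dt in (5e-3, 0.1e-3):
    print(f"dt = {dt*1e3:.1f} ms: Euler = {euler_decay(dt):.5f}, exact = {exact:.5f}")
```

With the coarse 5 ms step the Euler result drifts noticeably from the analytic value, while the 0.1 ms step stays close to it, which is the basic trade-off between accuracy and computational cost that motivates careful time-step choices.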
In this lesson, you will learn about neural spike trains that can be modeled as Poisson processes.
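A minimal sketch of generating a homogeneous Poisson spike train follows; the firing rate, duration, and bin width are arbitrary illustrative choices, not values from the lesson.

```python
import numpy as np

rate = 20.0    # mean firing rate (Hz), illustrative
T = 1.0        # duration (s)
dt = 1e-3      # time bin (s)

rng = np.random.default_rng(0)
# In each small bin, a spike occurs with probability rate * dt
spikes = rng.random(int(T / dt)) < rate * dt
spike_times = np.nonzero(spikes)[0] * dt

print(f"Generated {spikes.sum()} spikes (expected ~{rate * T:.0f})")
```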
This lesson covers spike-rate adaptation, the process by which a neuron's firing rate decays to a low, steady-state frequency during the sustained encoding of a stimulus.
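As a hedged sketch of one way this mechanism is often modeled (an LIF neuron with a spike-triggered adaptation conductance), the example below uses arbitrary illustrative parameters rather than those from the lesson.

```python
import numpy as np

# Illustrative parameters for an LIF neuron with spike-rate adaptation
tau_m, tau_sra = 20e-3, 100e-3          # membrane and adaptation time constants (s)
V_rest, V_th, V_reset = -70e-3, -54e-3, -80e-3
E_K = -80e-3                            # adaptation reversal potential (V)
R_m, I_ext = 10e6, 2.1e-9               # membrane resistance (ohm), input current (A)
delta_g = 0.06                          # dimensionless adaptation increment per spike
dt, T = 0.1e-3, 1.0

V, g_sra = V_rest, 0.0
spike_times = []
for i in range(int(T / dt)):
    # The adaptation conductance pulls V toward E_K and decays between spikes
    dV = (-(V - V_rest) - g_sra * (V - E_K) + R_m * I_ext) / tau_m
    V += dt * dV
    g_sra += dt * (-g_sra / tau_sra)
    if V >= V_th:
        spike_times.append(i * dt)
        V = V_reset
        g_sra += delta_g   # each spike increments the adaptation conductance

isis = np.diff(spike_times)
print(f"First ISI: {isis[0]*1e3:.1f} ms, last ISI: {isis[-1]*1e3:.1f} ms")
```

The interspike intervals lengthen over the course of the simulation, which is the signature of the firing rate relaxing toward a lower steady-state value.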
This lesson provides a brief explanation of how to implement a neuron's refractory period in a computational model.
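A hedged sketch of one common way to add an absolute refractory period to an LIF-style update loop is shown below; the time constants and the 2 ms refractory value are illustrative assumptions, not values from the lesson.

```python
import numpy as np

tau_m, V_rest, V_th, V_reset = 20e-3, -70e-3, -54e-3, -80e-3
R_m, I_ext = 10e6, 2e-9
t_ref = 2e-3          # absolute refractory period (s), illustrative
dt, T = 0.1e-3, 0.5

V = V_rest
last_spike = -np.inf
spike_count = 0
for i in range(int(T / dt)):
    t = i * dt
    if t - last_spike < t_ref:
        # During the refractory period the membrane is clamped at the reset potential
        V = V_reset
        continue
    V += dt * (-(V - V_rest) + R_m * I_ext) / tau_m
    if V >= V_th:
        spike_count += 1
        last_spike = t
        V = V_reset

print(f"Spikes in {T:.1f} s with a {t_ref*1e3:.0f} ms refractory period: {spike_count}")
```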
In this lesson, you will learn a computational description of spike-timing-dependent plasticity (STDP), the process that tunes the strength of connections between neurons.
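As a minimal, hedged sketch of a pair-based STDP rule with exponential windows, the amplitudes and time constants below are illustrative assumptions and may differ from the formulation used in the lesson.

```python
import numpy as np

# Pair-based STDP: potentiate when pre precedes post, depress otherwise
A_plus, A_minus = 0.01, 0.012        # illustrative learning amplitudes
tau_plus, tau_minus = 20e-3, 20e-3   # STDP time constants (s)

def stdp_dw(delta_t):
    """Weight change for a pre/post spike pair separated by delta_t = t_post - t_pre."""
    if delta_t > 0:   # pre before post -> potentiation (LTP)
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)   # post before pre -> depression (LTD)

for dt_pair in (5e-3, 20e-3, -5e-3, -20e-3):
    print(f"delta_t = {dt_pair*1e3:+.0f} ms -> dw = {stdp_dw(dt_pair):+.5f}")
```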
This lesson reviews theoretical and mathematical descriptions of correlated spike trains.
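As a hedged, purely illustrative sketch of one quantity such lessons commonly introduce, the snippet below estimates the cross-covariance of two binary spike trains that share a common Poisson input; all rates, durations, and lags are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
rate, dt, T = 20.0, 1e-3, 100.0   # firing rate (Hz), bin width (s), duration (s)
n = int(T / dt)

# Two binary spike trains that share a common Poisson input, making them correlated
shared = rng.random(n) < 0.5 * rate * dt
train1 = ((rng.random(n) < 0.5 * rate * dt) | shared).astype(float)
train2 = ((rng.random(n) < 0.5 * rate * dt) | shared).astype(float)

x1 = train1 - train1.mean()
x2 = train2 - train2.mean()

def cross_cov(x, y, lag):
    """Mean of x[t + lag] * y[t] over the overlapping range of t."""
    if lag < 0:
        return cross_cov(y, x, -lag)
    return np.mean(x[lag:] * y[:len(y) - lag])

for lag in (-2, -1, 0, 1, 2):
    print(f"lag = {lag:+d} bins: covariance = {cross_cov(x1, x2, lag):.2e}")
```

The shared input produces a clear peak at zero lag, while the covariance at nonzero lags stays near zero, which is the basic picture of correlated spike trains that the lesson formalizes.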