Recurrent neural networks, vanilla and gated (LSTM)

Difficulty level: Intermediate
Duration: 1:05:36

This lecture covers recurrent neural networks, both vanilla and gated (LSTM), and is part of the Deep Learning Course at CDS. The course covers the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Topics covered in this lesson

Chapters

00:00 – Good morning
00:22 – How to summarise papers (as @y0b1byte) with Notion
05:05 – Why do we need to go to a higher hidden dimension?
11:03 – Today's class: recurrent neural nets
16:12 – Vector to sequence (vec2seq)
23:01 – Sequence to vector (seq2vec)
27:41 – Sequence to vector to sequence (seq2vec2seq)
35:27 – Sequence to sequence (seq2seq)
38:35 – Training a recurrent network: backpropagation through time
47:51 – Training example: language model
51:06 – Vanishing & exploding gradients and gating mechanism
53:32 – The Long Short-Term Memory (LSTM)
57:34 – Jupyter Notebook and PyTorch in action: sequence classification (see the sketch after this list)
1:04:46 – Inspecting the activation values
1:05:00 – Closing remarks
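
To give a concrete feel for the sequence-classification demo at 57:34, here is a minimal PyTorch sketch of an LSTM classifier that summarises a sequence into a vector (seq2vec) and classifies it. It is an illustrative sketch, not the lecture's actual notebook: the model name, hyperparameters, and dummy data are assumptions.

```python
# Minimal sketch of LSTM-based sequence classification in PyTorch.
# Hyperparameters and data are illustrative, not those of the lecture notebook.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, num_classes=2):
        super().__init__()
        # Swap nn.LSTM for nn.RNN to get the "vanilla" recurrent net
        # discussed earlier in the lecture.
        self.rnn = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        output, (h_n, c_n) = self.rnn(x)
        # Use the final hidden state as a fixed-size summary of the sequence (seq2vec).
        return self.classifier(h_n[-1])

model = SequenceClassifier()
x = torch.randn(4, 10, 8)                  # batch of 4 sequences, 10 steps each
logits = model(x)                          # shape: (4, 2)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 1, 0]))
loss.backward()                            # backpropagation through time
```

The LSTM's gating is what keeps gradients usable over long sequences; a plain `nn.RNN` in the same position illustrates the vanishing/exploding-gradient issue covered at 51:06.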