
Recurrent and convolutional nets

Difficulty level: Intermediate
Duration: 1:59:47

This lecture covers the concept of parameter sharing in recurrent and convolutional nets. It is part of the Deep Learning course at CDS, which covers the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course: Introduction to Deep Learning, and Introduction to Data Science or a graduate-level machine learning course.

Topics covered in this lesson

Chapters: 

00:00:00 – Welcome to class
00:00:49 – Hypernetworks
00:02:24 – Shared weights
00:06:10 – Parameter sharing ⇒ adding the gradients (see the sketch after the chapter list)
00:09:33 – Max and sum reductions
00:11:46 – Recurrent nets
00:14:20 – Unrolling in time
00:16:17 – Vanishing and exploding gradients
00:19:48 – Math on the whiteboard
00:23:18 – RNN tricks
00:24:29 – RNN for differential equations
00:27:18 – GRU
00:28:23 – What is a memory
00:41:26 – LSTM – Long Short-Term Memory net
00:43:11 – Multilayer LSTM
00:46:01 – Attention for sequence to sequence mapping
00:48:41 – Convolutional nets
00:50:50 – Detecting motifs in images
00:56:57 – Convolution definition(s)
00:59:43 – Backprop through convolutions
01:03:42 – Stride and skip: subsampling and convolution “à trous”
01:06:56 – Convolutional net architecture
01:19:08 – Multiple convolutions
01:20:37 – Vintage ConvNets
01:32:32 – How does the brain interpret images?
01:37:18 – Hubel & Wiesel's model of the visual cortex
01:42:51 – Invariance and equivariance of ConvNets
01:49:23 – In the next episode…
01:52:54 – Training time, iteration cycle, and historical remarks
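
To make the "parameter sharing ⇒ adding the gradients" idea from the 00:06:10 chapter concrete, here is a minimal PyTorch sketch (an illustration written for this page, not code from the lecture): when one parameter appears in several places in a computation, autograd adds up the gradient contributions from each use.

    import torch

    # A scalar parameter w shared by two branches of the computation.
    w = torch.tensor(2.0, requires_grad=True)
    x1 = torch.tensor(3.0)
    x2 = torch.tensor(5.0)

    # The same w is used twice, as in a recurrent net (across time steps)
    # or a convolutional net (across spatial locations).
    y = w * x1 + w * x2
    y.backward()

    # dy/dw = x1 + x2 = 8: the gradient is the sum of the contributions
    # from every place the shared parameter was used.
    print(w.grad)  # tensor(8.)

The same mechanism underlies both architectures discussed in the lecture: recurrent nets share weights across time, and convolutional nets share weights across spatial positions, so in both cases backprop accumulates gradients over all uses of the shared weights.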