From latent variable energy based models to target prop to autoencoder
This tutorial covers the path from the latent-variable energy-based model (LV-EBM) to target prop and on to the vanilla, denoising, contractive, and variational autoencoders. It is part of the Advanced energy based models modules of the Deep Learning Course at CDS, a course that covered the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, and Introduction to Data Science or a graduate-level machine learning course.
Chapters:
00:00 – 2021 edition disclaimer
00:49 – Conditional and unconditional LV EBM
02:08 – Variables' names: x, y, z, h, ỹ
03:34 – LV EBM training recap, warm case
10:54 – LV EBM training recap, zero-temperature limit
11:30 – Today's plan: the missing step
12:08 – Target prop(agation)
19:01 – From target prop to autoencoder
20:54 – Reconstruction costs
21:06 – Loss functional
21:22 – Under and over complete hidden layer
24:40 – Denoising autoencoder
32:00 – Contractive autoencoder
37:50 – Autoencoders recap
38:38 – From autoencoder to variational autoencoder
45:17 – Comparison between variational autoencoder and denoising autoencoder
45:54 – How a variational autoencoder actually works
48:29 – The bubble-of-bubble variational autoencoder interpretation
1:00:08 – And that was it :)
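To make the autoencoder chapters concrete, here is a minimal sketch (an illustration only, not the course's implementation): an under-complete autoencoder with hidden size smaller than the input, trained by gradient descent on the squared reconstruction cost. All names, sizes, and hyperparameters below are hypothetical choices for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))   # toy data: 200 samples, 8-dim inputs
d_h = 3                         # hidden size < input size → under-complete

# Encoder h = tanh(W_e x + b_e), decoder x̂ = W_d h + b_d (hypothetical init)
W_e = rng.normal(scale=0.1, size=(8, d_h))
W_d = rng.normal(scale=0.1, size=(d_h, 8))
b_e = np.zeros(d_h)
b_d = np.zeros(8)

def forward(X):
    H = np.tanh(X @ W_e + b_e)  # hidden representation h
    X_hat = H @ W_d + b_d       # reconstruction ỹ
    return H, X_hat

lr = 0.05
losses = []
for _ in range(300):
    H, X_hat = forward(X)
    err = X_hat - X                       # ∂loss/∂x̂ up to a constant factor
    losses.append((err ** 2).mean())      # squared reconstruction cost
    # Backprop through the two linear maps and the tanh
    g_Wd = H.T @ err / len(X)
    g_bd = err.mean(axis=0)
    g_H = err @ W_d.T * (1 - H ** 2)      # tanh' = 1 - tanh²
    g_We = X.T @ g_H / len(X)
    g_be = g_H.mean(axis=0)
    W_d -= lr * g_Wd; b_d -= lr * g_bd
    W_e -= lr * g_We; b_e -= lr * g_be

# Reconstruction error should drop over training
```

The denoising variant discussed at 24:40 changes only the input to the encoder: a corrupted x + noise is encoded while the clean x stays the reconstruction target, which pushes the model to map noisy points back toward the data manifold.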