
Autoencoders with PyTorch and generative adversarial networks (GANs)

Difficulty level: Advanced
Duration: 1:07:50

This tutorial covers the concepts of autoencoders, denoising autoencoders, and variational autoencoders (VAEs) with PyTorch, as well as generative adversarial networks (GANs), with accompanying code. It is part of the Advanced energy based models module of the Deep Learning Course at CDS, a course covering the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, convolutional and recurrent nets, with applications to computer vision, natural language understanding, and speech recognition. Prerequisites for this course include: Energy based models I, Energy based models II, Energy based models III, Energy based models IV, Energy based models V, and Introduction to Data Science or a graduate-level machine learning course.
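As a taste of what the autoencoder chapters walk through, here is a minimal sketch of a fully-connected autoencoder and one training step in PyTorch. All names, dimensions, and hyperparameters are illustrative assumptions for MNIST-like flattened inputs, not the notebook's exact values.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    # Hypothetical sizes: 28x28 images flattened to 784, a 30-unit bottleneck.
    def __init__(self, n_input=784, n_hidden=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_input, n_hidden), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_input), nn.Tanh())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
criterion = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x):
    # One reconstruction step on a batch x of shape (batch, 784), scaled to [-1, 1].
    x_hat = model(x)
    loss = criterion(x_hat, x)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```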

Topics covered in this lesson

Chapters: 

00:00 – 1st of April 2021
03:24 – Training an autoencoder (AE) (PyTorch and Notebook)
11:34 – Looking at an AE's kernels
15:41 – Denoising autoencoder (recap)
17:33 – Training a denoising autoencoder (DAE) (PyTorch and Notebook; see the DAE sketch after this list)
20:59 – Looking at a DAE's kernels
22:57 – Comparison with state of the art inpainting techniques
24:34 – AE as an EBM
26:23 – Training a variational autoencoder (VAE) (PyTorch and Notebook; see the VAE sketch after this list)
36:24 – A VAE as a generative model
37:30 – Interpolation in input and latent space
39:02 – A VAE as an EBM
39:23 – VAE embeddings distribution during training
42:58 – Generative adversarial networks (GANs) vs. DAE
45:43 – Generative adversarial networks (GANs) vs. VAE
47:11 – Training a GAN, the cost network
50:08 – Training a GAN, the generating network (see the GAN training-loop sketch after this list)
51:34 – A possible cost network's architecture
54:33 – The Italian vs. Swiss analogy for GANs
59:13 – Training a GAN (PyTorch code reading)
1:06:09 – That was it :D
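
To complement the DAE chapters above (17:33 and 20:59), here is a minimal sketch of the denoising objective: corrupt the input, then reconstruct the clean target. The fully-connected layers, the additive Gaussian noise, and the noise scale are all illustrative assumptions, not necessarily the notebook's exact setup.

```python
import torch
from torch import nn

# Hypothetical denoiser with the same shape as the plain AE above.
model = nn.Sequential(nn.Linear(784, 30), nn.Tanh(),
                      nn.Linear(30, 784), nn.Tanh())
criterion = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

def dae_step(x, noise_std=0.5):
    # Corrupt the input; the network must map the noisy version back to the clean one.
    x_noisy = x + noise_std * torch.randn_like(x)
    x_hat = model(x_noisy)
    loss = criterion(x_hat, x)  # the target is the *clean* input
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```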
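For the VAE chapter (26:23), here is a compact sketch of the reparameterisation trick and the two-term loss: a reconstruction term plus a KL divergence that pulls the posterior towards the unit Gaussian prior. The layer sizes and the Bernoulli-style reconstruction term are common choices assumed here, not guaranteed to match the notebook.

```python
import torch
from torch import nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_input=784, n_latent=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_input, 400), nn.ReLU(),
            nn.Linear(400, 2 * n_latent))           # outputs mean and log-variance
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 400), nn.ReLU(),
            nn.Linear(400, n_input), nn.Sigmoid())  # pixel intensities in [0, 1]

    def reparameterise(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = self.reparameterise(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```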
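Finally, for the GAN training chapters (47:11 through 59:13), a sketch of the two alternating updates the lesson describes: the cost network learns to separate real from generated samples, and the generating network learns to fool it. The fully-connected architectures, label convention, and hyperparameters below are illustrative assumptions, not the notebook's code.

```python
import torch
from torch import nn

n_latent = 100
generator = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
cost_net = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                         nn.Linear(256, 1), nn.Sigmoid())

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_c = torch.optim.Adam(cost_net.parameters(), lr=2e-4)

def gan_step(x_real):
    batch = x_real.size(0)
    real_label = torch.ones(batch, 1)
    fake_label = torch.zeros(batch, 1)

    # 1) Cost network update: score real samples as 1 and generated samples as 0.
    z = torch.randn(batch, n_latent)
    x_fake = generator(z).detach()  # detach so only the cost network gets gradients
    loss_c = criterion(cost_net(x_real), real_label) + \
             criterion(cost_net(x_fake), fake_label)
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()

    # 2) Generator update: push generated samples towards the "real" score.
    z = torch.randn(batch, n_latent)
    loss_g = criterion(cost_net(generator(z)), real_label)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_c.item(), loss_g.item()
```

Detaching the generated batch in the first step ensures only the cost network's parameters are updated there; the generator then gets its own gradients through a fresh forward pass in the second step.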