
Autoencoders With PyTorch and Generative Adversarial Networks (GANs)

Difficulty level: Advanced
Duration: 1:07:50

This tutorial covers the concepts of autoencoders, denoising autoencoders, and variational autoencoders (VAEs) with PyTorch, as well as generative adversarial networks (GANs) and their code. It is part of the Advanced Energy-Based Models module of the Deep Learning Course at NYU's Center for Data Science. Prerequisites for this course include: Energy-Based Models I, Energy-Based Models II, Energy-Based Models III, Energy-Based Models IV, Energy-Based Models V, and an introduction to data science or a graduate-level machine learning course.
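To give a concrete sense of what the first notebooks do, below is a minimal sketch of an autoencoder training step in PyTorch. The fully-connected architecture, the 28x28 input size, and the noise level are illustrative assumptions, not the course notebook's exact values; the same step trains a denoising autoencoder (DAE) when the input is corrupted before encoding.

```python
import torch
from torch import nn

# Minimal fully-connected autoencoder; assumes flattened 28x28 inputs
# (e.g. MNIST) scaled to [-1, 1]. Sizes are illustrative.
class AE(nn.Module):
    def __init__(self, d=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, d), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(d, 28 * 28), nn.Tanh())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AE()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, denoising=False):
    # DAE: corrupt the input, but reconstruct the *clean* target.
    x_in = x + 0.5 * torch.randn_like(x) if denoising else x
    x_hat = model(x_in)
    loss = criterion(x_hat, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```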

Topics covered in this lesson

Chapters: 

00:00 – 1st of April 2021
03:24 – Training an autoencoder (AE) (PyTorch and Notebook)
11:34 – Looking at the AE's kernels
15:41 – Denoising autoencoder (recap)
17:33 – Training a denoising autoencoder (DAE) (PyTorch and Notebook)
20:59 – Looking at the DAE's kernels
22:57 – Comparison with state-of-the-art inpainting techniques
24:34 – AE as an EBM
26:23 – Training a variational autoencoder (VAE) (PyTorch and Notebook)
36:24 – A VAE as a generative model
37:30 – Interpolation in input and latent space
39:02 – A VAE as an EBM
39:23 – VAE embeddings distribution during training
42:58 – Generative adversarial networks (GANs) vs. DAE
45:43 – Generative adversarial networks (GANs) vs. VAE
47:11 – Training a GAN, the cost network
50:08 – Training a GAN, the generating network
51:34 – A possible cost network architecture
54:33 – The Italian vs. Swiss analogy for GANs
59:13 – Training a GAN (PyTorch code reading)
1:06:09 – That was it :D
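For the VAE chapters (26:23 onward), a minimal sketch of the reparameterization trick and the negative-ELBO loss is shown below. The layer sizes and the Bernoulli (binary cross-entropy) reconstruction term are assumptions for illustration; the course notebook may structure or weight the loss differently.

```python
import torch
from torch import nn
import torch.nn.functional as F

# Minimal VAE; assumes flattened 28x28 inputs with values in [0, 1]
# and a d-dimensional latent. Architecture is illustrative.
class VAE(nn.Module):
    def __init__(self, d=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 400), nn.ReLU(), nn.Linear(400, 2 * d)
        )
        self.decoder = nn.Sequential(
            nn.Linear(d, 400), nn.ReLU(), nn.Linear(400, 784), nn.Sigmoid()
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
        # so gradients flow through mu and logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I)).
    bce = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```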
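For the GAN chapters (42:58 onward), here is a minimal sketch of the alternating training loop. The lecture speaks of a cost network and a generating network; this sketch writes the cost network D as a standard sigmoid discriminator trained with binary cross-entropy (the same network read as assigning low cost to real samples), and all sizes and optimizer settings are illustrative assumptions rather than the notebook's values.

```python
import torch
from torch import nn

z_dim, x_dim = 64, 784
# Generating network G: latent noise -> sample. Cost network D: sample -> score.
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(x_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(x_real):
    b = x_real.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # 1) Train the cost network D: push D(x_real) -> 1 and D(G(z)) -> 0.
    #    detach() stops this step from updating G.
    z = torch.randn(b, z_dim)
    loss_D = bce(D(x_real), ones) + bce(D(G(z).detach()), zeros)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 2) Train the generating network G: push D(G(z)) -> 1.
    z = torch.randn(b, z_dim)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```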