Autoencoder Latent Space Exploration

(Animated demo: the 2-D latent space on the left, the decoded digit on the right.)

Explanation

On the left is a visual representation of the latent space learned by training a deep autoencoder to project handwritten digits (the MNIST dataset) from 784-dimensional space down to 2-dimensional space. Notice how the autoencoder learns a clustered representation. When you hover over a point on this plot, the mouse coordinates are treated as a latent vector, decoded back to 784-dimensional space, and the reconstructed digit is shown on the right.
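A minimal sketch of such a deep autoencoder with a 2-D bottleneck, assuming a Keras-style model; the layer sizes, optimizer, and training settings below are illustrative assumptions, not the exact values in ae.py:

```python
# Hedged sketch of a deep autoencoder with a 2-D bottleneck on MNIST.
# Layer sizes, epochs, and optimizer are illustrative, not ae.py's exact values.
import numpy as np
from tensorflow import keras

(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Encoder: 784 -> 2 (the 2-D latent space plotted on the left).
encoder = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(2),
])
# Decoder: 2 -> 784 (used to reconstruct the digit shown on the right).
decoder = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(2,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))

# 2-D coordinates for the scatter plot of the latent space.
latent = encoder.predict(x_test)
```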

The code in ae.py is deliberately short and simple; it is intended to aid understanding of autoencoders.
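The hover interaction described above can be wired up with Matplotlib's event system. The following is a hedged sketch that builds on the model above; the figure layout and variable names are assumptions, not the actual contents of ae.py:

```python
# Hedged sketch of the hover loop: decode the mouse's latent-space position on each
# motion event and redraw the 28x28 reconstruction. Uses decoder/latent from above.
import numpy as np
import matplotlib.pyplot as plt

fig, (ax_latent, ax_digit) = plt.subplots(1, 2, figsize=(8, 4))
ax_latent.scatter(latent[:, 0], latent[:, 1], s=2)
digit_im = ax_digit.imshow(np.zeros((28, 28)), cmap="gray", vmin=0, vmax=1)

def on_hover(event):
    # Only react to mouse motion inside the latent-space axes.
    if event.inaxes is ax_latent and event.xdata is not None:
        z = np.array([[event.xdata, event.ydata]], dtype="float32")
        digit_im.set_data(decoder.predict(z, verbose=0).reshape(28, 28))
        fig.canvas.draw_idle()

fig.canvas.mpl_connect("motion_notify_event", on_hover)
plt.show()
```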

Usage

To try this out, clone the repository and run

python ae.py

This requires TensorFlow, Matplotlib, NumPy, and Tkinter (Python 3). If decoding on hover is too slow, try running

python ae_precomp.py

This script precomputes the decodings of every reachable latent coordinate instead of decoding in real time on hover, which is faster in some cases.
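The idea can be sketched as decoding a fixed grid of latent coordinates once up front and then looking up the nearest precomputed image on hover; the grid bounds, resolution, and function names below are assumptions, not ae_precomp.py's exact values:

```python
# Hedged sketch of the precomputation: one batched decoder call over a grid of latent
# points, then a cheap lookup per mouse event instead of a model call.
import numpy as np

grid_size = 100
xs = np.linspace(latent[:, 0].min(), latent[:, 0].max(), grid_size)
ys = np.linspace(latent[:, 1].min(), latent[:, 1].max(), grid_size)
grid = np.array([[x, y] for y in ys for x in xs], dtype="float32")
decoded = decoder.predict(grid, batch_size=1024).reshape(grid_size, grid_size, 28, 28)

def lookup(x, y):
    # Map a mouse coordinate to its nearest precomputed grid cell.
    i = int(np.clip(np.searchsorted(xs, x), 0, grid_size - 1))
    j = int(np.clip(np.searchsorted(ys, y), 0, grid_size - 1))
    return decoded[j, i]
```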
