Visualizing 2D latent space of VAE example


#1

Hi

In the VAE example, we model binarized 28 x 28 pixel images from MNIST. For the d=2 case, is there a way to visualize the latent space, as is done here: https://blog.keras.io/building-autoencoders-in-keras.html ?

In general, I think it is useful if the latent space can be visualized, beyond just looking at the negative log-likelihood.

Thank you!


#2

You can feed data into the placeholder and fetch samples from qz. Fetching qz.mean() over all test inputs will give you something like Figure 3(b) of the deep latent Gaussian models paper.


#3

Is there some example code I can work from to produce such a figure? I am still not very experienced with Edward.


#4

The following should work:

sess = ed.get_session()
# Evaluate the mean of the variational posterior qz for every test image.
encoded_mean_test = sess.run(qz.mean(), {x_ph: x_test})

You can append this after training (at the end of the VAE script) and then plot the two latent dimensions.
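For the plotting step, here is a minimal matplotlib sketch. It assumes encoded_mean_test is an (N, 2) NumPy array as returned by sess.run above, and that y_test holds the matching MNIST digit labels; random stand-ins are generated here so the snippet runs on its own.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Stand-ins for illustration only: in the real script, replace these with
# encoded_mean_test = sess.run(qz.mean(), {x_ph: x_test}) and the test labels.
rng = np.random.RandomState(0)
encoded_mean_test = rng.randn(1000, 2)
y_test = rng.randint(0, 10, size=1000)

# Scatter the two latent dimensions, colored by digit class.
plt.figure(figsize=(6, 6))
plt.scatter(encoded_mean_test[:, 0], encoded_mean_test[:, 1],
            c=y_test, cmap="tab10", s=4)
plt.colorbar(label="digit")
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.title("Latent means of test images")
plt.savefig("latent_space.png")
```

With a trained d=2 model, images of the same digit should cluster together, similar to the Keras blog figure.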


#5

That’s fine now. Thank you :grinning: