wset2
January 8, 2018, 7:02pm
1
Hi
In the VAE example, we model binarized 28 x 28 pixel images from MNIST. For the d=2 case, is there a way to visualize the latent space, as in https://blog.keras.io/building-autoencoders-in-keras.html ?
I think it is generally useful to be able to visualize the latent space, beyond looking at the negative log-likelihood.
Thank you!
dustin
January 8, 2018, 7:41pm
2
You can feed data into the placeholder and fetch samples from qz. Fetching qz.mean() over all test inputs will give you something like Figure 3(b) in the deep latent Gaussian models paper.
wset2
January 8, 2018, 10:57pm
3
Is there some example code I can work from to produce such a figure? I am still not very experienced with Edward.
dustin
January 8, 2018, 11:36pm
4
The following should work:
sess = ed.get_session()
# Posterior mean of the latent code for every test image, shape (n_test, d)
encoded_mean_test = sess.run(qz.mean(), {x_ph: x_test})
You can append it post-training (at the end of the VAE script) and then plot the two latent dimensions.
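For the plotting step, here is a minimal, self-contained sketch of a scatter plot of the two latent dimensions colored by digit class, in the style of the Keras blog post linked above. The arrays encoded_mean_test and y_test are filled with random placeholder data here; in the actual VAE script they would come from sess.run(qz.mean(), {x_ph: x_test}) and the MNIST test labels.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe on headless machines
import matplotlib.pyplot as plt

# Placeholder data standing in for the fetched posterior means and labels.
rng = np.random.default_rng(0)
encoded_mean_test = rng.normal(size=(1000, 2))   # (n_test, d=2) latent means
y_test = rng.integers(0, 10, size=1000)          # digit labels 0-9

plt.figure(figsize=(6, 6))
sc = plt.scatter(encoded_mean_test[:, 0], encoded_mean_test[:, 1],
                 c=y_test, cmap="tab10", s=5)
plt.colorbar(sc, label="digit class")
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.savefig("latent_space.png")
```

With real encodings, clusters of same-colored points indicate that the posterior means of images of the same digit land near each other in the 2-D latent space.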
wset2
January 11, 2018, 5:04pm
5
That works now. Thank you!