VAE Example Question

Hello everyone,

First, thanks for Edward and all the help you guys provide to new users like me. Everything I have seen so far is awesome!

My problem is probably rather simple for you. In the Edward VAE example, you plot the ELBO during training by calling the inference.update routine.

I want to be able to evaluate the ELBO (or, more precisely, the KL divergence between the true posterior and the approximate distribution over z) without optimizing. That way I could train a model on some data, then "freeze" it, feed in unseen data, and watch the ELBO get worse when that data is very different from the training samples. This would be a simple anomaly detection approach. How would I do that?
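To make it concrete, here is a rough sketch of what I imagine, loosely following the VAE example. The two batch loaders (next_train_batch, next_test_batch) are just placeholders I made up, and I am not sure whether evaluating inference.loss directly is the intended way:

```python
import edward as ed
import tensorflow as tf
from edward.models import Bernoulli, Normal

d, D, M = 2, 784, 128  # latent dim, data dim, batch size

# Generative model p(x | z), as in the VAE example.
z = Normal(loc=tf.zeros([M, d]), scale=tf.ones([M, d]))
h = tf.layers.dense(z, 256, activation=tf.nn.relu)
x = Bernoulli(logits=tf.layers.dense(h, D))

# Inference network q(z | x).
x_ph = tf.placeholder(tf.int32, [M, D])
h_q = tf.layers.dense(tf.cast(x_ph, tf.float32), 256, activation=tf.nn.relu)
qz = Normal(loc=tf.layers.dense(h_q, d),
            scale=tf.nn.softplus(tf.layers.dense(h_q, d)))

inference = ed.KLqp({z: qz}, data={x: x_ph})
inference.initialize()

sess = ed.get_session()
tf.global_variables_initializer().run()

# Training: update() runs the optimizer step and reports the loss.
for _ in range(1000):
  x_batch = next_train_batch()  # placeholder for a real data loader
  inference.update(feed_dict={x_ph: x_batch})

# "Frozen": run only the loss tensor, not the train op, so no
# parameters change. inference.loss estimates the negative ELBO,
# so a larger value on unseen data would signal an anomaly.
x_new = next_test_batch()  # placeholder for a real data loader
neg_elbo = sess.run(inference.loss, feed_dict={x_ph: x_new})
print("negative ELBO on unseen batch:", neg_elbo)
```

Is that the right way to do it, or is there a cleaner one?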

I also dug into your code a little and saw that you calculate the KL divergence in klqp.py, in the build_reparam_kl_loss_and_gradients function. But you also apply some kind of scaling to it. Why is that?
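If I read the code correctly, the part I mean is roughly this (quoted from memory, so the exact lines may differ):

```python
# From build_reparam_kl_loss_and_gradients in edward/inferences/klqp.py,
# as I remember it; inference.kl_scaling seems to come from the
# kl_scaling argument of inference.initialize().
kl_penalty = tf.reduce_sum([
    tf.reduce_sum(inference.kl_scaling.get(z, 1.0) * kl_divergence(qz, z))
    for z, qz in six.iteritems(inference.latent_vars)])
```

My guess is that this is connected to the kl_scaling argument of initialize, but I do not understand what it is for.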

Thanks for all the work!
Steffen