Hello,

In order to get a better understanding of Edward's mechanics (and also because it will be useful for my research), I would like to implement the Renyi variational objective (paper) in Edward. A public TensorFlow implementation is available (github), so I thought it would be a fairly easy project to start with.

I’ve attached a first version of the code and an example on a VAE.

It runs and does something sensible, but I still think it’s not completely correct.

Can someone please check the logic?

Code: inference object — example

Here are some details:

To compute the Renyi ELBO, they use three tricks:

- Reparametrization trick
- Stochastic approximation of the joint likelihood
- Monte Carlo approximation of the VR bound
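
To keep the tricks straight, here is how I understand trick 3, the Monte Carlo approximation of the VR bound, sketched in plain NumPy (the function name and the log-mean-exp stabilization are mine, not from the paper's code):

```python
import numpy as np

def vr_bound_estimate(log_p, log_q, alpha):
    """Monte Carlo estimate of the Renyi VR bound (my sketch):
    L_alpha ~= 1/(1-alpha) * log( mean_k exp((1-alpha)*(log p(x,z_k) - log q(z_k))) )
    where z_k ~ q. log_p and log_q are arrays of log p(x, z_k) and log q(z_k).
    Assumes alpha != 1 (alpha -> 1 recovers the usual ELBO in the limit).
    """
    log_w = (1.0 - alpha) * (np.asarray(log_p) - np.asarray(log_q))
    # log-mean-exp, shifted by the max for numerical stability
    m = np.max(log_w)
    return (m + np.log(np.mean(np.exp(log_w - m)))) / (1.0 - alpha)
```

With alpha = 0 this reduces to a log-mean-exp of the importance weights (the IWAE-style bound), which is one sanity check I used.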

So I thought that using klqp, and more specifically the reparameterized non-analytic version “build_reparam_loss_and_gradients”, as a template would be a good start.

If I have understood correctly, “p_log_prob” and “q_log_prob” hold “n_samples” estimates of the joint likelihood and the variational approximation.

From the docstring of “build_reparam_loss_and_gradients”:

Computed by sampling from $q(z;\lambda)$ and evaluating the expectation using Monte Carlo sampling.

So I think that covers tricks 2 and 3. Or am I completely wrong?
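
For comparison, here is my reading of what the klqp reparameterized loss boils down to, paraphrased in plain NumPy (names are mine; this is my interpretation, not Edward's actual code):

```python
import numpy as np

def elbo_estimate(log_p, log_q):
    """My paraphrase of the klqp-style estimate: a plain average over the
    n_samples draws of log p(x, z_k) - log q(z_k). The Renyi bound would
    replace this plain mean with a (1-alpha)-weighted log-mean-exp of the
    same per-sample terms, which is the part I had to change.
    """
    return float(np.mean(np.asarray(log_p) - np.asarray(log_q)))
```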

After looking at the klqp code, I’m not sure where the reparametrization trick is applied. But I think I’ve done everything the same way, so it should be in effect. Can someone confirm/help with that?
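
For reference, here is the trick itself in plain NumPy (my own sketch; my assumption is that in Edward it happens implicitly when sampling from a Normal whose parameters are TensorFlow variables, so gradients flow through the sample):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reparam(mu, log_sigma, n_samples):
    """Reparametrization trick (trick 1), as I understand it:
    draw z = mu + sigma * eps with eps ~ N(0, 1), so the randomness is
    isolated in eps and gradients can flow through mu and log_sigma.
    (Hypothetical helper; not part of Edward's API.)
    """
    eps = rng.standard_normal((n_samples,) + np.shape(mu))
    return mu + np.exp(log_sigma) * eps
```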

Thanks,

Jb