Hi

I am attempting to use black-box alpha divergence minimization (paper) for my research, and I would also like to gain familiarity with Edward and TensorFlow. I have been basing my code on klqp.py and jbr’s Rényi divergence inference methods.

The reparameterized BB-alpha energy (the objective to minimize) is

$ \mathcal{L}_{\alpha}(q) \approx \text{KL}[q||p_0] - \frac{1}{\alpha} \sum_n \log E_q[f_n(\omega)^\alpha] $
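To make sure I understand the second term, here is a minimal NumPy sketch of how I am trying to estimate it with K Monte Carlo samples from q (the function name and the logsumexp-for-stability trick are my own choices, not from the paper's code):

```python
import numpy as np
from scipy.special import logsumexp

def bb_alpha_likelihood_term(log_lik_samples, alpha):
    """Monte Carlo estimate of -(1/alpha) * sum_n log E_q[f_n(omega)^alpha].

    log_lik_samples: array of shape (K, N) holding log f_n(omega_k),
    the per-datapoint log-likelihood under K samples omega_k ~ q.
    """
    K, _ = log_lik_samples.shape
    # log E_q[f_n^alpha] ~= logsumexp_k(alpha * log f_n(omega_k)) - log K
    log_expect = logsumexp(alpha * log_lik_samples, axis=0) - np.log(K)
    return -np.sum(log_expect) / alpha
```

With identical samples the expectation collapses, so the estimator reduces to the ordinary negative log-likelihood, which is a useful sanity check.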

I do not yet have a working version of the code, as I am having trouble implementing the likelihood term $ f_n(\omega)^\alpha $. This term will differ across tasks; e.g., for a regression task the log-likelihood may be

$ \log p(\mathbf{y} \mid \mathbf{w}, b, \mathbf{X}) = \sum_n \log \text{Normal}(y_n \mid \mathbf{x}_n^\top\mathbf{w} + b, \sigma_y^2) $

How would I obtain this likelihood term? Would it be from the training data?
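My current guess is that it is evaluated on the training data with a posterior sample of the parameters, something like this NumPy sketch (the names `w`, `b`, `sigma_y` are just placeholders for one sample from q):

```python
import numpy as np

def regression_log_lik(X, y, w, b, sigma_y):
    """Per-datapoint log Normal(y_n | x_n^T w + b, sigma_y^2).

    X: (N, D) training inputs, y: (N,) training targets;
    w (D,), b (scalar) are a single posterior sample.
    Returns an (N,) array of log-likelihoods, one per datapoint.
    """
    mean = X @ w + b
    return (-0.5 * np.log(2 * np.pi * sigma_y ** 2)
            - (y - mean) ** 2 / (2 * sigma_y ** 2))
```

Is this the right idea, i.e. stacking these per-datapoint values for each sample of $\omega$ before taking the expectation?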

From what I understand, for now I may only need to change the loss function at line 650 in klqp.py. A possibly unrelated question: why, in “build_reparam_kl_loss_and_gradients”, is the log-likelihood obtained in exactly the same way as the log posterior in “build_reparam_loss_and_gradients”?

Thank you

wset2