Question regarding inference API


#1

Hi, thank you for your work on this great package! I have a question regarding the inference API.

According to the inference tutorial, the inference API is as simple as running one line:
inference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})

To my knowledge, optimizing the ELBO in variational inference requires defining both the variational family q(z; lambda) and the joint likelihood p(x, z; theta). Does Edward implicitly assume that both are normal distributions? If so, can we override that assumption and pass, for example, a non-Gaussian joint likelihood p(x, z; theta) instead?
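
To be concrete, by ELBO I mean the usual variational objective from the literature (generic notation, not anything Edward-specific):

\mathrm{ELBO}(\lambda, \theta) = \mathbb{E}_{q(z;\lambda)}\left[ \log p(x, z; \theta) - \log q(z; \lambda) \right]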

Thanks,
Shan


#2

I figured out that the variables qz and qbeta define the variational distribution :slight_smile: But I am still puzzled about how to specify p(x, z) or p(x | z). Thanks!


#3

In English, that inference line says, “Infer z (using qz) and beta (using qbeta), given that we observe x = x_train.” In math, this is q(z, beta; parameters) \approx p(z, beta | x = x_train). The collection of model random variables x, z, beta defines your joint distribution.
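
Here is a minimal end-to-end sketch of what that looks like, roughly in the style of the Edward tutorials. The toy model and shapes are just illustrative, ed.KLqp is one concrete Inference subclass (it maximizes the ELBO), and exact argument names (loc/scale vs. mu/sigma) depend on your Edward version:

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# toy observed data
x_train = np.random.randn(50, 1).astype(np.float32)

# model: the random variables x, z, beta together define p(x, z, beta)
beta = Normal(loc=tf.zeros([1, 1]), scale=tf.ones([1, 1]))   # prior p(beta)
z = Normal(loc=tf.zeros([50, 1]), scale=tf.ones([50, 1]))    # prior p(z)
x = Normal(loc=z * beta, scale=tf.ones([50, 1]))             # likelihood p(x | z, beta)

# variational family q(z, beta; lambda), with free parameters lambda
qbeta = Normal(loc=tf.Variable(tf.zeros([1, 1])),
               scale=tf.nn.softplus(tf.Variable(tf.zeros([1, 1]))))
qz = Normal(loc=tf.Variable(tf.zeros([50, 1])),
            scale=tf.nn.softplus(tf.Variable(tf.zeros([50, 1]))))

# infer p(z, beta | x = x_train) by maximizing the ELBO
inference = ed.KLqp({z: qz, beta: qbeta}, data={x: x_train})
inference.run(n_iter=500)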


#4

Ah got it! thanks!

The point that initially got me confused is that I thought x, z were method argument names. Since I was reading the inference documentation first, there was no context for them. I don’t know if it makes sense to give a complete example in the inference API doc, from defining x, z, beta, qz, qbeta to the inference line, although the concept becomes clear once the user reads the other documentation. But thanks, Dustin!