Hi, thank you for the work on this great package! I have a question regarding the inference API.

According to the inference tutorial, the inference API is as simple as running one line: `inference = ed.Inference({z: qz, beta: qbeta}, {x: x_train})`

To my knowledge, to optimize the ELBO, variational inference requires defining the variational family q(z; lambda) and the joint likelihood p(x, z; theta). Does Edward implicitly assume that both are normal distributions? If so, can we override that assumption and pass, for example, a non-Gaussian joint likelihood p(x, z; theta) instead?

In English, that inference line says, "Infer z (using qz) and beta (using qbeta), given that we observe x = x_train". In math, this is q(z, beta; parameters) \approx p(z, beta | x = x_train). The collection of model variables x, z, beta defines your joint distribution.
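To illustrate that neither the variational family nor the joint likelihood has to be Gaussian, here is a minimal NumPy sketch of the Monte Carlo ELBO estimate behind that kind of inference line. This is not Edward's API; the model (standard normal prior on z, Bernoulli likelihood for x) and all function names are made up for illustration:

```python
import numpy as np

# Toy model: z ~ Normal(0, 1), x | z ~ Bernoulli(sigmoid(z)).
# The likelihood here is Bernoulli, i.e. a non-Gaussian joint p(x, z).
rng = np.random.default_rng(0)

def log_p_joint(x, z):
    # log p(x, z) = log p(z) + sum_i log p(x_i | z)
    log_prior = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    logits = z  # sigmoid(z) parameterizes each Bernoulli
    log_lik = np.sum(x * logits - np.log1p(np.exp(logits)))
    return log_prior + log_lik

def log_q(z, mu, sigma):
    # Gaussian variational family q(z; mu, sigma) -- a modeling
    # choice, not something forced on us by the framework.
    return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def elbo_estimate(x, mu, sigma, n_samples=5000):
    # Monte Carlo estimate of ELBO = E_q[log p(x, z) - log q(z)]
    zs = mu + sigma * rng.standard_normal(n_samples)
    vals = [log_p_joint(x, z) - log_q(z, mu, sigma) for z in zs]
    return float(np.mean(vals))

x_train = np.array([1, 0, 1, 1])
print(elbo_estimate(x_train, mu=0.0, sigma=1.0))
```

Swapping in a different `log_p_joint` or `log_q` changes the model or the variational family without touching the ELBO machinery, which is the same separation the inference line expresses: the model variables define p(x, z, beta), and qz, qbeta are whatever approximating distributions you choose.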

The point that initially confused me is that I thought x and z were method argument names. Since I read the inference documentation first, there was no context for them. I don't know if it makes sense to give a complete example in the inference API docs, from defining x, z, beta, qz, and qbeta through to the inference line, although the concept does become clear once the user reads the other documentation. But thanks, dustin!