Some confusion about inferring local latent variables from local data


Hi Dustin

I have some confusion about inferring local latent variables with Edward's inference. Say we have a different local latent variable z_n for each group of data points x_n, and all z_n share the same prior distribution governed by a global latent variable \beta. If I input all data points into Inference({z: qz, \beta: q\beta}, {x: x_data}), will each z_n be updated according to its x_n? Or do I need to create a separate inference for each z_n? I think it should be the former, but I'm not familiar with TensorFlow and didn't find related documentation.
Sorry to bother you, and thanks in advance for your help!


Thanks for asking.

The semantics of Inference({z: qz, beta: qbeta}, {x: x_data}) is understood verbatim as "infer the posterior p(z, beta | x = x_data) using the approximating distribution q(z, beta)". This holds regardless of whether the latent variables are local or global. To answer the question directly: if you input all data points x_n into inference, all z_n's will be updated.
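For concreteness, here is a minimal sketch of what that looks like in Edward 1.x (which runs on TensorFlow 1.x). The model, sizes, and variable names are toy assumptions for illustration: a Normal-Normal hierarchy where the global beta is the shared prior mean of the local z_n's.

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

N = 50  # toy number of data points (assumption)
x_data = np.random.randn(N).astype(np.float32)

# Global latent variable.
beta = Normal(loc=0.0, scale=1.0)
# Local latent variables, one per data point, sharing beta as prior mean.
z = Normal(loc=tf.ones(N) * beta, scale=tf.ones(N))
# Likelihood.
x = Normal(loc=z, scale=tf.ones(N))

# Approximating families: one q(beta) and one q(z_n) per data point.
qbeta = Normal(loc=tf.Variable(0.0),
               scale=tf.nn.softplus(tf.Variable(0.0)))
qz = Normal(loc=tf.Variable(tf.zeros(N)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(N))))

# A single Inference object updates all z_n's and beta jointly.
inference = ed.KLqp({z: qz, beta: qbeta}, data={x: x_data})
inference.run(n_iter=500)
```

Because qz holds one variational parameter per z_n, running this one inference updates every local variable; no per-z_n inference objects are needed.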

How inference updates latent variables depends on the choice of algorithm. For example, if you only want to update a subset of local variables at a time according to minibatches of data, see Edward’s data subsampling guide.


That’s very helpful, thank you very much!