Is the expectation propagation example running in distributed TensorFlow?

In the documentation, the expectation propagation example runs two subsets of samples, z1 and z2, sequentially in a loop. Is the distribution of z1 and z2 across a GPU cluster automatic, or do you need to add something manually to make them run in parallel? Otherwise, there would be no point in breaking the problem into two subsets to begin with.

It depends on what you mean by distributed. Edward isn’t currently supported in multi-machine environments. Running on multiple GPUs is automatic. If you want to schedule the updates on different devices, try device placements during the graph-construction stage (i.e., in each inference object’s initialize).
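A minimal, untested sketch of what such a device placement could look like, using TensorFlow's `tf.device` around graph construction. The variable names (`z1`, `z2`, `qz1`, `qz2`, `x`, `x_train`) and the choice of `ed.KLqp` here are illustrative assumptions, not the exact code from the expectation propagation example:

```python
import tensorflow as tf
import edward as ed

# Assumed to be defined by your model: latent variables z1, z2,
# approximating families qz1, qz2, observed variable x, data x_train.

# Pin the graph for each inference update to a different GPU.
with tf.device('/gpu:0'):
    inference_z1 = ed.KLqp({z1: qz1}, data={x: x_train, z2: qz2})
    inference_z1.initialize()

with tf.device('/gpu:1'):
    inference_z2 = ed.KLqp({z2: qz2}, data={x: x_train, z1: qz1})
    inference_z2.initialize()

sess = ed.get_session()
tf.global_variables_initializer().run()

# The update calls still run sequentially in this loop, but the ops
# they execute live on the devices registered above.
for _ in range(1000):
    inference_z1.update()
    inference_z2.update()
```

Note that placing the ops on different devices does not by itself make the two `update()` calls concurrent; it only controls where each update's ops execute.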