Clarification on expectation-maximization

I'm finally able to fit some models on synthetic data, and the inference is blazing fast and spot on. Seriously stellar work putting this package together!

Minor question about the expectation-maximization procedure.

In the inference-compositionality tutorial, the following expectation-maximization (EM) algorithm is given:

import edward as ed
import tensorflow as tf
from edward.models import Categorical, PointMass

qbeta = PointMass(params=tf.Variable(tf.zeros([K, D])))
qz = Categorical(logits=tf.Variable(tf.zeros([N, K])))

# E-step: update q(z) with beta clamped to the point mass qbeta
inference_e = ed.VariationalInference({z: qz}, data={x: x_data, beta: qbeta})
# M-step: update the point mass qbeta with z bound to qz
inference_m = ed.MAP({beta: qbeta}, data={x: x_data, z: qz})
...
for _ in range(10000):
  inference_e.update()
  inference_m.update()

Please correct me if I am wrong, but wouldn't every update on inference_e produce a full posterior distribution rather than an expectation? This seems more like stochastic EM than actual EM, since the expectations are never calculated, right?
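
For instance, after each E-step qz appears to hold the full per-point posterior over components, not an expectation. Writing z_logits for the tf.Variable passed into qz above, something like the following should recover it (a rough sketch of what I mean):

sess = ed.get_session()
# full posterior q(z_n = k) for every data point, shape [N, K]
responsibilities = sess.run(tf.nn.softmax(z_logits))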

If that is correct, how can the code be modified so that expectations are passed into the M-step?
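
To make the question concrete, here is a rough sketch of the kind of M-step I have in mind, assuming a unit-variance Gaussian mixture with x_data of shape [N, D], and writing beta_var for the tf.Variable wrapped in qbeta (the names comp_log_liks, expected_log_lik, and m_step are mine). Since z is discrete, the expectation over z can be computed exactly as a sum over the K components weighted by the E-step responsibilities, instead of plugging in a sample:

probs = tf.nn.softmax(z_logits)  # [N, K] responsibilities from the E-step
# log-likelihood of every data point under every mixture component, [N, K]
comp_log_liks = tf.stack(
    [tf.reduce_sum(
         tf.distributions.Normal(loc=beta_var[k], scale=1.0).log_prob(x_data),
         axis=1)
     for k in range(K)],
    axis=1)
# exact E_{q(z)}[log p(x, z | beta)], dropping terms constant in beta
expected_log_lik = tf.reduce_sum(tf.stop_gradient(probs) * comp_log_liks)
m_step = tf.train.AdamOptimizer(0.01).minimize(-expected_log_lik,
                                               var_list=[beta_var])

Is hand-rolling the objective like this the intended route, or is there a built-in way to pass the expectations through the compositional API?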

Thanks!!!