Factor analysis example clarification

Hi,

I have a question regarding the Factor Analysis example.

In particular, I do not understand what the line

inference_m = ed.MAP(data={x: x_train, z: qz.params[inference_e.t]})

is doing. For completeness, the relevant code is:

import tensorflow as tf
import edward as ed
from edward.models import Bernoulli, Empirical, Normal

# FLAGS, T, x_train, and generative_network are defined earlier in the example.
z = Normal(loc=tf.zeros([FLAGS.N, FLAGS.d]),
           scale=tf.ones([FLAGS.N, FLAGS.d]))  # prior over the latent factors
logits = generative_network(z)  # neural net mapping z to Bernoulli logits
x = Bernoulli(logits=logits)
qz = Empirical(params=tf.get_variable("qz/params", [T, FLAGS.N, FLAGS.d]))  # buffer of T samples
inference_e = ed.HMC({z: qz}, data={x: x_train})  # E-step: sample z by HMC
inference_m = ed.MAP(data={x: x_train, z: qz.params[inference_e.t]})  # M-step

Am I correct in assuming:
We have an empirical approximation qz over z, and z is the only latent variable in the model. However, there are also a number of “ordinary” variables, namely the weights of generative_network, which transforms z, and those weights need to be updated as well. They can be viewed as having a PointMass variational distribution. So, does ed.MAP optimize those parameters, i.e. the weights of the network? I sketch my reading of this right below.
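If that reading is correct, I would expect the M-step to be roughly equivalent to the following hand-rolled version. This is only my sketch, not Edward's actual internals, and it assumes generative_network reuses its weight variables when called a second time:

import tensorflow as tf

# z is clamped to the current HMC state, so the only free parameters
# are the network's tf.Variables.
z_sample = qz.params[inference_e.t]      # state stored at step t, shape [N, d]
logits_t = generative_network(z_sample)  # assumes the net reuses its weights
# Negative Bernoulli log-likelihood of the data given the fixed sample;
# the log-prior of z is constant w.r.t. the weights, so it drops out.
neg_log_lik = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.cast(x_train, tf.float32), logits=logits_t))
train_op = tf.train.AdamOptimizer(1e-2).minimize(neg_log_lik)

Is that approximately what ed.MAP builds under the hood here?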

Now, when we call

inference_m = ed.MAP(data={x: x_train, z: qz.params[inference_e.t]})

what does

z: qz.params[inference_e.t]

do? Does it extract the empirical distribution of z (the posterior as inferred by HMC) and use it in the calculations?
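From the shapes, I would read that indexing as a single slice of the sample buffer (my own annotation; current_state is just a name I made up):

# qz.params has shape [T, N, d]: one stored z-state per HMC iteration.
# inference_e.t is the inference's iteration counter, so indexing with it
# picks out a single [N, d] state from the buffer.
current_state = qz.params[inference_e.t]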
Also, what happens behind the scenes when MAP is called without any latent_vars, only data?
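For contrast, here is how I would write an M-step with the point mass spelled out explicitly, on a toy linear model. Everything below (names, shapes, data) is made up by me purely to illustrate the pattern:

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Bernoulli, Normal, PointMass

# Toy "network": a single weight matrix w, treated as an explicit
# random variable with a prior.
z_fixed = tf.constant(np.random.randn(100, 10), dtype=tf.float32)  # stand-in for a fixed z sample
w = Normal(loc=tf.zeros([10, 50]), scale=tf.ones([10, 50]))
x_toy = Bernoulli(logits=tf.matmul(z_fixed, w))

# Explicit point-mass approximation; MAP optimizes qw's parameters.
qw = PointMass(params=tf.get_variable("qw/params", [10, 50]))
x_data = np.random.binomial(1, 0.5, size=(100, 50)).astype(np.int32)
inference = ed.MAP({w: qw}, data={x_toy: x_data})
inference.run(n_iter=500)

Is calling MAP with only data effectively the same optimization, just performed directly over the network's trainable tf.Variables instead of through an explicit PointMass?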