Why is the model inverted when using variational inference?

Take, for example, the code here:

the model is defined as:

z = Normal(loc=tf.zeros([FLAGS.M, FLAGS.d]),
           scale=tf.ones([FLAGS.M, FLAGS.d]))
logits = generative_network(z)
x = Bernoulli(logits=logits)

and the inference as:

x_ph = tf.placeholder(tf.int32, [FLAGS.M, 28 * 28])
loc, scale = inference_network(tf.cast(x_ph, tf.float32))
qz = Normal(loc=loc, scale=scale)

You can see that the model runs z → logits → x,
while the inference runs in the opposite direction: x_ph → (loc, scale) → qz.
Why does the inference network go the reverse way?
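To make the two directions concrete, here is a minimal NumPy sketch of the same structure, with single linear layers standing in for generative_network and inference_network (the sizes and weights are toy placeholders, not the values from the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, D = 4, 2, 8  # batch size, latent dim, data dim (toy sizes)

# Toy stand-ins for the two networks: one linear layer each.
W_gen = rng.normal(size=(d, D))        # plays the role of generative_network
W_inf_loc = rng.normal(size=(D, d))    # inference_network, loc head
W_inf_scale = rng.normal(size=(D, d))  # inference_network, scale head

# Generative direction: z -> logits -> x
z = rng.normal(size=(M, d))            # z ~ Normal(0, 1)
logits = z @ W_gen
probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid turns logits into probabilities
x = (rng.uniform(size=probs.shape) < probs).astype(int)  # x ~ Bernoulli(logits)

# Inference direction: x -> (loc, scale) -> qz
loc = x @ W_inf_loc
scale = np.exp(x @ W_inf_scale)        # exp keeps the scale positive
qz = loc + scale * rng.normal(size=loc.shape)  # a sample from qz = Normal(loc, scale)

print(x.shape, qz.shape)
```

The point the code makes visible: the generative model maps latent z to data x, while the inference network maps observed x back to the parameters of a distribution over z.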