Several problems when implementing Bayes by Backprop using Edward

I’m a newbie to DL. I heard about Edward after using PyMC3 for a while, and I decided to switch to Edward for my project. Now, while reading the `ed.klqp` code and trying to implement Bayes by Backprop with Edward, I have run into some problems and would appreciate help.

  1. In the docstring of `klqp`, it says:

> based on the reparameterization trick [@kingma2014auto].

However, following the code, I can't see the trick, which requires sampling from a standard normal. The code still seems to sample values from q(z), not from N(0, I). Is there something I have missed? (See the first sketch after this list for what I mean.)

  2. In the context of mini-batches, if my understanding is right, klqp needs to compute the full log q(z) - log p(z) on every iteration. But in Bayes by Backprop, they use the mean loss 1/M * (log q(z) - log p(z)) per batch, where M is the number of batches. Is there a performance difference between the two algorithms? (See the second sketch below.)
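To make the first question concrete, here is how I understand the reparameterization trick, as a minimal sketch in plain TensorFlow (the names `qmu`, `qrho`, etc. are mine, not Edward's):

```python
import tensorflow as tf

# Variational parameters of q(z) = N(mu, sigma^2), to be learned by gradient descent.
qmu = tf.Variable(0.0)
qrho = tf.Variable(-3.0)
qsigma = tf.nn.softplus(qrho)  # softplus keeps the scale positive

# Reparameterization trick: draw eps from a fixed N(0, 1) and transform it,
# so the sample z = mu + sigma * eps is differentiable w.r.t. mu and rho.
eps = tf.random_normal(shape=[])
z = qmu + qsigma * eps

# Gradients of any loss built from z now flow back into qmu and qrho.
toy_loss = tf.square(z - 1.0)  # toy objective, just for illustration
grads = tf.gradients(toy_loss, [qmu, qrho])
```

Or does `q(z).sample()` already apply this transform internally, and that is the part I have missed?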
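And here is a sketch of the mini-batch scaling I mean in the second question (again plain TensorFlow; `log_q`, `log_prior`, and `log_lik` are my own stand-ins for the three terms of the objective):

```python
import tensorflow as tf

M = 10  # number of mini-batches per epoch

# Stand-in scalars; in a real model these would come from the variational
# posterior, the prior, and the likelihood of the current mini-batch.
log_q = tf.constant(-1.0)      # log q(z) at the sampled z
log_prior = tf.constant(-2.0)  # log p(z) at the sampled z
log_lik = tf.constant(-5.0)    # log p(D_i | z) on mini-batch i

# Bayes by Backprop: spread the KL term over the M batches of an epoch,
# so each batch pays 1/M of the complexity cost.
bbb_loss = (log_q - log_prior) / M - log_lik

# What I read klqp as doing: charge the full KL term on every batch,
# with the mini-batch likelihood rescaled up to the full data set.
full_kl_loss = (log_q - log_prior) - M * log_lik
```

If I multiply the Bayes by Backprop loss by M, the two objectives look the same up to a constant factor, so maybe the difference only amounts to a rescaled learning rate?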