I’m new to deep learning. I heard about Edward while using PyMC3 for a while, and I decided to switch to Edward for my project. Now, while reading the `ed.klqp` code and trying to implement Bayes by Backprop with Edward, I have run into some problems and am seeking help.
- The docstring of `klqp` says it is
based on the reparameterization trick [@kingma2014auto].
However, following the code, I can’t find the step where the trick samples from a standard normal. The code still seems to sample values from q(z) directly, not from N(0, I). Is there something I have missed?
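For reference, here is a minimal NumPy sketch of what I understand the reparameterization trick to be (this is illustrative, not Edward's actual implementation; `mu` and `sigma` stand in for variational parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.5, 0.5  # hypothetical variational parameters of q(z)

# Direct sampling from q(z) = N(mu, sigma^2)
z_direct = rng.normal(mu, sigma, size=100_000)

# Reparameterized sampling: draw eps ~ N(0, I), then transform.
# z is now a deterministic, differentiable function of (mu, sigma),
# so gradients can flow through the sample.
eps = rng.normal(0.0, 1.0, size=100_000)
z_reparam = mu + sigma * eps

# Both produce samples from the same distribution
print(z_direct.mean(), z_reparam.mean())
```

My understanding is that libraries can hide this transformation inside the distribution's `sample` method, so the trick may not be visible at the call site, which is part of what confuses me.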
- In the minibatch setting, if my understanding is correct, `klqp` needs to compute the full
log q(z) - log p(z)
on every iteration. But in Bayes by Backprop, the KL term is scaled per batch as
(1/M) * (log q(z) - log p(z)),
where M is the number of minibatches. Is there a performance difference between the two approaches?
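To make my question concrete, here is a toy sketch of the scaling I mean (all names and the Gaussian likelihood are made up for illustration; this is not Edward code):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1000            # hypothetical dataset size
M = 10              # number of minibatches
data = rng.normal(0.0, 1.0, size=N)

def log_lik(batch, z):
    # toy Gaussian log-likelihood with unit variance and mean z
    return -0.5 * np.sum((batch - z) ** 2)

z = 0.1                  # a single sample from q(z)
kl_term = 2.0            # placeholder value for log q(z) - log p(z)

# Full-data objective: the KL term appears once per pass over the data.
full = kl_term - log_lik(data, z)

# Minibatch objective: each batch carries 1/M of the KL term,
# so summing over all M batches recovers the full objective.
batches = data.reshape(M, N // M)
mini = sum(kl_term / M - log_lik(b, z) for b in batches)

print(np.isclose(full, mini))
```

So over a full epoch the two seem to sum to the same objective, but per-step the gradients differ, and I’m unsure whether that changes convergence behavior in practice.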