Hi there,
I’ve been playing around with Edward and am finding it great, thanks to the illustrative examples.
In the regression tutorial, Edward – Supervised Learning (Regression), it states:
assume $\sigma_w^2, \sigma_b^2$ are known prior variances and $\sigma_y^2$ is a known likelihood variance. The mean of the likelihood is given by a linear transformation of the inputs $\mathbf{x}_n$.
If one wanted to learn these, what kind of considerations would be needed?
For instance, for the prior variances, is it sufficient to place another ed.Normal() on the prior variance, like so:
latent_vars={W: qW,
             b: qb,
             Wsig: qWsig,
             bsig: qbsig}
and so on for any additional latent variable definitions?