Variational parameters on prior and likelihood variances

Hi there,

I’ve been playing around with Edward and am finding the illustrative examples great.
In the regression tutorial, Edward – Supervised Learning (Regression), it states:

assume $\sigma_w^2, \sigma_b^2$ are known prior variances and $\sigma_y^2$ is a known likelihood variance. The mean of the likelihood is given by a linear transformation of the inputs $\mathbf{x}_n$.

If one wanted to learn these, what kind of considerations would be needed?

For instance, for the prior variances, is it sufficient to put another ed.Normal() on the prior variance, like so:

latent_vars = {W: qW,
               b: qb,
               Wsig: qWsig,
               bsig: qbsig}

and so on for additional latent variable definitions?

Not sure if you have solved this already, but you need a prior with positive support. A well-behaved one for training is the log-normal prior. Also see A toy normal model failed (klqp) and why? for some discussion of priors.
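
A minimal sketch of what that can look like, assuming the Edward 1.x loc/scale API: put a Normal prior on the log of the weight scale and exponentiate it wherever a positive scale is needed, which is exactly a log-normal prior on the scale itself. Names like `log_sigma_w`, `qlog_sigma_w` and the size `D` are just illustrative, not from the tutorial.

```python
import tensorflow as tf
from edward.models import Normal

D = 5  # number of features, illustrative value

# Log-normal prior on the weight scale: Normal prior on log(sigma_w),
# exponentiated wherever a (positive) scale is needed.
log_sigma_w = Normal(loc=tf.zeros(1), scale=tf.ones(1))
W = Normal(loc=tf.zeros(D), scale=tf.exp(log_sigma_w) * tf.ones(D))

# Variational approximations; the one for the scale also lives on the
# unconstrained log scale, so a plain Normal works fine.
qlog_sigma_w = Normal(loc=tf.Variable(tf.zeros(1)),
                      scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))
qW = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
```

The pair `log_sigma_w: qlog_sigma_w` then goes into `latent_vars` next to `W: qW`, and similarly for the bias and likelihood scales.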

Cheers,

Yep, found out that was the problem, along with incorrectly specifying the prior inside the likelihood model.
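
For anyone landing here later, here is a rough end-to-end sketch of one way to learn the likelihood scale, with the log-scale declared outside the likelihood and only its exponential used inside it. Edward 1.x loc/scale API assumed; the names, toy sizes, and random data are purely illustrative, not the actual model from this thread.

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

N, D = 40, 5  # toy sizes, illustrative only
X_train = np.random.randn(N, D).astype(np.float32)
y_train = np.random.randn(N).astype(np.float32)

X = tf.placeholder(tf.float32, [N, D])
W = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))

# The log of the noise scale is a latent variable in its own right,
# declared outside the likelihood; y only consumes its exponential.
log_sigma_y = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, W) + b, scale=tf.exp(log_sigma_y) * tf.ones(N))

# Variational approximations.
qW = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))
qlog_sigma_y = Normal(loc=tf.Variable(tf.zeros(1)),
                      scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

inference = ed.KLqp({W: qW, b: qb, log_sigma_y: qlog_sigma_y},
                    data={X: X_train, y: y_train})
inference.run(n_iter=1000)
```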

It’s working nicely now.
