I have been working on a fairly complex model involving a 2-state HMM. A Bernoulli distribution generates the random variable that serves as the observable of the hidden states. When the Bernoulli's p value is very small, around 0.001, the model consistently underestimates one of the logits and overestimates the other. If I bump the p value up by 10x, to 0.01, I get much more accurate estimates. These numbers are later exponentiated, so a small error makes a big difference.

This is clearly a numerical precision challenge. What's the Edward way of attacking it? Does increasing the sample count by orders of magnitude help? I tried a number of sample counts and don't see any improvement. Why doesn't a large sample count make a notable difference?
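Part of this may be plain Monte Carlo variance rather than anything Edward-specific: an estimate of p from N Bernoulli draws has relative error on the order of 1/sqrt(Np), and for small p the logit inherits roughly that same error, so 10x more samples only buys about a 3x tighter logit. A quick illustration (pure NumPy, not Edward; the numbers are for my p = 0.001 case):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.001
true_logit = np.log(p / (1 - p))  # about -6.9

for n in [10_000, 100_000, 1_000_000]:
    # 200 independent experiments: estimate p from n Bernoulli draws each
    p_hat = rng.binomial(n, p, size=200) / n
    p_hat = np.clip(p_hat, 1e-8, 1 - 1e-8)  # guard against zero successes
    logit_hat = np.log(p_hat / (1 - p_hat))
    print(f"n={n:>9,}  logit std = {logit_hat.std():.3f}  "
          f"(1/sqrt(n*p) = {1 / np.sqrt(n * p):.3f})")
```

The spread of the logit estimate shrinks only with the square root of the sample count, which would explain why bumping the count a few times shows no visible improvement.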

Then I played with the scale of the prior (see the declaration below; I tried the range 0.5 to 7.0). It made no difference at all in the output.
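I suppose that is expected: once there is a reasonable amount of data, the likelihood swamps a Normal prior, so widening its scale from 0.5 to 7.0 barely moves the posterior. A toy conjugate Normal-Normal calculation (illustrative only, not the HMM):

```python
import numpy as np

# Conjugate Normal prior / Normal likelihood with known noise sigma:
# the posterior mean is a precision-weighted average of prior and data.
data_mean, n, sigma = 2.0, 500, 1.0

for prior_scale in [0.5, 7.0]:
    prior_prec = 1.0 / prior_scale**2  # precision of Normal(0, prior_scale)
    like_prec = n / sigma**2           # precision contributed by n observations
    post_mean = (like_prec * data_mean) / (prior_prec + like_prec)
    print(f"prior scale {prior_scale}: posterior mean = {post_mean:.4f}")
```

Both scales land within about 1% of the data mean, so the prior scale is not the knob that matters here.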

What else is there to try?

The parameters are declared as Normal:

`alpha = Normal(loc=tf.zeros([FLAGS.S], dtype=tf.float32), scale=7.0 * tf.ones([FLAGS.S], dtype=tf.float32))`

where `FLAGS.S = 2`.
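For reference, the KLqp setup is along these lines (a simplified sketch: the prior as above, with a toy Normal likelihood standing in for the actual HMM observable). My understanding is that `n_samples` only controls the Monte Carlo estimate of the ELBO gradient, so raising it reduces gradient noise rather than any bias from the mean-field variational family, which might be why it doesn't move the estimates:

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Same prior as in the post; x is a placeholder likelihood, not the HMM.
alpha = Normal(loc=tf.zeros([2]), scale=7.0 * tf.ones([2]))
x = Normal(loc=alpha, scale=tf.ones([2]), sample_shape=50)
x_data = np.random.RandomState(0).randn(50, 2).astype(np.float32)

# Mean-field Normal approximation to the posterior over alpha.
qalpha = Normal(loc=tf.Variable(tf.zeros([2])),
                scale=tf.nn.softplus(tf.Variable(tf.zeros([2]))))

inference = ed.KLqp({alpha: qalpha}, data={x: x_data})
# n_samples: Monte Carlo samples used per ELBO-gradient estimate.
inference.run(n_samples=10, n_iter=1000)
```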

I am using ed.KLqp for the inference, if that helps. Is HMC supposed to be a more accurate method?
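This is the kind of swap I have in mind: replacing KLqp with `ed.HMC` and an `Empirical` approximation that holds the chain of posterior samples (again a simplified sketch, with a toy likelihood in place of the HMM observable):

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Empirical, Normal

# Same prior as in the post; x is a placeholder likelihood, not the HMM.
alpha = Normal(loc=tf.zeros([2]), scale=7.0 * tf.ones([2]))
x = Normal(loc=alpha, scale=tf.ones([2]), sample_shape=50)
x_data = np.random.RandomState(0).randn(50, 2).astype(np.float32)

T = 5000  # length of the HMC chain
qalpha = Empirical(params=tf.Variable(tf.zeros([T, 2])))

inference = ed.HMC({alpha: qalpha}, data={x: x_data})
inference.run(step_size=0.05, n_steps=5)
```

Unlike KLqp, HMC has no variational-family bias, so in principle it converges to the true posterior given enough samples, though with p around 0.001 I would still expect to need a long chain.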