Why is there a significant difference in predictions between these two methods?

I am using Bayesian linear regression, y = beta * X + alpha, for prediction. After inference, I tried two methods:
(1) y_post.eval(feed_dict={X_in: X4pre})
(2) First evaluating the posterior means of beta and alpha with beta_post.mean().eval() and alpha_post.mean().eval(), then plugging them into y = beta * X + alpha to compute the mean of y.
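
For reference, here is roughly what I am doing, as a minimal sketch following the tutorial's model. The toy data, the sizes (N, D, T), and the step_size are stand-ins for my real setup; X_in, X4pre, beta_post, alpha_post, and y_post match the names above.

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Empirical, Normal

N, D, T = 40, 5, 5000  # data points, features, SGHMC samples (toy values)
X_train = np.random.randn(N, D).astype(np.float32)
y_train = (X_train.dot(np.ones(D)) + 1.0).astype(np.float32)
X4pre = np.random.randn(N, D).astype(np.float32)  # stand-in prediction input

# Model: y = beta * X + alpha with unit observation noise.
X_in = tf.placeholder(tf.float32, [N, D])
beta = Normal(loc=tf.zeros(D), scale=tf.ones(D))
alpha = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X_in, beta) + alpha, scale=tf.ones(N))

# SGHMC inference with Empirical approximations holding T samples each.
beta_post = Empirical(params=tf.Variable(tf.zeros([T, D])))
alpha_post = Empirical(params=tf.Variable(tf.zeros([T, 1])))
inference = ed.SGHMC({beta: beta_post, alpha: alpha_post},
                     data={X_in: X_train, y: y_train})
inference.run(step_size=1e-3, n_iter=T)

# Method (1): posterior predictive via ed.copy, then evaluate.
y_post = ed.copy(y, {beta: beta_post, alpha: alpha_post})
pred1 = y_post.eval(feed_dict={X_in: X4pre})

# Method (2): plug the posterior means into the regression equation.
beta_mean = beta_post.mean().eval()
alpha_mean = alpha_post.mean().eval()
pred2 = X4pre.dot(beta_mean) + alpha_mean
```

Note that each y_post.eval() call is a single draw from the posterior predictive, so for method (1) I average it over many calls to estimate the mean.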
The predicted means of y are quite close when I run the tutorial example: http://edwardlib.org/tutorials/supervised-regression
But when I feed the model real, noisy data from the financial market, the predicted means of y differ substantially between the two methods. Does anyone have a clue how to solve this?

I found the problem. I am using SGHMC for inference. It seems that Edward does not apply a 'burn-in' by default in y_post.eval(), beta_post.mean(), and alpha_post.mean(). If I output all the samples of beta_post and alpha_post and drop the first few hundred samples, the two methods give exactly the same results, but both differ from y_post.eval(). It seems there is some problem when using y_post.eval().
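
For anyone hitting the same thing, here is a sketch of the manual burn-in, reusing the names from the code above; n_burn = 500 is just an illustrative cutoff for "the initial hundreds of samples":

```python
n_burn = 500  # illustrative burn-in cutoff

# All T SGHMC samples stored in the Empirical approximations.
beta_samples = beta_post.params.eval()    # shape [T, D]
alpha_samples = alpha_post.params.eval()  # shape [T, 1]

# Posterior means with the burn-in period discarded.
beta_mean = beta_samples[n_burn:].mean(axis=0)
alpha_mean = alpha_samples[n_burn:].mean(axis=0)
pred_burned = X4pre.dot(beta_mean) + alpha_mean
```

I believe this is why y_post.eval() still differs: sampling from an Empirical draws uniformly over all T stored samples, burn-in included.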
