Inference from within the model with prediction result


#1

Hi there!
New Edward user here.

I’ve just gone through the Bayesian linear regression tutorial ( http://edwardlib.org/tutorials/supervised-regression ), and at the end there is a method called visualise(). Inside it is this line:

output = inputs * w_samples[ns] + b_samples[ns]

This appears to replicate, in Python, the TensorFlow code that defines the model:

y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

My question is: what is the recommended way to use the already-defined TensorFlow model to make a prediction at a given test point (X_test in this case), as opposed to copying all the parameters out and replicating the model in Python, which seems unnecessary?
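For context, the manual approach the tutorial’s visualise() takes can be sketched in plain NumPy (the sample arrays and shapes below are hypothetical stand-ins for what qw.sample(...).eval() and qb.sample(...).eval() would give you; this is not Edward API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of the weight and bias
# (in the tutorial these come from the variational distributions).
w_samples = rng.normal(2.0, 0.1, size=10)  # 10 samples of w
b_samples = rng.normal(0.5, 0.1, size=10)  # 10 samples of b

inputs = np.linspace(-3.0, 3.0, 50)  # 1-D test inputs

# One predictive line per posterior sample, mirroring
#   output = inputs * w_samples[ns] + b_samples[ns]
outputs = np.stack([inputs * w + b for w, b in zip(w_samples, b_samples)])
print(outputs.shape)  # one curve per sample
```

This is exactly the "copy the parameters out and redo the arithmetic" pattern the question is asking how to avoid.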

ed.evaluate() seems related, but it returns an error metric for model checking rather than the predictions themselves.

thanks!

EDIT: Is it simply:

y.eval( feed_dict={X: X_test})

?


#2

Quick follow-up in case anyone else has trouble getting predictions:

Simply using y.eval(…) is incorrect, because y is defined in terms of the priors over w and b — one needs to use the fitted variational parameters!

This existing post (How to obtain prediction results) contains the solution, which is essentially the same as above except it uses ed.copy to swap the priors for their variational posteriors, with an additional feed_dict entry for the latent variables.
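As I understand it, the ed.copy approach builds a posterior-predictive node, roughly y_post = ed.copy(y, {w: qw, b: qb}), and then evaluates y_post with feed_dict={X: X_test}. Since Edward itself may not be installable (it targets TensorFlow 1.x), here is a NumPy sketch of what that substitution amounts to numerically: sample the latents from their variational posteriors (the loc/scale values below are hypothetical fitted values) and push each sample through the model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fitted variational parameters, i.e. the loc/scale
# of qw and qb after inference (values are assumptions).
qw_loc, qw_scale = 2.0, 0.05
qb_loc, qb_scale = 0.5, 0.05

X_test = np.array([-1.0, 0.0, 1.0])

S = 1000  # number of posterior samples
w = rng.normal(qw_loc, qw_scale, size=S)  # samples from q(w)
b = rng.normal(qb_loc, qb_scale, size=S)  # samples from q(b)

# Push every posterior sample through the model, then average
# to get the posterior-predictive mean at each test point.
preds = X_test[None, :] * w[:, None] + b[:, None]  # shape (S, 3)
y_mean = preds.mean(axis=0)
print(y_mean)  # close to [-1.5, 0.5, 2.5] for these assumed values
```

The key difference from the naive y.eval(…) call is that w and b are drawn from the fitted q distributions, not the priors.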