I’m trying to implement a simple GP regressor in Edward to get the hang of things and understand the Edward API.
My issue is that I have no idea how to get predictions from the inferred posterior at points that have not been observed.
I understand that formulating the posterior predictive is just
post = ed.copy(y, {f: qf})
(which is amazing)
But I’m not too sure how to get predictions from it.
Fetching post from a TensorFlow session returns one draw from the posterior predictive. Making predictions typically means taking the mean of the posterior predictive distribution. You can do this by writing
import numpy as np

post_samples = []
for _ in range(100):
    post_samples.append(sess.run(post))
pred_mean = np.mean(post_samples, axis=0)
I guess Dustin did not tell you to use post.mean() because it requires that the mean be analytically tractable. Methods of RandomVariables do not estimate quantities via e.g. Monte Carlo, by design [1].
While the mean is analytically tractable in the classical Gaussian-likelihood case of GP regression, your example, strictly speaking, does not assume that: everything you wrote still holds for a Poisson likelihood, say. The general answer is hence to sample from the posterior predictive and compute the mean of the samples.
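To see why sampling is the general tool, here is a self-contained sketch in plain NumPy (no Edward; the Gaussian latent and Poisson likelihood are made up for illustration). In this toy model the predictive mean happens to have a closed form, so we can check that the Monte Carlo estimate recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over a latent GP value f at one test point
# (a Gaussian here, purely for illustration).
mu, sigma = 1.0, 0.1
f_draws = rng.normal(loc=mu, scale=sigma, size=100_000)

# Poisson likelihood with log link: one predictive draw per latent draw.
y_draws = rng.poisson(lam=np.exp(f_draws))

# Monte Carlo estimate of the posterior predictive mean.
mc_mean = y_draws.mean()

# For this toy model the truth is known: E[y] = E[exp(f)] = exp(mu + sigma^2 / 2).
true_mean = np.exp(mu + sigma**2 / 2)
```

The same sampling loop works unchanged for any likelihood, which is exactly the point: mc_mean needs no analytic formula, while true_mean only exists here because the toy model is so simple.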
Note that, alternatively, you could define the Monte Carlo sample entirely in TensorFlow and have a single sess.run(...) do the work: stack the draws with tf.stack and average them with tf.reduce_mean, so that one session call evaluates the whole Monte Carlo mean.
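Edward targets TF1-style sessions, so as a runnable stand-in here is the same "stack, then reduce in one call" pattern in plain NumPy. The draw_posterior_predictive helper is hypothetical; in Edward you would replace the list of draws with draws of post, np.stack with tf.stack, the mean with tf.reduce_mean, and the final evaluation with a single sess.run(...):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_posterior_predictive():
    # Hypothetical stand-in for one draw of the posterior predictive
    # at three test points (in Edward: evaluating `post` once).
    f = rng.normal(loc=0.0, scale=1.0, size=3)  # latent draw at 3 test points
    return f + rng.normal(scale=0.1, size=3)    # observation noise

# Build all draws first, then reduce in one vectorized call;
# this is the analogue of tf.reduce_mean(tf.stack(draws), axis=0)
# evaluated by a single sess.run(...).
draws = np.stack([draw_posterior_predictive() for _ in range(2000)])
pred_mean = draws.mean(axis=0)
```

The advantage over the Python-side loop is that the averaging lives in the graph, so downstream ops can consume pred_mean directly.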