The epsilon was really useful for the rbf, thanks!
My reasoning for building the full N x N kernel matrix was to capture the covariances between data points in the GP, since I’m having trouble making predictions.
I make predictions by sampling from the predictive posterior and then averaging the samples.
# Swap the GP prior f for its fitted approximation qf in the graph for y
post = ed.copy(y, {f: qf})
# Draw 1000 predictive samples, feeding the test inputs
samples = [sess.run(post, feed_dict={X: x_test}) for _ in range(1000)]
The issue is that this produces the same mean as the posterior (conditioned on x_train) and doesn’t reflect the x_test data. I thought having the covariances would help.
Is there something wrong with what I’m doing or is there a better approach?
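For reference, here is the closed-form GP predictive I’m trying to reproduce, as a minimal numpy sketch. I’m assuming rbf(A, B) returns the kernel matrix between two sets of points, sigma_n is a placeholder noise level, and x_train, y_train, and x_test are as above; the train/test cross-covariance K_s is the part I hoped the full matrix would capture:

import numpy as np

K = rbf(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))  # noisy train covariance
K_s = rbf(x_train, x_test)                                      # train/test cross-covariance
K_ss = rbf(x_test, x_test)                                      # test covariance
alpha = np.linalg.solve(K, y_train)
mu_star = K_s.T @ alpha                                         # predictive mean at x_test
cov_star = K_ss - K_s.T @ np.linalg.solve(K, K_s)               # predictive covariance at x_test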
Did you ever get your GP to extrapolate to test points that weren’t in the training set? And could you make the test set a different size from the training set? I think I can get GP results comparable to sklearn’s GaussianProcessRegressor, but so far I can’t get predictions for any other points.
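For context, this is roughly the sklearn baseline I’m comparing against (kernel and noise settings are just placeholders); predict happily takes test inputs of any size:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# alpha here is the noise variance added to the kernel diagonal
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gpr.fit(x_train.reshape(-1, 1), y_train)  # sklearn expects 2-D inputs
mean, std = gpr.predict(x_test.reshape(-1, 1), return_std=True)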