Hi all,
So I am trying to implement Gaussian Process (GP) regression in Edward. For info, I have read the tutorial on GP classification from the docs, as well as most of the other tutorials/docs.
First off, I implemented GP regression simply by sampling from the analytic conditional distribution. A notebook which (I think) does this correctly is viewable here; a rough sketch of the idea is below.
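For context, the idea there is the standard GP predictive, p(y* | y) = N(K*ᵀK⁻¹y, K** − K*ᵀK⁻¹K*). Here is a minimal NumPy sketch of it (the `rbf` and `sample_conditional` names are mine, not the notebook's, and I'm assuming an RBF kernel with jitter):

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between two sets of inputs.
    sq_dists = (np.sum(X1 ** 2, 1)[:, None]
                + np.sum(X2 ** 2, 1)[None, :]
                - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def sample_conditional(X_train, y_train, X_test, eps=1e-6, n_samples=1):
    # Sample from p(y* | y): condition the joint Gaussian on the training outputs.
    K = rbf(X_train, X_train) + eps * np.eye(len(X_train))  # jitter for stability
    Ks = rbf(X_train, X_test)
    Kss = rbf(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks) + eps * np.eye(len(X_test))
    return np.random.multivariate_normal(mu, cov, size=n_samples)
```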
I am now trying to implement the same thing, but using variational inference. My attempt is viewable here.
I am getting an error: `'Tensor' object has no attribute 'log_prob'`. The core of the code is:
```python
import edward as ed
import tensorflow as tf
from edward.models import MultivariateNormalTriL, Normal

# Placeholders for the training and test inputs.
X = tf.placeholder(tf.float32, [N_train, 1])
Xstar = tf.placeholder(tf.float32, [N_test, 1])
Xfull = tf.concat([X, Xstar], axis=0)

# Joint GP prior over the training and test outputs
# (ed_rbf and diag_jitter are defined earlier in the notebook).
Kfull = ed_rbf(Xfull, Xfull)
Kfull = diag_jitter(Kfull, eps=eps)
mean = tf.zeros(N_train + N_test)
yfull = MultivariateNormalTriL(loc=mean, scale_tril=tf.cholesky(Kfull))

# Split the joint into observed (training) and unobserved (test) parts.
y = yfull[:N_train]
ystar = yfull[N_train:]

# Variational approximation for the test outputs.
qystar = Normal(loc=tf.get_variable("qystar/loc", [N_test]),
                scale=tf.nn.softplus(tf.get_variable("qystar/scale", [N_test])))

inference = ed.KLqp({ystar: qystar},
                    data={X: X_train, Xstar: X_test, y: y_train})
inference.run(n_iter=500)
```
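One thing I noticed while poking at this (not sure if it's the cause): slicing yfull seems to hand back a plain tf.Tensor rather than an Edward RandomVariable, and a Tensor has no log_prob:

```python
# My own quick diagnostic, not from the docs:
print(type(yfull))  # an Edward RandomVariable (MultivariateNormalTriL)
print(type(ystar))  # tf.Tensor -- slicing appears to drop the RandomVariable-ness
```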
I wonder if anyone knows:
- What could I be doing wrong?
- Is the way I am implementing this (variational) inference even sensible? I basically write down the full joint probability and then say that some of the y values are observed (spelled out after this list). But maybe this is ugly/inefficient or just plain wrong?
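To spell out what I mean by "the full joint" (my notation; hoping the math renders):

$$
\begin{bmatrix} y \\ y_\ast \end{bmatrix}
\sim \mathcal{N}\!\left(\mathbf{0},\, K_{\mathrm{full}}\right),
\qquad
K_{\mathrm{full}} =
\begin{bmatrix}
K(X, X) & K(X, X_\ast) \\
K(X_\ast, X) & K(X_\ast, X_\ast)
\end{bmatrix},
$$

and then the hope is that KLqp gives me $q(y_\ast) \approx p(y_\ast \mid y = y_{\mathrm{train}})$.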