# Variational Gaussian Process (GP) Regression / 'Tensor' object has no attribute 'log_prob'

Hi all,

So I am trying to implement Gaussian Process (GP) regression in Edward. For info: I have read the tutorial on GP classification from the docs, as well as most of the other tutorials/docs.

First off, I implemented GP regression simply by sampling from the analytic conditional distribution. A notebook which (I think) does this correctly is viewable here
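For context, the analytic-conditional approach can be sketched in plain NumPy. This is a hypothetical toy example, not the notebook's code: `rbf` is a stand-in for the `ed_rbf` kernel used below, and the data, lengthscale, and noise level are made up for illustration.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(0)
X_train = np.linspace(-3, 3, 20)
y_train = np.sin(X_train) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-3, 3, 50)

noise = 0.1 ** 2
K = rbf(X_train, X_train) + noise * np.eye(20)   # K(X, X) + sigma^2 I
Ks = rbf(X_train, X_test)                        # K(X, X*)
Kss = rbf(X_test, X_test)                        # K(X*, X*)

# Standard GP conditional: p(y* | y) = N(mu_star, Sigma_star)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mu_star = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
Sigma_star = Kss - v.T @ v

# Draw one posterior sample (small jitter for numerical stability)
sample = rng.multivariate_normal(mu_star, Sigma_star + 1e-8 * np.eye(50))
```

The Cholesky-based solves here play the same role as the `tf.cholesky` call in the Edward code below.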

I am now trying to implement the same thing using variational inference. My attempt is viewable here

I am getting the error `'Tensor' object has no attribute 'log_prob'`. The core of the code is:

```python
X = tf.placeholder(tf.float32, [N_train, 1])
Xstar = tf.placeholder(tf.float32, [N_test, 1])

Xfull = tf.concat([X, Xstar], axis=0)

Kfull = ed_rbf(Xfull, Xfull)
Kfull = diag_jitter(Kfull, eps=eps)

mean = tf.zeros(N_train + N_test)
yfull = MultivariateNormalTriL(loc=mean, scale_tril=tf.cholesky(Kfull))

y = yfull[:N_train]
ystar = yfull[N_train:]

qystar = Normal(loc=tf.get_variable("qystar/loc", [N_test]),
                scale=tf.nn.softplus(tf.get_variable("qystar/scale", [N_test])))

inference = ed.KLqp({ystar: qystar},
                    data={X: X_train, Xstar: X_test, y: y_train})
inference.run(n_iter=500)
```

I wonder if anyone knows:

1. What I could be doing wrong?
2. Is the way I am implementing this (variational) inference even sensible? I basically write down the full joint probability, then try to say that some of the y values are observed. But maybe this is ugly/inefficient or just plain wrong?

I can only help with 1: `y` and `ystar` are slices of the RandomVariable `yfull`. The result of a slice is a `tf.Tensor`, not a RandomVariable. You need RandomVariables in the `latent_vars` part of ed.KLqp.

Not sure how best to express what you want to do in 2 in Edward. Maybe only define the model for the observed part, without the slicing?

Ideally, we’d be able to support gather/slicing ops to return RVs and not Tensors. I’ll note this as a feature request in TensorFlow Probability as we’ve also been thinking about this: https://github.com/tensorflow/probability/issues/5

Cool, that would be great!