Different training and testing dimensions

In the Bayesian linear regression tutorial, if I set

X_test = np.vstack([X_test, X_test])
y_test = np.hstack([y_test, y_test])

so that the test set now has a different number of data points than the training set, how do I use

ed.evaluate('mean_squared_error', data={X: X_test, y_post: y_test})

I’ve tried giving the placeholder [None, None] dimensions and also swapping it for a separate placeholder with the test-set dimensions, but neither worked.
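For concreteness, the placeholder change I tried was roughly (following the tutorial’s float32 placeholder for X):

X = tf.placeholder(tf.float32, [None, None])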

Double-check that X and y_post have None in their first dimension. If you only change the placeholders, I think the random variable y will still have a fixed dimension, because of how the scale parameter is defined:

y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

Rewriting this in the broadcastable form,

y = Normal(loc=ed.dot(X, w) + b, scale=1.0)  # alternatively, scale=tf.ones(tf.shape(X)[0]) to match a dynamic batch size

should work.
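Putting it together, a minimal sketch of the relevant lines (following the tutorial’s names X, w, b, D; inference and the y_post copy stay exactly as in the tutorial):

import tensorflow as tf
import edward as ed
from edward.models import Normal

D = 1  # number of features, as in the tutorial

# Leave the batch dimension unconstrained so any number of data points can be fed.
X = tf.placeholder(tf.float32, [None, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))

# A scalar scale broadcasts against loc, so y's shape follows whatever is fed into X.
y = Normal(loc=ed.dot(X, w) + b, scale=1.0)

With that, ed.evaluate('mean_squared_error', data={X: X_test, y_post: y_test}) accepts a test set of any size, including the stacked one from the question.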
