Hey, thanks for the great library! I'm looking forward to building things with it.
I'm just trying to get my head around things, as I'm coming from Stan (though I'm also familiar with TensorFlow), and I'm a little confused by the linear regression example, where the scales of both the priors and the likelihood are assumed known. That seems fine for the priors, but confusing for the scale of the model:
```python
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))
```
In an equivalent Stan example, I would specify the scale as its own parameter with its own prior and posterior families, representing the uncertainty in the data. But here it is set to a fixed value. Does Edward do something behind the scenes that allows the scale parameter of `y` to vary, even though it was initialised as a `tf.Tensor` and not a `tf.Variable`? I think I understand how the scales of `w` and `b` can change, since their posteriors `qw` and `qb` are defined with a `tf.Variable` for both scale and location, but I can't see where the same is done for `y`.
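For concreteness, here is roughly the Stan model I have in mind. (The half-Cauchy prior on `sigma` is just one arbitrary choice; the point is only that the noise scale is a parameter with its own prior, rather than a fixed constant.)

```stan
data {
  int<lower=0> N;
  int<lower=0> D;
  matrix[N, D] X;
  vector[N] y;
}
parameters {
  vector[D] w;
  real b;
  real<lower=0> sigma;   // the noise scale is a parameter in its own right
}
model {
  w ~ normal(0, 1);
  b ~ normal(0, 1);
  sigma ~ cauchy(0, 1);  // prior expressing uncertainty about the scale
  y ~ normal(X * w + b, sigma);
}
```

Here `sigma` gets both a prior and, implicitly, a posterior from sampling, which is the behaviour I don't see an analogue of in the Edward snippet above.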