Is it possible to remove the Normal constraint on the output variable?

I find that in the tutorials, the output variables are usually assumed to be Normal, with mean equal to some expression and a user-defined standard deviation. In linear regression:
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))
In the Bayesian neural network:
y = Normal(loc=neural_network(x), scale=0.1 * tf.ones(N))

Is it possible to make y simply equal to some user-defined expression? For example, in linear regression:
y = ed.dot(X, w) + b

Yes. This is known as an implicit model. However, the difficulty is that y does not necessarily have a density. For simple transforms like the ones above, there is no magic that finds the induced distribution unless you specify the transform, including its log-determinant Jacobian, via TransformedDistribution. Writing out the random variables explicitly is recommended both computationally and conceptually.
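
For what it's worth, here is a minimal sketch of that TransformedDistribution route (assuming Edward 1.x on TF 1.x, where bijectors live in tf.contrib.distributions.bijectors, and a hypothetical size N). The bijector supplies the log-determinant Jacobian, so the transformed variable keeps a tractable density:

import tensorflow as tf
from edward.models import Normal, TransformedDistribution

N = 50  # hypothetical number of data points
z = Normal(loc=tf.zeros(N), scale=tf.ones(N))  # base random variable
# y = exp(z); the Exp bijector carries the log-det-Jacobian,
# so y has an explicit density (a LogNormal in this case)
y = TransformedDistribution(
    distribution=z,
    bijector=tf.contrib.distributions.bijectors.Exp())

A purely deterministic y = ed.dot(X, w) + b has no such bijector (and no density over y given w, b), which is why writing the noise model out explicitly is the recommended pattern.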

Thank you, dustin. But how is the scale usually selected in practice? For example, in the Bayesian neural network tutorial, scale=0.1 * tf.ones(N). Is the coefficient 0.1 chosen to minimize error? When the sample size is small, this value may be hard to evaluate.
