Hello. Consider this block of code:
import tensorflow as tf
import edward as ed
from edward.models import Categorical, Dirichlet

true_param1 = [1.0, 5.0, 15.0]
data = Categorical(probs=true_param1, sample_shape=[1, 5000])
param1 = Dirichlet(tf.constant([1.0, 1.0, 1.0]), name='param1')
qparam1 = Dirichlet(tf.nn.softplus(tf.Variable(tf.constant([1.0, 1.0, 1.0]))), name="qparam1")
w = Categorical(probs=param1, sample_shape=[1, 5000], name="w")
inference = ed.KLqp({param1: qparam1}, data={w: data})
inference.run(n_iter=2000)
print(ed.get_session().run(qparam1.mean()))
That works. But now suppose we change just one thing, the line defining w, by passing param1 through a function:
w = Categorical(probs=funct(param1), sample_shape=[1, 5000], name="w")
In many cases the inference then fails. For example, if we define:
def funct(param1):
    return param1.eval()
then inference no longer recovers the toy distribution [1.0, 5.0, 15.0]; instead qparam1 converges to roughly [1/3, 1/3, 1/3]…
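A pure-Python analogy (no TensorFlow; every name here is made up for illustration) may make the failure mode concrete: calling .eval() during graph construction freezes the parameter's current value into a constant, so later updates by the optimizer never reach the result.

```python
class Node:
    """A tiny deferred-computation node, loosely analogous to a TF tensor."""
    def __init__(self, fn):
        self._fn = fn

    def eval(self, env):
        return self._fn(env)

def var(name):
    # Symbolic variable: looks up its current value only at evaluation time.
    return Node(lambda env: env[name])

def scale(node, k):
    # A "graph op": keeps the dependency on `node`, so updates flow through.
    return Node(lambda env: k * node.eval(env))

param1 = var("param1")

# Good: the result still depends symbolically on param1.
good = scale(param1, 2)

# Bad: evaluate param1 NOW and bake the result in as a plain constant,
# like calling param1.eval() inside funct() during graph construction.
frozen = param1.eval({"param1": 1.0})   # a plain float, 1.0
bad = Node(lambda env: 2 * frozen)      # no longer depends on param1

# After "training" updates param1, only the connected node reflects it:
env = {"param1": 5.0}
print(good.eval(env))  # 10.0 -- tracks the updated parameter
print(bad.eval(env))   # 2.0  -- stuck at the value frozen at build time
```

This is why the posterior stays at the prior's mean [1/3, 1/3, 1/3]: the observed w no longer depends on param1 at all.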
I know that param1 and param1.eval() are not the same type (an ed.RandomVariable versus a np.ndarray). Does that mean I can't do anything with param1 beyond basic operations such as "return 2 * param1"?
In my case, though, funct has to return something more complex than that: a tensor that is the result of a convolution involving tf.constant(param1.eval()) and other tensors. How am I supposed to do this, given that the inference is completely wrong even with a plain "return param1.eval()" or "return tf.constant(param1.eval())"?
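For concreteness, here is a minimal sketch of the kind of funct I am after, written entirely with TF ops instead of .eval() (the 2-tap kernel, shapes, and the softmax renormalization are placeholders, not my real setup); I would like something along these lines to stay differentiable for KLqp:

```python
import tensorflow as tf

def funct(param1):
    # param1 is meant to be the Edward RV; TF ops accept it directly
    # (it converts to its sample tensor), whereas param1.eval() returns
    # a fixed NumPy array and severs the dependency on the parameter.
    signal = tf.reshape(param1, [1, -1, 1])        # [batch, width, channels]
    kernel = tf.constant([[[1.0]], [[0.5]]])       # toy 2-tap filter, shape [2, 1, 1]
    conv = tf.nn.conv1d(signal, kernel, 1, 'SAME') # stride 1, same-length output
    # Renormalize so the result is a valid probability vector for Categorical.
    return tf.nn.softmax(tf.reshape(conv, [-1]))
```

Is it correct that, as long as funct is composed only of TF ops like this, the dependency on param1 is preserved and inference should work?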
With Pyro I got this working quickly, but I don't know how to manage it with Edward/TensorFlow…