Implicit model inference example


#1

Hi all:)

I’m trying out Edward inference in a situation where the dataset is divided into several groups, each group relating to a local latent variable ‘z_i’. There is also a global variable ‘beta’ that is shared across the whole dataset. The ‘z_i’s and ‘beta’ together determine the output y_i from the input x_i, so y_i = f(x_i, z_i, beta) (i being the group index).
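For concreteness, here is a tiny NumPy sketch of the generative process I have in mind (f is just linear here for illustration, and all the names/sizes are made up):

```python
import numpy as np

rng = np.random.RandomState(0)
num_groups, n_per_group, Dx = 3, 5, 4

beta = rng.randn(Dx)                  # global variable, shared across groups
ys = {}
for i in range(num_groups):
    z_i = rng.randn(Dx)               # local latent variable for group i
    x_i = rng.randn(n_per_group, Dx)  # inputs for group i
    ys[i] = x_i.dot(beta + z_i)       # y_i = f(x_i, z_i, beta)
```

In the real model f is more involved, but the grouping structure is exactly this.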

I’ve tried ed.KLqp for inference, where I define an x_i, y_i, z_i for each group and a single beta for all groups. The inference looks somewhat like this:

inference = ed.KLqp({z_1: qz_1, z_2: qz_2, beta: q_beta}, data={x_1: X_1, x_2: X_2, y_1: Y_1, y_2: Y_2})

However, the results (the means of the inferred distributions and the prediction MSE) are not so good :( So I’m wondering whether I should use another inference method for this partially implicit model.

I think ed.ImplicitKLqp would suit this task best, but there are no examples of that method, so I’d like to ask whether anyone has used it and has any advice on how to write out the model. More generally, any debugging/tuning advice is welcome; I’m stuck here :(
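In case it helps anyone answer: here is my rough, untested reading of the ImplicitKLqp interface from the Edward 1.x docstring. The ratio_estimator name/body and the ratio_loss value are my guesses, not working code — corrections very welcome:

```python
# UNTESTED sketch, based on my reading of ed.ImplicitKLqp's docstring.
# The discriminator is a log density ratio estimator over (data, z_i, beta);
# its name and body here are placeholders, not a working implementation.
def ratio_estimator(data, local_vars, global_vars):
    # would be a small neural net mapping its inputs to a real-valued score
    raise NotImplementedError

inference = ed.ImplicitKLqp(
    {z_1: qz_1, z_2: qz_2},                # local latent variables
    data={x_1: X_1, x_2: X_2, y_1: Y_1, y_2: Y_2},
    discriminator=ratio_estimator,
    global_vars={beta: q_beta})            # shared global variable
inference.initialize(ratio_loss='unbiased')  # my guess at the option name
```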

Thanks so much!!!
Alex

The code looks like this:

priors = {}
datas = {}
# Important: define the model/priors
priors['theta'] = Normal(loc=tf.zeros(Dx + Dw), scale=tf.ones(Dx + Dw))
priors['b'] = Normal(loc=tf.zeros(1), scale=tf.ones(1))


for i in range(num_context):
    datas['X' + str(i)] = tf.placeholder(tf.float32, [None, Dx])
    priors['w' + str(i)] = Normal(loc=tf.zeros(Dw), scale=tf.ones(Dw))
    datas['y' + str(i)] = lr_model(datas['X' + str(i)], priors['w' + str(i)],
                                   priors['theta'], priors['b'])[0]

# Creating posterior mappings and inferences
post = {}

post['theta'] = Normal(loc=tf.get_variable("qtheta/loc", [Dx + Dw]),
                       scale=tf.nn.softplus(tf.get_variable("qtheta/scale", [Dx + Dw])))
for i in range(num_context):
    post['w' + str(i)] = Normal(loc=tf.get_variable("qw_{}/loc".format(i), [Dw]),
                                scale=tf.nn.softplus(tf.get_variable("qw_{}/scale".format(i), [Dw])))

post['b'] = Normal(loc=tf.get_variable("qb/loc", [1]),
                   scale=tf.nn.softplus(tf.get_variable("qb/scale", [1])))

prior_post_mapping = {priors[name]: post[name] for name in priors}
data_mapping = {datas[name]: train_global[name] for name in datas}
print(prior_post_mapping.keys())
print(data_mapping.keys())

inference = ed.KLqp(prior_post_mapping, data=data_mapping)
inference.run(n_samples=10, n_iter=inf_n_iter)