Hello,
would it be possible to get some clarification on the following: I do not understand what should be passed to the inference method ed.MAP when it is constructed, versus what should be fed at each update() call.
- Mixture Density Networks Tutorial:
Model:
locs, scales, logits = neural_network(X_ph)
…
y = Mixture(cat=cat, components=components, value=tf.zeros_like(y_ph))
Inference: why does this not include X_ph?
inference = ed.MAP(data={y: y_ph})
Update: Here we pass all the data.
inference.update(feed_dict={X_ph: X_train, y_ph: y_train})
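My current guess is that data only binds observed random variables (here y) to their values, while X_ph is an ordinary placeholder that y already depends on through the network, so it only ever gets a value via feed_dict at update() time — is that right? For reference, here is a minimal sketch of the whole pattern as I understand it (a simplified one-hidden-layer network rather than the tutorial's exact code; K, D, X_train and y_train are assumed to be defined elsewhere):

import tensorflow as tf
import edward as ed
from edward.models import Categorical, Mixture, Normal

X_ph = tf.placeholder(tf.float32, [None, D])   # inputs: plain placeholder, not a random variable
y_ph = tf.placeholder(tf.float32, [None])      # targets: plain placeholder

def neural_network(X):
    hidden = tf.layers.dense(X, 15, activation=tf.nn.relu)
    locs = tf.layers.dense(hidden, K, activation=None)
    scales = tf.layers.dense(hidden, K, activation=tf.exp)
    logits = tf.layers.dense(hidden, K, activation=None)
    return locs, scales, logits

locs, scales, logits = neural_network(X_ph)
cat = Categorical(logits=logits)
components = [Normal(loc=loc, scale=scale)
              for loc, scale in zip(tf.unstack(tf.transpose(locs)),
                                    tf.unstack(tf.transpose(scales)))]
y = Mixture(cat=cat, components=components, value=tf.zeros_like(y_ph))

# data binds only the observed random variable y to the placeholder y_ph;
# X_ph is an ordinary tensor the network consumes, so it never appears here
inference = ed.MAP(data={y: y_ph})
inference.initialize(var_list=tf.trainable_variables())

sess = ed.get_session()
tf.global_variables_initializer().run()

for _ in range(1000):
    # every placeholder gets its value here, at update() time
    info_dict = inference.update(feed_dict={X_ph: X_train, y_ph: y_train})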
I am trying to optimize the following model:
Model:
W_1 = tf.Variable(weights['W1'], name="W_1")
W_2 = tf.Variable(weights['W2'], name="W_2")
W_3 = tf.Variable(weights['W3'], name="W_3")
W_out = tf.Variable(weights['Wout'], name="Wout")
b_1 = tf.Variable(biases['b1'], name="b_1")
b_2 = tf.Variable(biases['b2'], name="b_2")
b_3 = tf.Variable(biases['b3'], name="b_3")
X = tf.placeholder(tf.float32, [N, D], name="X")
y_ph = tf.placeholder(tf.float32, [N, 1], name="y_ph")
y = Normal(loc=tf.reshape(neural_network(X, W_1, W_2, W_3, W_out, b_1, b_2, b_3), [N, 1]),
           scale=0.25 * tf.ones([N, 1]), name="y")
Inference:
inference = ed.MAP([W_1, b_1, W_2, b_2, W_3, b_3, W_out], data={X: xx, y: y_ph})
optimizer = tf.train.AdamOptimizer(5e-3)
inference.initialize(optimizer=optimizer, var_list=tf.trainable_variables())
sess = ed.get_session()
tf.global_variables_initializer().run()
n_epoch = 1000
train_loss = np.zeros(n_epoch)
for i in range(n_epoch):
    info_dict = inference.update(feed_dict={X: xx, y: yy})
    train_loss[i] = info_dict['loss']
    inference.print_progress(info_dict)
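If I understand the data/feed_dict split correctly, the inference and training part should instead look roughly like the sketch below: only the observed random variable y is bound in data, and both placeholders get their values at each update(). This is just my guess at the fix, reusing the variables defined above (X, y, y_ph, xx, yy), so please correct me if it is wrong.

# guess at the corrected setup; the weights are plain tf.Variables, so there
# are no latent random variables to pass to ed.MAP
inference = ed.MAP(data={y: y_ph})
optimizer = tf.train.AdamOptimizer(5e-3)
inference.initialize(optimizer=optimizer, var_list=tf.trainable_variables())

sess = ed.get_session()
tf.global_variables_initializer().run()

n_epoch = 1000
train_loss = np.zeros(n_epoch)
for i in range(n_epoch):
    # feed values for both placeholders; the random variable y itself is never fed
    info_dict = inference.update(feed_dict={X: xx, y_ph: yy})
    train_loss[i] = info_dict['loss']
    inference.print_progress(info_dict)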