I am trying to use Probabilistic PCA for dimensionality reduction with Edward. I followed the code in the tutorial and tried to modify it so that it outputs the reduced representation.
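The `build_toy_dataset` function called below is not shown; for completeness, here is a sketch of it based on the Edward PPCA tutorial (the exact generator and the `sigma=1` noise level are assumptions on my part):

```python
import numpy as np

def build_toy_dataset(N, D, K, sigma=1.0):
    # Generate data from the PPCA model: x = W z + Gaussian noise,
    # with features in rows and data points in columns (shape D x N).
    w = np.random.normal(0.0, 2.0, size=(D, K))   # loadings
    z = np.random.normal(0.0, 1.0, size=(K, N))   # latent codes
    mean = np.dot(w, z)                           # (D, N)
    x = np.random.normal(mean, sigma)             # observed data
    return x.astype(np.float32)

x_train = build_toy_dataset(5000, 2, 1)           # shape (2, 5000)
```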
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

N = 5000 # number of data points
D = 2 # data dimensionality
K = 1 # latent dimensionality
x_train = build_toy_dataset(N, D, K)
w = Normal(loc=tf.zeros([D, K]), scale=2.0 * tf.ones([D, K]))
z = Normal(loc=tf.zeros([N, K]), scale=tf.ones([N, K]))
x = Normal(loc=tf.matmul(w, z, transpose_b=True), scale=tf.ones([D, N]))
x_as_in = tf.placeholder(tf.float32, [1, D])  # I want to process one case at a time; the features are given as a column, and x_as_in is the input
z_as_out = Normal(loc=tf.matmul(w, x_as_in, transpose_a=True, transpose_b=True),
                  scale=tf.ones([K, 1]))  # z_as_out should be the result after dimensionality reduction
qw = Normal(loc=tf.Variable(tf.random_normal([D, K])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([D, K]))))
qz = Normal(loc=tf.Variable(tf.random_normal([N, K])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([N, K]))))
inference = ed.KLqp({w: qw, z: qz}, data={x: x_train})
inference.run(n_iter=500, n_print=100, n_samples=10)
sess = ed.get_session()
AA = np.array([[1., 2.]])  # I take a simple case here
z_post=ed.copy(z_as_out, {w: qw})
z_gen=sess.run(z_post, feed_dict={x_as_in: AA})
After I run the above code, z_gen is a scalar with some mean but zero standard deviation. I expected z_gen to be a distribution, but I also cannot find a sample() method on it. Is there anything wrong in my code?
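For context, what I ultimately want is the exact PPCA posterior over the latent code, p(z | x) = N(M⁻¹Wᵀx, σ²M⁻¹) with M = WᵀW + σ²I. A plain numpy sketch of that target, using a point estimate of W (the specific `W` value is hypothetical, just to show the shapes):

```python
import numpy as np

def ppca_posterior(W, x, sigma=1.0):
    # Exact PPCA posterior p(z | x) = N(M^-1 W^T x, sigma^2 M^-1),
    # where M = W^T W + sigma^2 I.
    D, K = W.shape
    M = W.T @ W + sigma**2 * np.eye(K)
    M_inv = np.linalg.inv(M)
    mean = M_inv @ W.T @ x   # (K, 1) posterior mean
    cov = sigma**2 * M_inv   # (K, K) posterior covariance
    return mean, cov

W = np.array([[1.0], [0.5]])  # hypothetical loadings for D=2, K=1
x = np.array([[1.0], [2.0]])  # the simple case above, as a column
mean, cov = ppca_posterior(W, x)
```

Note the posterior covariance is nonzero (σ²M⁻¹), which is why a zero standard deviation in z_gen surprised me.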