The "clutter" problem


Just started using Edward. Kudos to the authors for creating this library!

I went through the documentation and tried to implement the one-dimensional clutter problem described in Minka's EP paper and in Section 10.7.1 of Bishop's Pattern Recognition and Machine Learning.

From the paper, the model is a mixture of the signal and background clutter: p(x | theta) = (1 - w) N(x; theta, 1) + w N(x; 0, a), where w is the clutter ratio.
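For completeness, data for this model can be generated along these lines (a sketch; `theta_true = 2.0` and the seed are my own illustrative choices, matching w = 0.5 and a noise standard deviation of 5 as in the code below):

```python
import numpy as np

rng = np.random.RandomState(42)
theta_true = 2.0  # hypothetical true signal mean; illustrative only
n = 500
from_clutter = rng.rand(n) < 0.5  # w = 0.5 clutter ratio
observed_data = np.where(
    from_clutter,
    rng.normal(0.0, 5.0, size=n),        # clutter: N(0, 5^2)
    rng.normal(theta_true, 1.0, size=n), # signal: N(theta_true, 1)
).astype(np.float32)
```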

I’m trying to estimate the posterior over theta using variational inference. This is the code I have written so far:

import edward as ed
import tensorflow as tf
from edward.models import Normal

x_data = observed_data.copy()[:500]  ## 1D data generated with p(x|theta) = 0.5*N(theta, 1) + 0.5*N(0, 5)
## the noise component has mean 0 and standard deviation 5

prior_theta = Normal(loc=[0.0], scale=[10.0])
prob_x_given_prior = 0.5*Normal(loc=tf.ones(500)*prior_theta, scale=tf.ones(500)*1.0) + 0.5*Normal(loc=tf.zeros(500), scale=tf.ones(500)*5.0)

q_mean = tf.Variable(tf.zeros(1))
q_scale = tf.nn.softplus(tf.Variable(tf.ones(1)))
q_theta = Normal(loc=q_mean, scale=q_scale)

with tf.Session() as sess:
    inference = ed.KLqp({prior_theta: q_theta}, data={prob_x_given_prior: x_data})
    inference.run()
    print(q_mean.eval(), q_scale.eval())
1000/1000 [100%] ██████████████████████████████ Elapsed: 2s | Loss: 0.000
[ 0.] [ 9.9999733]

The mean of the posterior comes out to be 0 after inference runs. It seems like a fairly simple problem, so I was just wondering if I’m missing something obvious.
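For comparison, and independent of Edward, the exact posterior over theta can be evaluated numerically on a grid with plain numpy (a self-contained sketch; the synthetic data, `theta_true = 2.0`, and the seed are my own illustrative choices):

```python
import numpy as np

# Generate clutter data: 0.5*N(theta_true, 1) + 0.5*N(0, 5^2)
rng = np.random.RandomState(0)
theta_true = 2.0  # hypothetical true signal mean; illustrative only
n = 500
clutter = rng.rand(n) < 0.5
x = np.where(clutter, rng.normal(0.0, 5.0, n), rng.normal(theta_true, 1.0, n))

def log_joint(theta):
    """log p(theta) + sum_i log[0.5 N(x_i; theta, 1) + 0.5 N(x_i; 0, 25)]."""
    signal = 0.5 * np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi)
    noise = 0.5 * np.exp(-0.5 * (x / 5.0) ** 2) / (5.0 * np.sqrt(2 * np.pi))
    log_prior = -0.5 * (theta / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))
    return log_prior + np.sum(np.log(signal + noise))

# Unnormalized posterior on a uniform grid, then normalize by Riemann sum
grid = np.linspace(-5.0, 5.0, 2001)
dt = grid[1] - grid[0]
log_post = np.array([log_joint(t) for t in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * dt
post_mean = (grid * post).sum() * dt
```

With data like this, the grid posterior mean sits near `theta_true` rather than at 0, which is roughly what I expected the KLqp result to match.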