I think I must be missing something very obvious about how sampling works in Edward and TensorFlow. Can someone point out to me what I’m doing wrong?
I want to construct a model of a 1-d Gaussian, sample that model N times, and then do an MLE fit of the model to that sample. As I’m going to do a fit, my Edward model needs to be built with the parameters as tf.Variable(). However, I don’t see any way to get the sample and the model to have the same shape.
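To illustrate the mismatch I mean, here is a minimal sketch (with the Edward 1.3.3 / TensorFlow 1.2.0 versions listed at the bottom):

import tensorflow as tf
from edward.models import Normal

mean = tf.Variable(3.0)
std = tf.Variable(1.0)
x = Normal(loc=mean, scale=std)  # scalar random variable, shape ()
draws = x.sample(5)              # Tensor of shape (5,)
print(x.get_shape(), draws.get_shape())  # () vs (5,), so they don't line up for ed.MAP's data dict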
I want the user to be able to pass in an Edward model for sampling, so @willtownes’s solution and the tutorial examples that build the sample from a NumPy distribution and then build an Edward model to fit to it won’t work here. I expect users to pass in more complex composed models, so I don’t want them to have to build the model in both Edward and NumPy.
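Roughly, the interface I am aiming for is something like the sketch below (fit_mle is just a hypothetical name for illustration); it is exactly at the ed.MAP data binding that the shape mismatch bites:

import edward as ed

def fit_mle(model, samples):
    # Hypothetical helper: the user supplies only an Edward model and samples.
    # ed.MAP with no latent variables does MLE over the model's tf.Variables.
    inference = ed.MAP({}, data={model: samples})
    inference.run()
    return ed.get_session()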
Below is an example of my problem. Can anyone point out what I’m missing? I apologize, as I realize this is not really an Edward problem, but rather me not understanding an aspect of TensorFlow well enough, given that ed.models.Normal inherits largely from tf.contrib.distributions.Normal.
import numpy as np
import tensorflow as tf
import edward as ed
# specific modules
from edward.models import Normal
def sample_model(model, n_samples):
    """Draw n_samples samples from the model in a fresh session."""
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        samples = sess.run(model.sample([n_samples]))
    return samples
# want to perform a fit, so need to use variables
mean = tf.Variable(3.0)
std = tf.Variable(1.0)
N = 100
x = Normal(loc=mean, scale=std)
samples = sample_model(x, N)
print("\nx is a {0} with shape {1}".format(type(x), x.get_shape()))
print("samples is a {0} with shape {1}".format(type(samples), samples.shape))
# fails as x and samples don't have the same shape
#mle = ed.MAP({}, data={x: samples})
# Alternative
x = Normal(loc=mean*tf.ones(N), scale=std*tf.ones(N))
samples = sample_model(x, N)
print("\nx is a {0} with shape {1}".format(type(x), x.get_shape()))
print("samples is a {0} with shape {1}".format(type(samples), samples.shape))
print("samples[0] is a {0} with shape {1}".format(type(samples[0]), samples[0].shape))
# fails as x and samples don't have the same shape
#mle = ed.MAP({}, data={x: samples})
# works, but is hugely inefficient, as it only uses one row of an N x N tensor
mle = ed.MAP({}, data={x: samples[0]})
mle.run()
sess = ed.get_session()
print(sess.run(mean))
# As ed.models.Normal inherits from tf.contrib.distributions.Normal, the results are the same with pure TF
x = tf.contrib.distributions.Normal(loc=mean*tf.ones(N), scale=std*tf.ones(N))
samples = sample_model(x, N)
print("\nx is a {0} with shape {1}".format(type(x), x.event_shape))
print("samples is a {0} with shape {1}".format(type(samples), samples.shape))
Most likely unimportant, but for completeness, here are the package versions:
$ pip freeze | egrep 'tensorflow|edward'
edward==1.3.3
tensorflow==1.2.0