Hey @dustin, thanks for the response and the tip about `value` and not needing the `_sample_n` method for KLqp. MAP for the beta-binomial model seems to run (i.e. no NaNs), but it doesn't give good estimates …
```python
'''Estimating binomial success prob using MAP'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import edward as ed
import tensorflow as tf
from edward.models import Beta, Binomial, PointMass

# ---- generate data ----
n = 20
pi_true = .4
n_obs = 100
x_data = np.random.binomial(n=n, p=pi_true, size=n_obs)

# ---- define model ----
pi = Beta(1.0, 1.0)
x = Binomial(n * tf.ones(n_obs),
             pi * tf.ones(n_obs),
             value=tf.zeros(n_obs, dtype=tf.float32))

# ---- perform inference ----
qpi = PointMass(tf.sigmoid(tf.Variable(tf.random_normal([]))))
inference = ed.MAP({pi: qpi}, data={x: x_data})
inference.run(n_iter=2000)

# ---- true vs posterior mean ----
print('pi_true={}\npi_hat={}'.format(pi_true, qpi.eval()))
```
with this typical output …

```
2000/2000 [100%] ██████████████████████████████ Elapsed: 1s | Loss: 243.026
pi_true=0.4
pi_hat=0.0009102463955059648
```
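For reference, the estimate the script should be recovering is easy to compute by hand: with a flat Beta(1, 1) prior, the MAP estimate of the binomial success probability coincides with the MLE, i.e. total successes over total trials. A quick numpy sanity check (this is my own illustration, not part of the script above):

```python
import numpy as np

np.random.seed(0)
n, pi_true, n_obs = 20, 0.4, 100
x_data = np.random.binomial(n=n, p=pi_true, size=n_obs)

# Under a Beta(1, 1) prior the posterior mode equals the MLE:
# total number of successes divided by total number of trials.
pi_map = x_data.sum() / float(n * n_obs)
print('analytic MAP estimate: {:.3f}'.format(pi_map))
```

So I'd expect `pi_hat` to land near 0.4, not near zero.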
I also tried running the beta-bernoulli MAP example, but similarly did not get good estimates. Do you have any thoughts on why this would be the case? Thanks!
Joe