Beta binomial example

Hello,

I’ve been really enjoying Edward and finding it very useful for my research! I’m trying to use a Binomial likelihood for a project and running into some issues with the inference. To illustrate the problem, I modified this beta_bernoulli example and added the _sample_n method to the Binomial random variable class. When I simulate data, define the model, and use KLqp to perform inference, the loss quickly goes to nan. Does anyone have insight into what could be going wrong in the code below? Thanks!

'''Estimating binomial success prob using vi
'''

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import edward as ed
import tensorflow as tf
from edward.models import Beta, Binomial

# ---- sampling method for Binomial ---
def _sample_n(self, n=1, seed=None):
    '''
    modified from http://edwardlib.org/api/model-development
    '''
    # numpy-based sampler; returns int32 draws
    np_sample = lambda n, p, size: np.random.binomial(n=n, p=p, size=size).astype(np.int32)

    # assume total_count and probs have the same shape;
    # draw the samples through tf.py_func
    val = tf.py_func(np_sample,
                     [tf.cast(self.total_count, tf.int32), self.probs, self.probs.shape],
                     [tf.int32])[0]
    
    # reshape the flat sample to [n] + batch_shape + event_shape
    batch_event_shape = self.batch_shape.concatenate(self.event_shape)
    shape = tf.concat([tf.expand_dims(n, 0), tf.convert_to_tensor(batch_event_shape)], 0)
    val = tf.reshape(val, shape)
    
    return val

Binomial._sample_n = _sample_n

# ---- check sampling ----
ed.get_session()
y = Binomial(10 * tf.ones([2, 2]), .5 * tf.ones([2, 2]))
print(y.sample().eval())
print(y.sample().eval())


# ---- generate data ----
n = 20
pi_true = .4
n_obs = 500
x_data = np.random.binomial(n=n, p=pi_true, size=n_obs)

# ---- define model ----
pi = Beta(concentration0=1.0, concentration1=1.0)
x = Binomial(n * tf.ones(n_obs), pi * tf.ones(n_obs))

# ---- perform inference ----
qpi = Beta(tf.nn.softplus(tf.Variable(tf.random_normal([]))), 
           tf.nn.softplus(tf.Variable(tf.random_normal([]))))

inference = ed.KLqp({pi: qpi}, data={x: x_data})
inference.initialize(n_iter=1000)

tf.global_variables_initializer().run()

loss = np.empty(inference.n_iter)
for i in range(inference.n_iter):
    info_dict = inference.update()
    inference.print_progress(info_dict)
    loss[i] = info_dict["loss"]

# ---- true vs posterior mean ----
print('pi_true={}\nposterior_mean={}'.format(pi_true, qpi.mean().eval()))

# inspect where loss goes to nan 
# print(loss) 
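
Since the Beta prior is conjugate to the binomial likelihood, the posterior here also has a closed form, which gives an exact answer to compare against (a quick sketch using the quantities defined above):

# ---- exact posterior via Beta-binomial conjugacy ----
# with a Beta(1, 1) prior and x_i ~ Binomial(n, pi), the posterior is
# Beta(1 + sum(x_i), 1 + n * n_obs - sum(x_i))
a_post = 1.0 + x_data.sum()
b_post = 1.0 + n * n_obs - x_data.sum()
print('exact posterior mean = {}'.format(a_post / (a_post + b_post)))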

Binomial’s sample method isn’t actually required during inference with ed.KLqp. E.g., you can fix its associated sample tensor with something like x = Binomial(..., value=tf.zeros(n_obs, dtype=tf.int32)). Have you tried ed.MAP? It could just be that ed.KLqp is unstable due to the use of a Beta variational approximation.

Hey @dustin, thank you for the response and the tip about value and about not needing the _sample_n method for KLqp. MAP for the beta-binomial model seems to run (i.e., no nans), but it doesn’t give good estimates …

'''Estimating binomial success prob using MAP
'''

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import edward as ed
import tensorflow as tf
from edward.models import Beta, Binomial, PointMass

# ---- generate data ----
n = 20
pi_true = .4
n_obs = 100
x_data = np.random.binomial(n=n, p=pi_true, size=n_obs)

# ---- define model ----
pi = Beta(1.0, 1.0)
x = Binomial(n * tf.ones(n_obs), 
             pi * tf.ones(n_obs), 
             value=tf.zeros(n_obs, dtype=tf.float32)) 

# ---- perform inference ----
qpi = PointMass(tf.sigmoid(tf.Variable(tf.random_normal([]))))

inference = ed.MAP({pi: qpi}, data={x: x_data})
inference.run(n_iter=2000)

# ---- true vs posterior mean ----
print('pi_true={}\npi_hat={}'.format(pi_true, qpi.eval()))

with this typical output …

2000/2000 [100%] ██████████████████████████████ Elapsed: 1s | Loss: 243.026
pi_true=0.4
pi_hat=0.0009102463955059648

I also tried running the beta-bernoulli MAP example but similarly did not get good estimates. Do you have any thoughts on why this would be the case? Thanks!

Joe

It’s always worth double-checking the arguments of the random variable. Specifying

x = Binomial(total_count=n * tf.ones(n_obs),
             probs=pi * tf.ones(n_obs),
             value=tf.zeros(n_obs, dtype=tf.float32))

fixes it.

I would second that! I lost at least an hour figuring out why I was getting weird results (I was using Bernoulli, but the same applies to Binomial), and then I found out that if you don’t specify the names, the order of the arguments is logits then probs. By the way, this behavior comes from TensorFlow, not Edward.
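
A quick way to see it (same TF 1.x / Edward setup as the scripts above; the printed values are what I’d expect, not verified output):

import tensorflow as tf
import edward as ed
from edward.models import Bernoulli

ed.get_session()

# the first positional argument is `logits`, so 0.4 is read as a
# log-odds and the actual success probability is sigmoid(0.4) ~= 0.599
b_positional = Bernoulli(0.4)
print(b_positional.probs.eval())

# naming the argument gives the intended success probability
b_named = Bernoulli(probs=0.4)
print(b_named.probs.eval())

This would also explain the MAP result above: with pi passed positionally, the model reads it as logits, so the effective success probability is sigmoid(pi). The data were generated with p = 0.4, whose logit is about -0.41, but qpi is constrained to (0, 1), so the optimizer pushes pi toward the boundary at 0, where sigmoid(0) = 0.5 is as close to 0.4 as it can get, hence pi_hat ≈ 0.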

@dustin ah, I see. Thank you!