Implementing a factor potential in MCMC inference

Hi All,

I am attempting to write an Edward inference model that uses factor potentials, similar to the pymc.Potential class. The basic idea is to allow you to modify the likelihood more or less arbitrarily. In pymc this is quite general; what I am trying to do is much simpler.
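
For reference, this is roughly the construct I am trying to reproduce, written with pymc's potential decorator. This is a sketch only: the name in_sphere and the hard spherical cut are illustrative, and x is assumed to be a pymc Stochastic.

import numpy as np
import pymc

x = pymc.Uniform("x", lower=-1., upper=1., size=2)

@pymc.potential
def in_sphere(x=x):
    # extra log-likelihood term: 0 inside the unit sphere, -inf outside
    return 0.0 if np.sum(x ** 2) <= 1.0 else -np.inf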

Below is a minimal example. The aim is to uniformly sample an N-dimensional sphere by sampling a larger volume and then introducing a term in the likelihood that is conditional on the sampled radius. Looking through the source, it seemed like I should be able to do this by specifying the potential as data, as shown below, but it doesn’t seem to work:

import tensorflow as tf
import edward as ed
import matplotlib.pyplot as plt
import numpy as np

# build a simple model to sample inside an n-dimensional sphere

ndim = 2

x = ed.models.Uniform(low=-1. * tf.ones(ndim), high=1. * tf.ones(ndim))
r = tf.norm(x)  # this could be some complex bounding surface

potential = ed.models.Uniform(low=0., high=1.)

# perform MCMC, attempting to use the potential RV to modify the likelihood

nsamples = 1000

x_jump = ed.models.Uniform(low=-1. * tf.ones(ndim), high=1. * tf.ones(ndim))
x_samp = ed.models.Empirical(tf.Variable(tf.zeros((nsamples, ndim))))

inference = ed.inferences.MetropolisHastings({x: x_samp}, {x: x_jump}, {potential: r})
inference.run()

# plot the samples together with the unit circle

samples = x_samp.params.eval()

fig, ax = plt.subplots()
ax.plot(samples[:, 0], samples[:, 1], "o")

th = np.linspace(0., 2. * np.pi, 100)
ax.plot(np.cos(th), np.sin(th), "r-", lw=2.)
plt.show()

1000/1000 [100%] β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ Elapsed: 1s | Acceptance Rate: 0.790

[Figure: scatter of the sampled points with the unit circle overlaid in red]

It’s interesting to me that the acceptance rate tends to the expected value, pi/4 ≈ 0.785 (the area ratio of the sphere to the sampled cube). So why does the empirical RV not reflect that?
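
A quick way to quantify this (a sketch, reusing the samples array from the plotting code above): if the potential were being applied, essentially all retained samples would lie inside the unit circle.

inside = np.linalg.norm(samples, axis=1) <= 1.0
print("fraction of samples inside the circle:", inside.mean())
# expect ~1 if the potential is respected; ~pi/4 if the samples
# simply fill the whole square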

Thanks in advance,

Jim

In case anyone ever cares about this, here is my solution:

In the example above, r is not a parent of potential in the model graph, so sampled values of r are not correctly fed into the call to potential.log_prob(r) inside the MetropolisHastings.update method. The solution I have come up with is to define a new RandomVariable whose _log_prob method is evaluated on a node that is explicitly given as a graph parent, rather than passed in at evaluation time:

# imports needed for the custom RV (TF 1.x / edward)
from edward.models import RandomVariable
from tensorflow.contrib.distributions import Distribution, FULLY_REPARAMETERIZED

class FactorPotentialDistribution(Distribution):

  def __init__(self, logp, name="FactorPotential"):
    self.logp = logp
    parameters = locals()
    super(FactorPotentialDistribution, self).__init__(
        dtype=self.logp.dtype,
        reparameterization_type=FULLY_REPARAMETERIZED,
        validate_args=True,
        allow_nan_stats=False,
        parameters=parameters,
        graph_parents=[self.logp],  # logp is a graph parent, so feeds reach it
        name=name,
    )

  def _log_prob(self, value):
    # ignore the value passed in at evaluation time and return the
    # log-probability term carried by the parent node instead
    return self.logp

  def _sample_n(self, n, seed=None):
    raise NotImplementedError("sample_n is not implemented")

# boilerplate mirroring how edward wraps a Distribution in a RandomVariable
def __init__(self, *args, **kwargs):
  RandomVariable.__init__(self, *args, **kwargs)

_name = 'FactorPotential'
_candidate = FactorPotentialDistribution
__init__.__doc__ = _candidate.__init__.__doc__
_params = {'__doc__': _candidate.__doc__,
           '__init__': __init__}

FactorPotential = type(_name, (RandomVariable, _candidate), _params)

This seems to work with the following replacements:

# log(1) = 0 inside the sphere, log(0) = -inf outside, so exterior proposals are rejected
potential = FactorPotential(logp=tf.log(tf.where(r <= 1., 1., 0.)), value=0.)
...
inference = ed.inferences.MetropolisHastings({x: x_samp}, {x: x_jump}, {potential: 0.})

Note that the value argument is required to initialize the underlying RandomVariable, since I haven’t implemented FactorPotential._sample_n; the value given to potential in the data dictionary is not actually used.
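
As a final sanity check (again just a sketch, reusing the plotting code from the question), the retained samples should now all lie inside the unit circle:

samples = x_samp.params.eval()
print("max radius:", np.linalg.norm(samples, axis=1).max())  # expect <= 1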
