Hi All,

I am attempting to write an `edward` inference model that uses factor potentials, similar to the `pymc.Potential` class. The basic idea is to allow you to modify the likelihood somewhat arbitrarily. In `pymc` this is quite general; however, what I am trying to do here is quite simple.
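
For concreteness, here is roughly the behaviour I mean, sketched in `pymc3` syntax (just illustrative; the name `"in_circle"` and the hard `-inf` cutoff are one way to express the constraint):

```
import numpy as np
import pymc3 as pm
import theano.tensor as tt

with pm.Model():
    x = pm.Uniform("x", lower=-1., upper=1., shape=2)
    r = tt.sqrt(tt.sum(x ** 2))
    # Potential adds an arbitrary term to the joint log-probability:
    # zero inside the unit circle, -inf outside, i.e. hard rejection.
    pm.Potential("in_circle", tt.switch(r <= 1., 0., -np.inf))
    trace = pm.sample(1000)
```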

Below is a minimal example. The aim is to sample uniformly from an N-dimensional sphere by sampling a larger volume and then introducing a term in the likelihood that is conditional on the sampled radius. Looking through the source, it seemed like I should be able to do this by specifying the potential as data, as shown below, but it doesn't seem to work:

```
import tensorflow as tf
import edward as ed
import matplotlib.pyplot as plt
import numpy as np
# build a simple model to sample inside an n-dimensional sphere
ndim = 2
x = ed.models.Uniform(low=-1. * tf.ones(ndim), high=1. * tf.ones(ndim))
r = tf.norm(x)  # this could be some complex bounding surface
potential = ed.models.Uniform(low=0., high=1.)

# perform MCMC, attempting to use the potential RV to modify the likelihood
nsamples = 1000
x_jump = ed.models.Uniform(low=-1. * tf.ones(ndim), high=1. * tf.ones(ndim))
x_samp = ed.models.Empirical(tf.Variable(tf.zeros((nsamples, ndim))))
# pass {potential: r} as data, hoping the sampled radius conditions the likelihood
inference = ed.inferences.MetropolisHastings({x: x_samp}, {x: x_jump}, {potential: r})
inference.run()

# plot the samples against the unit circle
samples = x_samp.params.eval()
fig, ax = plt.subplots()
ax.plot(samples[:, 0], samples[:, 1], "o")
th = np.linspace(0., 2. * np.pi, 100)
ax.plot(np.cos(th), np.sin(th), "r-", lw=2.)
```

This runs and reports:

```
1000/1000 [100%] ██████████████████████████████ Elapsed: 1s | Acceptance Rate: 0.790
```

It's interesting to me that the acceptance rate tends to the correct result, π/4 ≈ 0.785 (the ratio of the circle's area to the bounding square's). So why don't the samples stored in the Empirical RV reflect that?
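
For comparison, the distribution I am after is trivial to produce with plain numpy rejection sampling, and the acceptance fraction there is exactly the π/4 I see above:

```
import numpy as np

# draw proposals in the bounding square, keep only points inside the unit circle
props = np.random.uniform(-1., 1., size=(100000, 2))
inside = props[np.linalg.norm(props, axis=1) <= 1.]
print(len(inside) / float(len(props)))  # fraction kept, ~ pi/4 ~ 0.785
```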

Thanks in advance,

Jim