I would like to use a Poisson distribution in my model. How do I go about doing that?

I’ve tried the following syntax:

`Poisson(lam=tf.exp(eta))`

TensorFlow implemented Poisson sampling in the TensorFlow r1.1 pre-release (you can install it via `pip install --pre tensorflow`). This is the most immediate solution. That said, TensorFlow r1.1 made major backwards-incompatible changes to distributions, such as a move from Greek-letter arguments to English-based arguments. For example, it is now `Poisson(rate=tf.exp(eta))` and not `lam=`. To use TensorFlow r1.1 with Edward, see https://github.com/blei-lab/edward/pull/452.
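As an aside on the `rate=tf.exp(eta)` pattern: the exponential link maps an unconstrained predictor `eta` to a strictly positive rate, which the Poisson requires. A tiny pure-Python illustration (plain `math`, no TensorFlow; the values are mine):

```python
import math

# eta may be any real number; exp() maps it to a valid
# (strictly positive) Poisson rate, which is why the log-link
# Poisson(rate=tf.exp(eta)) is the standard parameterization.
for eta in (-3.0, 0.0, 2.5):
    rate = math.exp(eta)
    assert rate > 0.0
```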

If you prefer not to use a development version of TensorFlow and Edward, there are two other options, described in the Advanced Settings section of http://edwardlib.org/api/model-development. One is to implement your own Poisson sampling. The other applies if you only use the Poisson as a likelihood (and your inference algorithms don’t require sampling from the likelihood): use the `value` argument to fix the associated tensor to some value.
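On the first option (implementing your own Poisson sampling), here is a minimal pure-Python sketch of Knuth’s multiplication method; this is my own illustration, not Edward code, and for large rates you would want a more numerically robust scheme:

```python
import math
import random

def sample_poisson(rate, rng=random):
    """Draw one Poisson(rate) sample via Knuth's method.

    Multiply uniform draws together until the running product
    falls below exp(-rate); the number of extra draws needed is
    Poisson-distributed. Fine for small rates, but exp(-rate)
    underflows for large ones.
    """
    threshold = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1
```

A quick sanity check is that the average of many draws should recover the rate.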

I’m getting “strange” results when using the TF r1.1 implementation, i.e. `Poisson(rate=tf.exp(eta))`.

Can you elaborate on “strange”? For example: is the model instantiated without error? Are any specific errors raised in the script? Or do the strange results appear only during training?

If it’s the last one, are you using MAP to form a point-mass approximation of the Poisson variables?

The strange results appear only during training: specifically, I’m getting NaN values. This happens when I’m using MAP (`ed.MAP`).

Can you provide more details, for example, show snippets of your code, and what you’re using the Poisson for in the model? If you’re using MAP to point estimate a parameter with a Poisson prior, I’m not sure how that would work since MAP only works for differentiable latent variables.

NaN values are obtained even when the data-generating process depends directly on a Poisson distribution. In the following code, I anticipate that `ed.inferences.MAP` should give the same result as the MLE.

```
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Poisson

N = 10
# toy stand-in for the observed counts (X_train was defined elsewhere)
X_train = np.random.poisson(5.0, size=N).astype(np.float32)

lmb = tf.Variable(0.0, dtype=tf.float32)
x = Poisson(rate=tf.ones(N) * lmb)
inference = ed.MAP(data={x: X_train})
inference.run()
sess = ed.get_session()
print(sess.run([lmb]))
```

It prints `[nan]`. What do you suggest should be done in this case?
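For what it’s worth, one likely culprit (my guess; not established in this thread) is the initialization `lmb = tf.Variable(0.0)`: the Poisson log-density contains a `log(rate)` term, which is undefined at `rate = 0`, so the objective and its gradient are non-finite from the very first step. A pure-Python sketch of the log-likelihood (with toy data of my own) shows it is only defined for positive rates and is maximized at the sample mean:

```python
import math

def poisson_log_lik(rate, data):
    # sum over counts k of: k*log(rate) - rate - log(k!);
    # the log(rate) term blows up as rate -> 0
    return sum(k * math.log(rate) - rate - math.lgamma(k + 1) for k in data)

data = [2, 4, 1, 3, 5]       # toy counts standing in for X_train
mle = sum(data) / len(data)  # closed-form Poisson MLE: the sample mean
assert poisson_log_lik(mle, data) > poisson_log_lik(mle - 1, data)
assert poisson_log_lik(mle, data) > poisson_log_lik(mle + 1, data)
```

A common workaround, assuming this is indeed the cause, is to optimize an unconstrained variable and write `rate = tf.ones(N) * tf.exp(lmb)`, or simply to initialize `lmb` to a positive value.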