Confused by error message from KLqp for LDA

I’m just getting started setting up Bayesian models in Edward, so this may be a dumb question. I’ve been working through the tutorials with no problems, and now I’m trying to set up Latent Dirichlet Allocation in Edward. I’m following the data structure used in the Stan manual example for LDA, which uses two long parallel vectors listing token ids and their associated document numbers, rather than the typical sparse matrix representation.
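To illustrate that data layout on a hypothetical toy corpus (the matrix below is made up for this example), here is one way to flatten a document-term count matrix into the two parallel vectors:

```python
import numpy as np

# Hypothetical toy corpus: a dense document-term count matrix,
# rows = documents, columns = vocabulary entries.
counts = np.array([[2, 0, 1],
                   [0, 3, 0]])

# Flatten into two aligned vectors: w_ lists one entry per token
# occurrence, doc_ gives the document each token belongs to.
doc_idx, word_idx = np.nonzero(counts)
w_   = np.repeat(word_idx, counts[doc_idx, word_idx])
doc_ = np.repeat(doc_idx,  counts[doc_idx, word_idx])

print(w_.tolist())    # → [0, 0, 2, 1, 1, 1]  (token ids)
print(doc_.tolist())  # → [0, 0, 0, 1, 1, 1]  (document ids)
```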
Everything seems to go fine until I try to run the inference step, at which point I get an error message that I don’t know how to interpret.

Here’s my attempt to code that model:

# N tokens, M documents, K topics, V vocabulary size
doc   = tf.placeholder(tf.int32, N)   # document id for each token

theta = Dirichlet(tf.zeros([M, K]) + 1/K)   # per-document topic proportions
phi   = Dirichlet(tf.zeros([K, V]) + 1/V)   # per-topic word distributions

z = Categorical(tf.gather(theta, tf.gather(doc, tf.range(N))))  # topic for each token
w = Categorical(tf.gather(phi,   tf.gather(z,   tf.range(N))))  # observed token ids

Here’s the variational model and the KLqp inference:

qz     = Empirical(params=tf.Variable(tf.zeros([1, N], dtype=tf.int32)))
qw     = Empirical(params=tf.Variable(tf.zeros([1, N], dtype=tf.int32)))
qtheta = Empirical(params=tf.Variable(tf.zeros([1, M, K], dtype=tf.float32) + 1/K))
qphi   = Empirical(params=tf.Variable(tf.zeros([1, K, V], dtype=tf.float32) + 1/V))

its = 500
inference = ed.KLqp({z: qz, w: qw, theta: qtheta, phi: qphi},
                    data={w: w_, doc: doc_})

I seem to be missing something basic here. Here’s the error message I get from `inference.run`:

>>> inference.run(n_iter=its, n_print=100, n_samples=10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/kpf/venv/edward/src/edward/edward/inferences/", line 116, in run
    self.initialize(*args, **kwargs)
  File "/home/kpf/venv/edward/src/edward/edward/inferences/", line 86, in initialize
    return super(KLqp, self).initialize(*args, **kwargs)
  File "/home/kpf/venv/edward/src/edward/edward/inferences/", line 73, in initialize
    self.loss, grads_and_vars = self.build_loss_and_gradients(var_list)
  File "/home/kpf/venv/edward/src/edward/edward/inferences/", line 127, in build_loss_and_gradients
    return build_reparam_loss_and_gradients(self, var_list)
  File "/home/kpf/venv/edward/src/edward/edward/inferences/", line 375, in build_reparam_loss_and_gradients
    qz_copy = copy(qz, scope=scope)
  File "/home/kpf/venv/edward/src/edward/edward/util/", line 232, in copy
    new_rv = type(rv)(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'params'

Any idea what might be going on here? I’m also getting errors from other inference methods, so my guess is that I’ve just set something up wrong in the model.


As far as I understand, one should use Empirical with SGLD (stochastic gradient Langevin dynamics), not for inference using KL divergence. For KLqp you have to explicitly specify the variational parameters, similar to what is done in the data-subsampling example.

I agree with @suderoy. You (typically) have to use a parametric distribution for variational inference.

That specific error occurs when trying to copy an Empirical random variable. That’s unexpected behavior which I’m fixing in a pull request (#658).

Thanks @suderoy and @dustin. That explains it!
