Updating non-trainable variables in the same batch inference update() call

Hi there :slight_smile:

I’m experimenting with streaming data, and at each step of my update loop I have an analytic solution for a set of parameters used by the model (these were previously inferred successfully with VI, using a Normal prior with a softplus-parameterised Normal approximation under KLqp).

However, I wish to use VI to infer other variables for which no analytic solution exists.

Currently, I’m updating the parameters that have an analytic solution by applying tf.assign_sub and tf.assign_add ops to the corresponding non-trainable variables, which I run in a separate sess.run() after the inference.update(feed_dict=batch_feeddict) call. That means two separate sess.run() calls per iteration.
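For concreteness, here is a stripped-down sketch of the current two-call setup (the toy model and names like analytic_param and delta_ph are illustrative stand-ins, not my actual code):

```python
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal

K = 3  # toy dimensionality, purely for illustration

# toy latent variable inferred with VI (Normal prior, softplus-Normal approximation, KLqp)
x_ph = tf.placeholder(tf.float32, [None, K])
w = Normal(loc=tf.zeros(K), scale=tf.ones(K))
qw = Normal(loc=tf.Variable(tf.zeros(K)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(K))))

# parameters with an analytic solution, held in a non-trainable variable used by the model
analytic_param = tf.Variable(tf.zeros(K), trainable=False)
delta_ph = tf.placeholder(tf.float32, [K])
analytic_update = tf.assign_add(analytic_param, delta_ph)  # could equally be assign_sub

x = Normal(loc=tf.ones_like(x_ph) * w + analytic_param, scale=tf.ones_like(x_ph))

inference = ed.KLqp({w: qw}, data={x: x_ph})
inference.initialize()
sess = ed.get_session()
tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
  batch = np.random.randn(10, K).astype(np.float32)  # stand-in for a streamed batch
  inference.update(feed_dict={x_ph: batch})          # sess.run() call no. 1
  sess.run(analytic_update,                          # sess.run() call no. 2
           {delta_ph: batch.mean(axis=0)})           # (made-up analytic step)
```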

I would, however, like to fold the tf.assign_* calls into the same update() call, i.e. in variational_inference.py’s def update() the line

```python
_, t, loss = sess.run([self.train, self.increment_t, self.loss], feed_dict)
```

should instead be

```python
_, t, loss, _ = sess.run([self.train, self.increment_t, self.loss, custom_op], feed_dict)
```

where custom_op would group my assign ops.

Does there already exist a way to do this, or will I have to customise update()?
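To make the question concrete, the kind of customisation I mean would be roughly the following (just a sketch: it only mirrors the sess.run line above, and skips the placeholder feed_dict handling and logging that the real VariationalInference.update() also does; KLqpWithCustomOp and custom_op are names I made up):

```python
import edward as ed
import tensorflow as tf


class KLqpWithCustomOp(ed.KLqp):
  """KLqp whose update() runs an extra op in the same sess.run() as the train op."""

  def __init__(self, latent_vars=None, data=None, custom_op=None):
    super(KLqpWithCustomOp, self).__init__(latent_vars, data)
    # e.g. a tf.group() of the tf.assign_add / tf.assign_sub ops
    self.custom_op = custom_op if custom_op is not None else tf.no_op()

  def update(self, feed_dict=None):
    if feed_dict is None:
      feed_dict = {}
    sess = ed.get_session()
    _, t, loss, _ = sess.run(
        [self.train, self.increment_t, self.loss, self.custom_op], feed_dict)
    return {'t': t, 'loss': loss}
```

The analytic step's placeholders (delta_ph in the sketch further up) could then presumably be fed through the same feed_dict passed to update(), giving a single sess.run() per iteration.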

As a sanity check, is this methodology completely wrong and not recommended?
The current solution (two separate calls to sess.run per iteration) appears to be working well.

cheers!