I sometimes (seemingly at random) get the following error message when running inference with KLqp:

InvalidArgumentError (see above for traceback): Nan in summary histogram for: gradient/posterior/qmu_loc/0
[[Node: gradient/posterior/qmu_loc/0 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](gradient/posterior/qmu_loc/0/tag, gradients/AddN_27)]]

I’ve read that decreasing the learning rate may help, but I don’t know how to pass this option to inference.

Hi Dustin! Many thanks! The results changed.

Now I want to understand why with learning_rate = 1e-3 we obtain Loss: nan, but with learning_rate = 1e-2 we obtain Loss: 18073.822. I’ll check the references under Classes of Inference:

KLqp supports

1. score function gradients (Paisley et al., 2012)
2. reparameterization gradients (Kingma and Welling, 2014)
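The difference between the two estimators can be seen on a toy problem not from the original thread: estimating the gradient of E_{z~N(mu,1)}[z^2] with respect to mu, whose closed form is 2*mu. A minimal NumPy sketch (all names are illustrative):

```python
import numpy as np

# Gradient of E_{z ~ N(mu, 1)}[z^2] with respect to mu.
# Closed form: E[z^2] = mu^2 + 1, so the true gradient is 2*mu.
rng = np.random.RandomState(0)
mu, n = 1.5, 100000
eps = rng.randn(n)
z = mu + eps  # samples from N(mu, 1)

# 1. Score function estimator: f(z) * d/dmu log q(z; mu),
#    where d/dmu log N(z; mu, 1) = (z - mu).
score_grad = np.mean(z**2 * (z - mu))

# 2. Reparameterization estimator: write z = mu + eps and
#    differentiate f directly, d/dmu f(mu + eps) = 2*z.
reparam_grad = np.mean(2 * z)

print(score_grad, reparam_grad, 2 * mu)
```

Both estimators are unbiased, but the reparameterization estimator typically has much lower variance, which is why KLqp prefers it whenever the model is differentiable; high-variance score-function gradients are one common source of `nan` losses.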