It’s one of the arguments to the optimizer. For example,
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
inference.run(optimizer=optimizer)
You can find more documentation in TensorFlow's tf.train.Optimizer API reference.
An acceptance rate of zero means every proposal is being rejected, which typically means the proposals are too large: lower the step_size. After tuning it alongside the number of leapfrog steps, the following script gets good results for me. Note that I also take the posterior mean over the last 1000 samples, discarding the first 1000 as burn-in.
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import (
    Empirical,
    Normal,
    NormalWithSoftplusScale,
)
MU = 6.0
SIGMA = 1.5
N = 1000
# DATA
y_train = np.random.normal(MU, SIGMA, [N])
# MODEL
mu = Normal(loc=0.0, scale=5.0)
inv_softplus_sigma = Normal(loc=0.0, scale=1.0)
# NormalWithSoftplusScale applies softplus to its scale argument, so the
# unconstrained inv_softplus_sigma always yields a positive standard deviation.
y = NormalWithSoftplusScale(loc=mu, scale=inv_softplus_sigma, sample_shape=N)
# Empirical approximations, each holding 2000 posterior samples.
q_mu = Empirical(params=tf.Variable(tf.random_normal([2000])))
q_inv_softplus_sigma = Empirical(params=tf.Variable(tf.random_normal([2000])))
# INFERENCE
inference = ed.HMC({mu: q_mu, inv_softplus_sigma: q_inv_softplus_sigma},
{y: y_train})
inference.run(step_size=0.003, n_steps=5)
print(tf.reduce_mean(q_mu.params[1000:]).eval())
print(tf.nn.softplus(tf.reduce_mean(q_inv_softplus_sigma.params[1000:])).eval())
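Since HMC moves in an unconstrained space, sigma is parameterized through a softplus, and the last line maps the posterior mean back to a positive scale. As a quick sanity check on that transform, here is a plain-Python sketch (standard library only, independent of Edward):

```python
import math

def softplus(x):
    """log(1 + exp(x)): maps any real number to a positive value."""
    return math.log1p(math.exp(x))

def inv_softplus(y):
    """Inverse transform, log(exp(y) - 1), defined for y > 0."""
    return math.log(math.expm1(y))

# Any unconstrained sample, e.g. -2.0, maps to a valid positive scale,
# and the transform round-trips.
x = -2.0
y = softplus(x)
print(y)                          # a positive number
print(abs(inv_softplus(y) - x))   # ~0
```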
Running the script above on my laptop's CPU returns
2000/2000 [100%] ██████████████████████████████ Elapsed: 8s | Acceptance Rate: 0.999
6.01639
1.50348
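To see why an oversized step size drives the acceptance rate toward zero, here is a toy random-walk Metropolis sampler on a standard normal target (plain Python, purely illustrative; HMC's leapfrog proposals show the analogous effect):

```python
import math
import random

def acceptance_rate(step_size, n_samples=5000, seed=0):
    """Fraction of accepted proposals for a Gaussian random walk on N(0, 1)."""
    rng = random.Random(seed)
    x = 0.0
    accepted = 0
    log_p = lambda z: -0.5 * z * z  # log density of N(0, 1), up to a constant
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step_size)
        delta = log_p(proposal) - log_p(x)
        # Metropolis rule: accept with probability min(1, p(proposal) / p(x)).
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
            accepted += 1
    return accepted / n_samples

small = acceptance_rate(0.1)   # modest proposals: almost always accepted
large = acceptance_rate(50.0)  # huge proposals: almost always rejected
print(small, large)
```

The same trade-off applies to HMC's step_size: too small and the chain mixes slowly, too large and nearly every trajectory is rejected.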