How to set up HMC with custom log probability


I’m trying to replicate this PyMC3 model in edward2 (the model is in the “PyMC implementation” section).

I have this set up so far:

import tensorflow as tf
from tensorflow_probability import edward2 as ed

# Given our current guesses for lamda and mu, compute the Pareto/NBD
# log likelihood at x, tx, T
def pnbd_ll(x, tx, T, lamda, mu):
    log_lamda = tf.log(lamda)
    log_mu = tf.log(mu)
    mu_plus_lamda = lamda + mu
    log_mu_plus_lamda = tf.log(mu_plus_lamda)

    p_1 = x * log_lamda + log_mu - log_mu_plus_lamda - mu_plus_lamda * tx
    p_2 = (x + 1) * log_lamda - log_mu_plus_lamda - mu_plus_lamda * T

    # log(exp(p_1) + exp(p_2)), computed without over/underflow
    return tf.reduce_logsumexp(tf.stack([p_1, p_2]), axis=0)
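
To check my algebra, I compared a NumPy version of the same expression against the direct (unlogged) form of the standard Pareto/NBD individual likelihood (all values below are made up for illustration):

```python
import numpy as np

def pnbd_ll_np(x, tx, T, lamda, mu):
    # Log of the Pareto/NBD individual likelihood, combining
    # the two terms stably with logaddexp.
    mu_plus_lamda = lamda + mu
    p_1 = x * np.log(lamda) + np.log(mu) - np.log(mu_plus_lamda) - mu_plus_lamda * tx
    p_2 = (x + 1) * np.log(lamda) - np.log(mu_plus_lamda) - mu_plus_lamda * T
    return np.logaddexp(p_1, p_2)

def pnbd_likelihood_direct(x, tx, T, lamda, mu):
    # Direct form: lamda^x * mu / (lamda + mu) * exp(-(lamda + mu) * tx)
    #            + lamda^(x+1)    / (lamda + mu) * exp(-(lamda + mu) * T)
    term_1 = lamda ** x * mu / (lamda + mu) * np.exp(-(lamda + mu) * tx)
    term_2 = lamda ** (x + 1) / (lamda + mu) * np.exp(-(lamda + mu) * T)
    return term_1 + term_2

# Made-up data for one customer: x purchases, last at tx, observed to T.
ll = pnbd_ll_np(3.0, 10.0, 30.0, 0.2, 0.05)
direct = np.log(pnbd_likelihood_direct(3.0, 10.0, 30.0, 0.2, 0.05))
```

The two agree to floating-point precision, so I’m fairly confident in the likelihood itself.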

# Sample the hyperparameters and per-customer lamda and mu,
# then compute the log likelihood at x, tx, T
def pnbd_log_prob(x, tx, T, N):
    r = ed.InverseGamma(concentration=1., rate=1.)
    alpha = ed.InverseGamma(concentration=1., rate=5.)
    s = ed.InverseGamma(concentration=1., rate=1.)
    beta = ed.InverseGamma(concentration=1., rate=5.)

    lamda = ed.Gamma(concentration=r, rate=alpha, sample_shape=N)
    mu = ed.Gamma(concentration=s, rate=beta, sample_shape=N)

    return pnbd_ll(x, tx, T, lamda, mu)

From here, I’m not sure how to set up an HMC sampler with x, tx, T as observed data. The edward2 examples I’ve seen use make_log_joint_fn, but I don’t think I need that here, since I’ve already derived log p by hand. I also don’t think I need to create a custom RandomVariable subclass, because several examples seem to drive the sampler from a log-prob function alone. Thank you!
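
Concretely, I imagine the target the sampler needs is the full joint log density: prior terms for r, alpha, s, beta, plus the Gamma terms for each customer’s lamda and mu, plus the summed likelihood, with x, tx, T closed over. Here is a NumPy sketch of what I think that function looks like (the function names are my own, and I’m writing the Gamma/InverseGamma log densities out by hand just to check values; whether this is the right thing to hand to an HMC kernel is exactly my question):

```python
import numpy as np
from math import lgamma

def log_gamma_pdf(y, conc, rate):
    # Log density of Gamma(concentration=conc, rate=rate) at y.
    return conc * np.log(rate) - lgamma(conc) + (conc - 1.) * np.log(y) - rate * y

def log_inv_gamma_pdf(y, conc, rate):
    # Log density of InverseGamma(concentration=conc, rate=rate) at y.
    return conc * np.log(rate) - lgamma(conc) - (conc + 1.) * np.log(y) - rate / y

def pnbd_ll_np(x, tx, T, lamda, mu):
    # Pareto/NBD individual log likelihood, in NumPy.
    mu_plus_lamda = lamda + mu
    p_1 = x * np.log(lamda) + np.log(mu) - np.log(mu_plus_lamda) - mu_plus_lamda * tx
    p_2 = (x + 1) * np.log(lamda) - np.log(mu_plus_lamda) - mu_plus_lamda * T
    return np.logaddexp(p_1, p_2)

def target_log_prob(x, tx, T, r, alpha, s, beta, lamda, mu):
    # Hyperpriors (same InverseGamma parameters as the model above).
    lp = (log_inv_gamma_pdf(r, 1., 1.) + log_inv_gamma_pdf(alpha, 1., 5.)
          + log_inv_gamma_pdf(s, 1., 1.) + log_inv_gamma_pdf(beta, 1., 5.))
    # Per-customer purchase and dropout rates.
    lp += np.sum(log_gamma_pdf(lamda, r, alpha))
    lp += np.sum(log_gamma_pdf(mu, s, beta))
    # Observed transaction data.
    lp += np.sum(pnbd_ll_np(x, tx, T, lamda, mu))
    return lp

# Made-up data for two customers, just to check the function evaluates.
x = np.array([1., 3.])
tx = np.array([5., 10.])
T = np.array([30., 30.])
log_p = target_log_prob(x, tx, T, 1., 1., 1., 1.,
                        np.array([0.1, 0.2]), np.array([0.05, 0.02]))
```

Is this (rewritten with edward2/TF ops) what I should pass as the target log prob to the HMC kernel, or is there a more idiomatic edward2 way to condition on x, tx, T?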