Likelihood-Free Variational Inference (ImplicitKLqp function) help

First off, I want to thank Dustin Tran and all the other contributors for writing Edward. It has not only saved me so much time but also allowed me to learn and apply models that I don't fully grasp and otherwise wouldn't be able to use.

I have a model whose likelihood I can calculate analytically, but I would like to parameterize the variational family with a neural network for flexibility. The model has no local latent variables, so I leave those out. The problem I'm running into is that I only have one data point (it's a graph, but that's not important, since I'm using what's equivalent to a Boltzmann distribution over some "sufficient" statistics of the graph, i.e. an ERGM).

As far as I know, ImplicitKLqp typically applies the adversarial density-ratio estimator to E_q[log q(\theta) - log p(\theta, X)], estimating the ratio between q(\theta)q(X) and p(\theta, X), where q(X) is the empirical distribution of the data. Because I only have one data point, I want to rewrite the KL divergence objective as E_q[log q(\theta) - log p(\theta)] - E_q[log p(X | \theta)]. I would like to use the ratio estimator only on the first term, E_q[log q(\theta) - log p(\theta)], where I can generate as many samples as I want from q, and then compute the second term exactly from my analytical formula (the ERGM log-likelihood).

Is there a way to do this with Edward (something like ImplicitKLqp)? The ImplicitKLqp tutorial seems to suggest that the discriminator needs to take in the data, which I would like to avoid for obvious reasons. To make the idea concrete, here is a rough sketch of the objective I have in mind.
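This is plain TensorFlow 1.x rather than Edward's actual API, just to show the structure: a discriminator estimates log q(\theta) - log p(\theta) from samples alone, and the likelihood term is computed analytically. The names `sample_q`, `ergm_log_likelihood`, `x_obs`, and the sizes are placeholders of mine, not anything from Edward.

```python
import tensorflow as tf  # TensorFlow 1.x, matching Edward's backend

NOISE_DIM, THETA_DIM, BATCH = 16, 4, 128  # placeholder sizes

def sample_q(n):
    # Implicit variational family: push Gaussian noise through a neural net.
    eps = tf.random_normal([n, NOISE_DIM])
    with tf.variable_scope("q", reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(eps, 64, activation=tf.nn.relu)
        return tf.layers.dense(h, THETA_DIM)

def discriminator(theta):
    # Logit r(theta); at the optimum r(theta) ~ log q(theta) - log p(theta).
    with tf.variable_scope("disc", reuse=tf.AUTO_REUSE):
        h = tf.layers.dense(theta, 64, activation=tf.nn.relu)
        return tf.layers.dense(h, 1)

prior = tf.distributions.Normal(loc=tf.zeros(THETA_DIM), scale=tf.ones(THETA_DIM))
theta_q = sample_q(BATCH)      # samples from q(theta)
theta_p = prior.sample(BATCH)  # samples from p(theta)

logits_q = discriminator(theta_q)
logits_p = discriminator(theta_p)

# Ratio estimator: logistic regression, q samples labeled 1, prior samples 0.
d_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(logits_q), logits=logits_q)
    + tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(logits_p), logits=logits_p))

# Variational loss: estimated KL(q || p) minus the analytic ERGM log-likelihood.
# ergm_log_likelihood(x_obs, theta_q) stands in for my analytic formula applied
# to the single observed graph, returning one value per theta sample.
q_loss = tf.reduce_mean(logits_q - ergm_log_likelihood(x_obs, theta_q))

# Alternate the two updates, each touching only its own variables.
d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="disc")
q_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="q")
train_d = tf.train.AdamOptimizer(1e-4).minimize(d_loss, var_list=d_vars)
train_q = tf.train.AdamOptimizer(1e-4).minimize(q_loss, var_list=q_vars)
```

If ImplicitKLqp (or another Edward inference class) can be configured to do something equivalent, with the discriminator seeing only \theta, that would be ideal.

Thanks for your help,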

Cameron