Creating and running an Edward model in a loop without a memory leak

I have been working on a project where I would like to build an Edward model and train it in a loop, starting from scratch in each iteration. However, I have noticed a memory leak when I attempt to do this: the memory use of the program increases monotonically with the loop iteration.

Here is a very simple example using Edward:

import os
import psutil

import edward as ed
import numpy as np
import tensorflow as tf

process = psutil.Process(os.getpid())

memory = []
for i in range(15):
    # Start from a fresh graph on every iteration
    tf.reset_default_graph()
    # Simple model: a Normal prior on the mean of 10,000 Normal observations
    a = ed.models.Normal(loc=tf.zeros(1), scale=5.0 * tf.ones(1))
    y = ed.models.Normal(loc=a, scale=tf.ones(10000))
    inference = ed.MAP([a], data={y: np.random.normal(size=10000)})
    inference.run(n_iter=100)
    # Close the session Edward created before the next iteration
    ed.get_session().close()
    memory.append(process.memory_info().rss)

And here is the memory use of the process with respect to the loop iteration:
[Plot: process RSS memory vs. loop iteration, increasing monotonically]

Does anyone know what is being kept in memory that is not released when we close the TensorFlow session or reset the graph? Are these TensorFlow objects or Edward objects that are being persisted?
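For anyone trying to narrow this down, one rough diagnostic (just a sketch, not anything from Edward's docs) would be to count live Python objects with the gc module before and after a few loop iterations and see which types keep growing. The helper name live_object_counts below is purely illustrative, and note that this only sees Python objects, not memory held inside the TensorFlow runtime itself:

import gc
from collections import Counter

import edward as ed
import numpy as np
import tensorflow as tf

def live_object_counts():
    # Tally live Python objects by type name (illustrative helper, not an Edward API)
    gc.collect()
    return Counter(type(obj).__name__ for obj in gc.get_objects())

baseline = live_object_counts()
for i in range(5):
    tf.reset_default_graph()
    a = ed.models.Normal(loc=tf.zeros(1), scale=5.0 * tf.ones(1))
    y = ed.models.Normal(loc=a, scale=tf.ones(10000))
    inference = ed.MAP([a], data={y: np.random.normal(size=10000)})
    inference.run(n_iter=100)
    ed.get_session().close()

# Show which object types grew the most across the iterations
growth = live_object_counts()
growth.subtract(baseline)
for name, delta in growth.most_common(10):
    if delta > 0:
        print(name, delta)

If some Edward or TensorFlow type shows up here with a large positive delta, that would at least point at which objects are being kept alive between iterations.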

Hey, did you figure out how to solve this? I’m facing the same issue. I think plain TensorFlow does not have this problem, so it must be the Edward inference and model objects that are not being released.
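For what it's worth, here is a minimal sketch of that comparison: the same loop with plain TensorFlow ops only (no Edward random variables or ed.MAP), fitting a scalar mean by gradient descent. This is an assumed test setup rather than a confirmed result; if its memory curve stays flat while the Edward loop grows, that would support the leak being on the Edward side:

import os
import psutil

import numpy as np
import tensorflow as tf

process = psutil.Process(os.getpid())

memory = []
for i in range(15):
    tf.reset_default_graph()
    # Plain TF: fit a scalar mean to the same data by gradient descent
    a = tf.Variable(tf.zeros(1))
    data = tf.constant(np.random.normal(size=10000), dtype=tf.float32)
    loss = tf.reduce_sum(tf.square(data - a))
    train_op = tf.train.GradientDescentOptimizer(1e-5).minimize(loss)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            sess.run(train_op)
    memory.append(process.memory_info().rss)

print(memory)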