How to reset the seed in edward?

In my code, I try to call a function in a loop as below. The first iteration is fine, but in the second iteration there is an error: RuntimeError: Seeding is not supported after initializing part of the graph. Please move set_seed to the beginning of your code.

In the function foo, I want to train an Edward model with a given random seed, and I want the results to be repeatable, so I need to reset the seed in each function call. Is there a simple way to achieve this?

def foo(x):
    ed.set_seed(x)
    ### train some edward model
    ...

if __name__ == '__main__':
    x = [1, 2, 3, 4]
    for ic in range(4):
        foo(x[ic])

Seeding has several nuances. Have you looked into the documentation on operation-level vs graph-level seeds? It sounds like you want to create a new session at each call.

def foo(x):
  sess = tf.InteractiveSession()
  with sess.as_default():
    ...  # train some edward model
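
For reference, here is a minimal illustration of the two kinds of seeds, in plain TensorFlow (a generic TF 1.x sketch, not Edward-specific; the seed values are arbitrary):

import tensorflow as tf

# Graph-level seed: affects every random op that does not set its own seed.
tf.set_random_seed(42)
a = tf.random_normal([3])

# Operation-level seed: pins this particular op, independent of the graph-level seed.
b = tf.random_normal([3], seed=0)

with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))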

Thank you for your reply, dustin. But where should I put the set_seed statement? I have attached a sample code below. If ed.set_seed(0) is used, the same error is raised: RuntimeError: Seeding is not supported after initializing part of the graph. Please move set_seed to the beginning of your code.
If I use tf.set_random_seed(0), the results of the two inferences are different.

import edward as ed
from edward.models import Normal
import tensorflow as tf
import numpy as np
from importlib import reload


##To generate sample data
def build_toy_dataset(N, w,b, noise_std=0.1):
	np.random.seed(0)
	D = len(w)
	x = np.random.randn(N, D)
	y = np.dot(x, w) +b  #+ np.random.normal(0, noise_std, size=N)
	print('x[0]',x[0])
	print('y[0]',y[0])
	return x, y
###end of function build_toy_dataset


##function to calibrate Bayesian linear regression
def BLR(N,D,X_train, y_train):
	#ed.set_seed(0) 
	#tf.set_random_seed(0)
	sess = tf.InteractiveSession()
	with sess.as_default():
		X = tf.placeholder(tf.float32, [N, D])
		w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
		b = Normal(loc=tf.zeros(1), scale=tf.ones(1))

		y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(1))

		qw = Normal(loc=tf.Variable(tf.random_normal([D])),
					scale=tf.nn.softplus(tf.Variable(tf.random_normal([D]))))
		qb = Normal(loc=tf.Variable(tf.random_normal([1])),
					scale=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))

		inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
		inference.run(n_samples=2, n_iter=1000)
	return qw.mean().eval(),qb.mean().eval()
##end of BLR

##The main function
N = 100  # number of data points
D = 3  # number of features
noise_std=0.2/np.sqrt(50)

w_true=np.array([0.3,-0.2,0.1])
b_true=np.array([0.2])

X_train, y_train = build_toy_dataset(N, w_true,b_true,noise_std)
qw_list=[];qb_list=[];
for ic in range(2):
	#tf.set_random_seed(0)
	#ed.set_seed(0)
	qw,qb=BLR(N,D,X_train, y_train)
	qw_list.append(qw)
	qb_list.append(qb)

for ic in range(2):
	print(qw_list[ic])###I hope to get the same qw here

The function foo should not build any new nodes in the TensorFlow graph. Rather, you should (1) set the graph seed with ed.set_seed; (2) build the model + inference graph (all the way up to inference.initialize). Then in a loop, call foo. foo will do something like

def foo(inference):
  sess = tf.InteractiveSession()
  with sess.as_default():
    tf.global_variables_initializer().run()
    for _ in range(inference.n_iter):
      info_dict = inference.update()
      inference.print_progress(info_dict)
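
Concretely, the overall flow would look roughly like this (an untested sketch reusing the BLR model from your code above; the toy data here is only for illustration):

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

ed.set_seed(0)  # (1) seed first, before any graph nodes are created

# (2) build the model + inference graph once, up to initialize()
N, D = 100, 3
X_train = np.random.randn(N, D).astype(np.float32)
y_train = np.dot(X_train, [0.3, -0.2, 0.1]).astype(np.float32)

X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(1))

qw = Normal(loc=tf.Variable(tf.random_normal([D])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([D]))))
qb = Normal(loc=tf.Variable(tf.random_normal([1])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))

inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.initialize(n_samples=2, n_iter=1000)

# (3) repeated calls to foo: each call reinitializes the variables and retrains
for _ in range(2):
    foo(inference)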

Thank you, dustin. But sorry, I did not make my problem clear. What I am actually trying to do is model selection: I try different sets of hyperparameters and pick the set that gives the best performance, so I want the model to be exactly repeatable. What I expect is something like this:

for ic in range(len(hyperparameters)):
    ed.set_seed(0)
    performance[ic] = foo(hyperparameters[ic])  ### some function to build and train an Edward model

If, for example, performance[0] is the best one, I can exactly rebuild the Edward model by doing:
ed.set_seed(0)
foo(hyperparameters[0])

Furthermore, I hope the model selection process can run in parallel. Parallelism on a multi-core CPU should be good enough, but a GPU may be better.

You should be able to tweak the above to do this. Define placeholders for your hyperparameters, e.g., hyperparam_ph. Then, inside foo, feed the hyperparameter as part of the feed_dict during training: inference.update({hyperparam_ph: 1e-2}).
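
A hedged sketch of that idea (the placeholder name prior_scale_ph and the place where it enters the model are illustrative choices, not code from this thread):

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

ed.set_seed(0)

N, D = 100, 3
X_train = np.random.randn(N, D).astype(np.float32)
y_train = np.dot(X_train, [0.3, -0.2, 0.1]).astype(np.float32)

# Hypothetical hyperparameter placeholder, e.g. the prior scale of the weights.
prior_scale_ph = tf.placeholder(tf.float32, [])

X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=prior_scale_ph * tf.ones(D))
y = Normal(loc=ed.dot(X, w), scale=tf.ones(1))

qw = Normal(loc=tf.Variable(tf.random_normal([D])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([D]))))

inference = ed.KLqp({w: qw}, data={X: X_train, y: y_train})
inference.initialize(n_samples=2, n_iter=1000)

tf.global_variables_initializer().run(session=ed.get_session())
for _ in range(inference.n_iter):
    # Feed the hyperparameter value at every update step.
    info_dict = inference.update({prior_scale_ph: 1e-2})
    inference.print_progress(info_dict)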

Thank you. But what if the graph also depends on the hyperparameters? For example, the number of independent variables in a linear regression, or the number of layers in a neural network and the number of neurons in each layer.

Is it possible to rebuild the graph in the function foo during the iteration? I have tried something like this:

for ic in range(len(hyperparameters)):
    ed.set_seed(0)
    performance[ic] = foo(hyperparameters[ic])  ### some function to build and train an Edward model
    tf.reset_default_graph()  ## I hope to rebuild the graph in each iteration

But this will give an error like this:
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("data/Placeholder:0", shape=(100,), dtype=float32) is not an element of this graph.

Maybe I can solve this problem by making a system call to a Python script.

The system call works. The following code can run on a multi-core CPU for model selection. The random seed and other hyperparameters can be passed as arguments to the script TrainSingleEdwardModel.py; ed.set_seed() is called inside TrainSingleEdwardModel.py.

import os
from multiprocessing import Pool

def MyCommand(hyperparameters):
    command = "python TrainSingleEdwardModel.py "
    command += str(hyperparameters)
    os.system(command)

if __name__ == '__main__':
    hyperparameters = list(range(32))
    #for ic in range(len(hyperparameters)):
    #    MyCommand(hyperparameters[ic])
    cores = 16
    with Pool(cores) as p:
        p.map(MyCommand, hyperparameters)

    MyCommand(hyperparameters[4])
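
Since each call starts a fresh Python process with a brand-new TensorFlow graph, calling ed.set_seed at the top of the script is allowed. A minimal sketch of such a script (the model-building and evaluation details are omitted and would depend on your model):

# TrainSingleEdwardModel.py (hypothetical skeleton)
import sys
import edward as ed

def train(hyperparameter):
    ed.set_seed(0)  # allowed here: this process starts with an empty graph
    # ... build and train the Edward model for this hyperparameter,
    # then write the resulting performance metric to disk.

if __name__ == '__main__':
    train(int(sys.argv[1]))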