L2 regularization of weights

I’m trying to understand how to use regularization with Edward models. I’m not very familiar with TensorFlow. Consider the model below,

# prior

# likelihood

# posterior
loc_qw = tf.get_variable("qw/loc", [d, c])
scale_qw = tf.nn.softplus(tf.get_variable("qw/scale", [d, c]))
qw = Normal(loc=loc_qw, scale=scale_qw)

# qb (used in the inference call below) is defined analogously to qw
loc_qb = tf.get_variable("qb/loc", [c])
scale_qb = tf.nn.softplus(tf.get_variable("qb/scale", [c]))
qb = Normal(loc=loc_qb, scale=scale_qb)

# inference
inference = ed.KLqp({w: qw, b: qb}, data={X: train_X, y: train_y})

I notice that Edward includes regularization losses in its loss function:
loss = -(p_log_lik - kl_penalty - reg_penalty)
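As far as I can tell, reg_penalty is just the sum of whatever has been added to the graph's regularization-losses collection. Here is a plain-Python sketch of that bookkeeping (no TensorFlow; the names and values are made up for illustration):

```python
# Minimal sketch of TF1-style regularization-loss bookkeeping: each
# variable's regularizer contributes one term to a global collection,
# and the inference loss sums the collection into reg_penalty.
regularization_losses = []

def get_variable(value, regularizer=None):
    # Stand-in for tf.get_variable: record the variable's penalty.
    if regularizer is not None:
        regularization_losses.append(regularizer(value))
    return value

# An L2 regularizer: scale * sum(w^2) / 2
l2 = lambda scale: (lambda w: scale * 0.5 * sum(x * x for x in w))

w = get_variable([1.0, -2.0], regularizer=l2(0.1))
reg_penalty = sum(regularization_losses)  # 0.1 * 0.5 * (1 + 4) = 0.25
```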

However, I can’t figure out how to apply regularization losses to an Edward model. How can we add L1 or L2 regularization to the above model?


I may have just answered your question on Stack Overflow (I answered a very similar question there!).

The normal prior on w is the Bayesian analogue to L2 regularization when optimizing parameters.
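To make the equivalence concrete: the negative log-density of a zero-mean normal prior is w²/(2σ²) plus a constant, so maximizing the posterior with that prior is the same as minimizing an L2-penalized loss with λ = 1/(2σ²). A quick numeric check (values chosen arbitrarily):

```python
import math

def neg_log_normal_prior(w, sigma):
    # -log N(w | 0, sigma^2) = w^2 / (2 sigma^2) + log(sigma * sqrt(2 pi))
    return 0.5 * (w / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))

def l2_penalty(w, lam):
    return lam * w * w

sigma = 2.0
lam = 1.0 / (2 * sigma ** 2)  # matching regularization strength

# The two objectives differ only by a w-independent constant,
# so they have the same minimizer and the same gradients.
diffs = [neg_log_normal_prior(w, sigma) - l2_penalty(w, lam)
         for w in (-3.0, 0.0, 1.5)]
```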


Thanks, I think I asked the wrong question. I knew that a normal prior is equivalent to L2 regularization. Imagine instead that the prior is not normal and we still want to regularize the parameters we are trying to estimate.

I found that this can be done using the `regularizer` argument of the TF variables in the posterior:

loc_qw = tf.get_variable("qw/loc", [d, c],
                         regularizer=tf.contrib.layers.l2_regularizer(reg_scale))
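For reference, `tf.contrib.layers.l2_regularizer(scale)` returns a function that computes `scale * tf.nn.l2_loss(w)`, where `l2_loss` is half the sum of squares; the L1 analogue, `tf.contrib.layers.l1_regularizer(scale)`, computes `scale * sum(|w|)`. A NumPy sketch of what those penalties work out to (example weights and scale are made up):

```python
import numpy as np

def l2_regularizer_penalty(w, scale):
    # Mirrors tf.contrib.layers.l2_regularizer(scale):
    # scale * tf.nn.l2_loss(w), with l2_loss(w) = sum(w**2) / 2
    return scale * 0.5 * np.sum(w ** 2)

def l1_regularizer_penalty(w, scale):
    # Mirrors tf.contrib.layers.l1_regularizer(scale): scale * sum(|w|)
    return scale * np.sum(np.abs(w))

w = np.array([[1.0, -2.0], [0.5, 0.0]])
l2_pen = l2_regularizer_penalty(w, 0.01)  # 0.01 * 0.5 * 5.25 = 0.02625
l1_pen = l1_regularizer_penalty(w, 0.01)  # 0.01 * 3.5 = 0.035
```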