How can the standard deviation of the dependent variable be inferred?

In the linear regression model y = X*w + b + eps, with eps ~ N(0, sigma^2), PyMC3 can infer sigma.

However, in the Edward tutorials, the standard deviation of the dependent variable is usually fixed to a constant 1. I have tried the code below to infer sigma as well, but the estimated coefficients are not correct. Is anything wrong in my code?

import edward as ed
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import tensorflow as tf

from edward.models import Normal, Empirical, Chi2, Uniform
import time

def build_toy_dataset(N, w, b, noise_std=0.1):
    D = len(w)
    x = np.random.randn(N, D)
    y =, w) + b + np.random.normal(0, noise_std, size=N)
    return x, y

N = 100 # number of data points
D = 3 # number of features
w_true = np.random.randn(D)
b_true = np.random.randn(1)

X_train, y_train = build_toy_dataset(N, w_true, b_true, noise_std=0.1)
#X_test, y_test = build_toy_dataset(N, w_true,b_true)

X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
sigma = Chi2(df=tf.ones(1))  # prior on the noise scale

y = Normal(, w) + b, scale=sigma)

qw = Normal(loc=tf.Variable(tf.random_normal([D])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([D]))))
qb = Normal(loc=tf.Variable(tf.random_normal([1])),
            scale=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))
qsigma = Normal(loc=tf.Variable(tf.random_normal([1])),
                scale=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))

inference = ed.KLqp({w: qw, b: qb, sigma: qsigma},
                    data={X: X_train, y: y_train})

Chi-squared and gamma do not look like good choices. See this:
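Part of the problem is the variational family, not just the prior: if qsigma is a plain Normal, it puts nonzero probability mass on sigma < 0, which is impossible for a scale parameter. A minimal sketch of that mass (the loc/scale values here are illustrative, not from the thread):

```python
import math

def normal_cdf(x, loc, scale):
    # Standard normal CDF evaluated via erf.
    return 0.5 * (1.0 + math.erf((x - loc) / (scale * math.sqrt(2.0))))

# Mass that a Normal(1.0, 0.5) approximation assigns to sigma < 0.
p_negative = normal_cdf(0.0, loc=1.0, scale=0.5)
print(p_negative)  # about 0.023: nonzero mass on an impossible region
```

That leakage onto invalid values is one reason an unconstrained Normal approximation over sigma behaves poorly.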

Thank you ecosang. Though it was a painful process to implement a log-normal distribution in the open-source code, it really works. Now the results are much better.
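The reason the log-normal works well here is that it is just a Normal pushed through exp, so every sample is automatically positive. A minimal NumPy sketch of the idea (the loc/scale values are illustrative; in Edward this corresponds to a TransformedDistribution with an exp bijector):

```python
import numpy as np

rng = np.random.RandomState(0)

# Sample an unconstrained Normal, then exponentiate: a log-normal sigma.
z = rng.normal(loc=0.0, scale=0.5, size=10000)
sigma_samples = np.exp(z)

print(sigma_samples.min() > 0)  # True: every sample is a valid scale
```

The variational parameters can then be optimized freely on the real line while sigma itself stays in its valid support.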

Hi Edison,

      Do you mind sharing your code here? I have written a KLqp inference with the standard deviation as a latent variable, but the code fails to infer correctly. Here is how I define the variable:

qsigma = ed.models.TransformedDistribution(
    distribution=ed.models.Normal(loc=1., scale=0.1),