# How to access point estimates of parameters

Reading through the linear mixed effects tutorial, it seems that there is no way to access the values of the fixed effects since they are not endowed with priors like the random effects. Is this correct? If not, can someone explain how to obtain the point estimates of these parameters?

You can access their values in the same way you would access random variables. For example, in the tutorial we wrote a model parameter,

```
mu = tf.Variable(tf.random_normal([]))
```

After inference, simply run

```
sess = ed.get_session()
sess.run(mu)
```

This fetches `mu` from the graph and returns a NumPy array (the point estimate). Tensors and Variables are deterministic, so fetching always returns the same value (recall for RandomVariables, fetching returns a sample from the distribution).
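The distinction can be sketched in plain NumPy (a hedged analogue of the fetch semantics, not Edward's actual API): a stored value plays the role of a Variable, while a function drawing a fresh sample plays the role of a RandomVariable.

```python
import numpy as np

# NumPy analogue of the fetch semantics above (not Edward's API):
# a stored value behaves like a tf.Variable -- every "fetch" returns
# the same number -- while a sampler behaves like a RandomVariable --
# every "fetch" returns a new draw.
rng = np.random.default_rng(0)
mu = 1.5

def fetch_variable():
    return mu                   # deterministic: same value every call

def fetch_random_variable():
    return rng.normal(loc=mu)   # stochastic: a fresh sample every call

same = fetch_variable() == fetch_variable()
draws = [fetch_random_variable() for _ in range(3)]
```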

My underlying question is, can I use this tool to do pure maximum likelihood?

I can get it to work sometimes but not always, so I came up with a simpler example: estimating the mean and standard deviation of a univariate Gaussian. Here is the code:

```
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

mu_true = 5
sd_true = 2
N = 100
x_train = np.random.normal(mu_true, sd_true, size=N)

# Edward model: parameters as plain tf.Variables (no priors)
s = tf.Variable(1.0)
m = tf.Variable(0.0)
x = Normal(mu=m * tf.ones(N), sigma=s * tf.ones(N))

mle = ed.Inference({}, {x: x_train})
mle.run()
sess = ed.get_session()
sess.run(m)
```

Expected result: $m \approx 5$. Observed result: $m = 0.0$.

If I put a prior on m and do MAP, I can get the result out of the approximating PointMass distribution `qm` with no issue. Also, interestingly, if I put a prior on m, I get the correct point estimate for `s` using `sess.run(s)` as expected, but if I don't put a prior on m, I get back the initialized (incorrect) point estimate.
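For reference, in this conjugate setting the MAP estimate of the mean has a closed form, so you can check what a prior does to the estimate. The sketch below assumes a Normal(0, prior_sd) prior on m with the likelihood sd treated as known; `prior_sd = 10.0` is an arbitrary illustrative choice, not anything from the original model.

```python
import numpy as np

np.random.seed(0)
sd_true = 2.0
x = np.random.normal(5.0, sd_true, size=100)

# MAP for the mean of a Gaussian with known sd under a Normal(0, prior_sd)
# prior: a precision-weighted average of the prior mean (0) and the
# sample mean. prior_sd is an assumed value for illustration.
prior_sd = 10.0
n = len(x)
data_precision = n / sd_true**2
prior_precision = 1.0 / prior_sd**2
map_m = (data_precision * x.mean()) / (data_precision + prior_precision)

# With N=100 observations the prior contributes almost nothing, so the
# MAP estimate is nearly the MLE (the sample mean), only slightly
# shrunk toward the prior mean of 0.
```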

`ed.Inference` is an abstract class, so calling `run()` on it performs no optimization; fetching `m` simply returns its initialized value of 0.0.

You want `ed.MAP({}, data={x: x_train})`. This will give you the MLE, because you're defining model parameters rather than placing prior distributions over them.
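As a sanity check on what that call optimizes, here is a hedged NumPy sketch of the same maximum-likelihood fit done by hand: gradient descent on the Gaussian negative log-likelihood, starting from the same initial values as the `tf.Variable`s above. The learning rate and step count are arbitrary illustrative choices, not anything Edward uses internally.

```python
import numpy as np

# Minimize the average Gaussian negative log-likelihood by gradient
# descent -- a by-hand stand-in for what MAP with no priors does.
np.random.seed(0)
x_train = np.random.normal(5.0, 2.0, size=100)

m, s = 0.0, 1.0   # same initializations as the tf.Variables above
lr = 0.05
for _ in range(10000):
    dm = -np.mean(x_train - m) / s**2                 # d(avg NLL)/dm
    ds = 1.0 / s - np.mean((x_train - m)**2) / s**3   # d(avg NLL)/ds
    m -= lr * dm
    s -= lr * ds

# The minimizer is the sample mean and the (biased) sample std,
# i.e. the closed-form Gaussian MLE.
```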


got it! Thanks again for the help.