How to implement a self-defined metric/loss function?

I see there are lots of useful metrics in ed.evaluate, such as mse or mae, but I am trying to implement my own loss function (e.g. logcosh in Keras). Is there any way to invoke self-defined functions from a separate Python module rather than modifying the evaluate function itself?
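For reference, logcosh is just the mean of log(cosh(y_pred - y_true)) over the elements. A minimal pure-Python sketch of what such a self-defined metric looks like (the function below is illustrative, not part of Edward's API):

```python
import math

def logcosh(y_true, y_pred):
    # mean of log(cosh(error)) over all elements;
    # behaves like MSE for small errors and like MAE for large ones
    errors = (p - t for t, p in zip(y_true, y_pred))
    return sum(math.log(math.cosh(e)) for e in errors) / len(y_true)
```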


That’s a great question. You can’t at the moment, but as in Keras, it makes sense to allow an API such as

def custom_func(y_true, y_pred):
  return # some scalar tf.Tensor

ed.evaluate(['log_likelihood', custom_func], data={x_post: x_train})

This is a simple change to the for loop in the evaluate function. Contributions welcome.
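A minimal sketch of what that dispatch loop could look like (the function signature and built-in metric table below are hypothetical, not Edward's actual code):

```python
def evaluate(metrics, y_true, y_pred):
    # hypothetical dispatch: string names look up built-in metrics,
    # while callables are applied directly as custom metrics
    builtins = {
        'mse': lambda t, p: sum((pi - ti) ** 2 for ti, pi in zip(t, p)) / len(t),
        'mae': lambda t, p: sum(abs(pi - ti) for ti, pi in zip(t, p)) / len(t),
    }
    results = []
    for metric in metrics:
        fn = builtins[metric] if isinstance(metric, str) else metric
        results.append(fn(y_true, y_pred))
    return results
```

With this shape, `evaluate(['mse', custom_func], y_true, y_pred)` mixes built-in names and user callables freely.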

EDIT: Added an issue.

I would be very happy to join in on the contribution, but I am new to GitHub. How can I help?

I recommend understanding how to use pull requests. Edward’s contributing guide might also be helpful.

I think the solution would have to be slightly more complicated. I looked at the implementation of evaluate; there are checks in there to see whether the metric is supervised or not (only because of log_likelihood). If custom metrics are to be added, there would need to be a flag to check whether a metric is supervised or unsupervised, unless, of course, the implicit assumption in your solution is valid for most use cases.
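One hedged way to express such a flag would be to let custom metric functions declare it themselves, e.g. via a function attribute (the `is_supervised` attribute and `dispatch` helper below are purely hypothetical, not Edward's API):

```python
def mean_value(samples):
    # unsupervised metric: looks only at model output, no labels needed
    return sum(samples) / len(samples)

mean_value.is_supervised = False  # hypothetical flag read by the evaluate loop

def dispatch(metric, y_true, y_pred):
    # default to supervised when the flag is absent, matching
    # the usual (y_true, y_pred) metric signature
    if getattr(metric, 'is_supervised', True):
        return metric(y_true, y_pred)
    return metric(y_pred)
```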

As far as I can see, this feature has been implemented here:

But this is only for evaluate. Does it also need to be implemented in the inference classes?

I am doing regression over several timesteps using an LSTM and would like to be able to weight different timesteps differently in the loss function. Similar to this:

Would this be possible?
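Such a per-timestep weighting could be sketched as follows (a plain-Python stand-in for the tensors involved; the function name and signature are illustrative only):

```python
def weighted_mse(y_true, y_pred, weights):
    # weighted mean squared error across timesteps: each timestep's
    # squared error is scaled by its weight, so later (or earlier)
    # timesteps can count more in the total loss
    total = sum(w * (p - t) ** 2 for w, t, p in zip(weights, y_true, y_pred))
    return total / sum(weights)
```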