Gaussian mixture with covariance matrix via KLqp method

Hello, I’ve been away from this problem for a while.

GMMs are known to be solvable by variational inference.
But in many cases the lower-bound updates are available in closed form, as in http://scikit-learn.org/stable/modules/dp-derivation.html, and I guess that is why it works there.
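
For reference, this is the kind of closed-form update I mean (the standard conjugate variational treatment of a Bayesian GMM, e.g. Bishop chapter 10; the symbols are the usual ones from that derivation, not anything specific to my setup). The responsibilities are

$$
r_{nk} \propto \exp\Big( \mathbb{E}_q[\ln \pi_k] + \tfrac{1}{2}\,\mathbb{E}_q[\ln |\Lambda_k|] - \tfrac{D}{2}\ln 2\pi - \tfrac{1}{2}\,\mathbb{E}_q\big[(x_n - \mu_k)^\top \Lambda_k (x_n - \mu_k)\big] \Big),
$$

and because the Dirichlet and Gaussian-Wishart factors are conjugate, every expectation and every subsequent update of $q(\pi)$ and $q(\mu_k, \Lambda_k)$ is analytic, so no stochastic gradient estimation is needed.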

The mixture problem might be difficult for the KLqp approach, since it is essentially a black-box algorithm.
By the way, when I use ADVI in Stan, it gives a fairly better result. I guess this could be improved with a transformation of the distributions? But I am not very familiar with how that works in detail.
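
In case it helps, here is a minimal sketch (plain NumPy; the function names are mine, not Stan's or Edward's API) of the kind of transformation ADVI relies on: constrained mixture parameters (simplex weights, a positive-definite covariance) are mapped to an unconstrained space, and the Gaussian variational approximation lives there. Stan's actual transforms differ in detail (e.g. stick-breaking for the simplex), so treat this only as an illustration.

```python
import numpy as np

def simplex_from_unconstrained(y):
    # Softmax with an appended zero: maps y in R^{K-1} to K weights that
    # are positive and sum to one. (Stan itself uses a stick-breaking
    # transform; softmax is just an easy-to-read stand-in here.)
    z = np.append(y, 0.0)
    e = np.exp(z - z.max())          # subtract the max for numerical stability
    return e / e.sum()

def covariance_from_unconstrained(v, dim):
    # Fill the lower triangle of a Cholesky factor from the unconstrained
    # vector v (length dim*(dim+1)//2) and exponentiate the diagonal so it
    # stays positive; L @ L.T is then a valid covariance matrix.
    L = np.zeros((dim, dim))
    L[np.tril_indices(dim)] = v
    L[np.diag_indices(dim)] = np.exp(np.diag(L))
    return L @ L.T

# Example: 3 mixture weights and a 2x2 covariance built from unconstrained
# draws, the way a mean-field Gaussian q would produce them during ADVI.
rng = np.random.default_rng(0)
pi = simplex_from_unconstrained(rng.normal(size=2))
Sigma = covariance_from_unconstrained(rng.normal(size=3), dim=2)
print(pi, pi.sum())                  # weights are positive and sum to 1
print(np.linalg.eigvalsh(Sigma))     # eigenvalues are strictly positive
```

If the variational parameters are optimized in that unconstrained space with reparameterized gradients, the constraints can never be violated during optimization, which I suspect is part of why ADVI behaves better here than a naive KLqp parameterization.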