I am using Bayesian linear regression, y ~ N(loc=beta*x + alpha, scale=1),
to make a decision: if the mean of the posterior samples of y is smaller than 0, then x is classified into group A; otherwise into group B.
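The decision rule above can be sketched in plain NumPy (the names `beta_samples` and `alpha_samples` are my own placeholders for posterior draws from whatever inference method produced them):

```python
import numpy as np

def classify(x, beta_samples, alpha_samples):
    """Classify x by the sign of the estimated posterior mean of y.

    beta_samples, alpha_samples: 1-D arrays of posterior draws (e.g. 5000 each).
    """
    # Monte Carlo estimate of E[y | x] = E[beta]*x + E[alpha]
    y_mean = np.mean(beta_samples * x + alpha_samples)
    return "A" if y_mean < 0 else "B"
```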
I use 5000 posterior samples to estimate the mean, but this estimate does not seem very stable. When I run the same code several times, the estimated mean of y for the same x can come out either smaller or larger than 0.
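One likely contributor is plain Monte Carlo error: with scale=1, the standard error of a 5000-sample mean is about 1/sqrt(5000) ≈ 0.014, so whenever the true posterior mean of y sits within roughly that distance of 0, the sign of the estimate will flip between runs. A small simulation (the value 0.02 is an arbitrary illustration, not taken from my model):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 0.02   # hypothetical posterior mean of y, close to 0
n_samples = 5000   # posterior samples per run, as in my setup
n_runs = 200

# Each "run" estimates the mean from 5000 draws with scale 1.
estimates = np.array([rng.normal(true_mean, 1.0, n_samples).mean()
                      for _ in range(n_runs)])

print(estimates.std())        # close to 1/sqrt(5000), i.e. about 0.014
print((estimates < 0).sum())  # a noticeable fraction of runs flip sign
```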
I find this also depends on the inference method. MAP usually gives a more stable estimate of the mean, probably because its estimates of beta and alpha are more stable. With KLqp, however, the estimates of beta and alpha vary from run to run.
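This difference makes sense to me: MAP is a deterministic optimization (up to initialization), while KLqp is stochastic variational inference, so its result depends on the random noise stream of each run. As a loose analogy only (hand-rolled SGD on synthetic data, not Edward's actual KLqp algorithm), compare a deterministic least-squares fit with a stochastic fit that sees different gradient noise each run:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.5 * x - 0.2 + rng.normal(size=100)  # synthetic data, coefficients are arbitrary

# With a Gaussian likelihood (and a flat prior, for simplicity), MAP reduces
# to ordinary least squares: deterministic, identical on every run.
X = np.column_stack([x, np.ones_like(x)])
beta_map, alpha_map = np.linalg.lstsq(X, y, rcond=None)[0]

def sgd_fit(seed, steps=2000, lr=0.01, batch=10):
    """Minimize squared error by SGD; the seed controls the gradient noise."""
    r = np.random.default_rng(seed)
    b, a = 0.0, 0.0
    for _ in range(steps):
        idx = r.integers(0, len(x), size=batch)
        err = b * x[idx] + a - y[idx]
        b -= lr * np.mean(err * x[idx])
        a -= lr * np.mean(err)
    return b, a

# Each seed gives a slightly different answer, scattered around the MAP fit.
fits = [sgd_fit(s) for s in range(5)]
```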
I haven't tried logistic regression, but I guess the predicted class probability would similarly fluctuate around 0.5 across runs.