When I try out the logistic regression sample code, which involves only a Bernoulli likelihood, I get excellent recovery of the simulated parameter values as long as the data is not sparse and the parameter values are not very small (first decimal place, not second). But when the data is sparse, the larger parameters still come out close, while the smaller parameters all get bumped up by an order of magnitude, so they end up in the first decimal place. This remains true even as I keep adding simulated data, up to tens of thousands of samples. Is this a known weakness of VI?
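The experiment I'm describing is essentially this (a self-contained numpy sketch of mean-field VI for logistic regression via the reparameterization trick, not the actual sample code; the coefficient values, prior scale, and step sizes are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate logistic-regression data: one large and one small coefficient.
n = 1000
true_w = np.array([2.0, 0.05])
X = rng.normal(size=(n, 2))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_w)))

# Mean-field Gaussian q(w) = N(mu, diag(sigma^2)), prior w ~ N(0, 10^2).
mu = np.zeros(2)
log_sigma = np.zeros(2)
prior_var = 100.0

def elbo_grads(mu, log_sigma, n_samples=8):
    """Monte Carlo gradient of the ELBO via the reparameterization trick."""
    sigma = np.exp(log_sigma)
    g_mu = np.zeros_like(mu)
    g_ls = np.zeros_like(log_sigma)
    for _ in range(n_samples):
        eps = rng.normal(size=2)
        w = mu + sigma * eps                       # reparameterized sample
        probs = 1.0 / (1.0 + np.exp(-X @ w))
        g_w = X.T @ (y - probs)                    # grad of Bernoulli log-lik wrt w
        g_mu += g_w
        g_ls += g_w * sigma * eps                  # chain rule: dw/dlog_sigma = sigma*eps
    g_mu /= n_samples
    g_ls /= n_samples
    # Closed-form gradients of -KL(q || prior).
    g_mu -= mu / prior_var
    g_ls -= sigma**2 / prior_var - 1.0
    return g_mu, g_ls

lr = 0.005
for _ in range(5000):
    g_mu, g_ls = elbo_grads(mu, log_sigma)
    mu += lr * g_mu
    log_sigma += lr * g_ls

print(mu)  # variational means for the two coefficients
```

With dense standard-normal features like these, recovery looks fine; the drift in the small coefficient is what I observe once X is made sparse.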