This is more of a conceptual learning question. Once we place priors on the weights and run an inference engine (variational inference, MAP, etc.), how does inference actually update the weight values? Is there something like back-propagation with SGD involved underneath the inference engine? My mental model is that the prior and likelihood produce a posterior, and that posterior then becomes the new prior and gets continuously updated and refined, is that right?

A few more questions:
- I was able to get a one-layer NN (one weight and one bias) working. Can I cascade multiple layers to make a deeper network run?
- Can a normal CNN for classification be done in Edward?
- Finally, is there any code or tutorial on using PyMC3 together with Edward?
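To make my mental model concrete, here is a toy sketch in plain Python (not Edward code, and the model is my own invention for illustration) of what I think KLqp-style variational inference does underneath: stochastic gradient ascent on the ELBO over the *variational* parameters, using reparameterized samples. I picked a conjugate Gaussian model so the result can be checked against the analytic posterior:

```python
# Toy sketch of SGD-on-the-ELBO, the mechanism I suspect sits underneath
# variational inference engines. Not Edward API; plain Python only.
#
# Model:  w ~ N(0, 1),   y_i | w ~ N(w, 1)
# Conjugate, so the exact posterior is N(sum(y)/(n+1), 1/(n+1)).
import math
import random

random.seed(0)

y = [1.2, 0.8, 1.0, 1.1, 0.9]      # observed data
n, S = len(y), sum(y)

# Variational family: q(w) = Normal(mu, exp(log_s)**2)
mu, log_s = 0.0, 0.0
lr, n_samples = 0.01, 10

for _ in range(2000):
    g_mu, g_log_s = 0.0, 0.0
    s = math.exp(log_s)
    for _ in range(n_samples):
        eps = random.gauss(0.0, 1.0)
        w = mu + s * eps             # reparameterization trick
        dlogp = S - (n + 1) * w      # d/dw [log p(y|w) + log p(w)]
        g_mu += dlogp
        g_log_s += dlogp * s * eps
    mu += lr * g_mu / n_samples
    # the entropy term of the ELBO contributes +1 to the log_s gradient
    log_s += lr * (g_log_s / n_samples + 1.0)

post_mean = S / (n + 1)              # analytic posterior mean (5/6)
post_std = math.sqrt(1.0 / (n + 1))  # analytic posterior std
print(mu, math.exp(log_s))           # should approach the analytic values
```

So in this picture the prior never literally turns into the posterior step by step; instead, an SGD-like loop nudges the variational parameters (mu, log_s) until q(w) matches the posterior. Is that also what happens to the weight distributions in Edward?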
Related topics
| Topic | Replies | Views | Activity |
|---|---|---|---|
| Use Edward within a tensorflow deep network using standard component | 1 | 1309 | March 13, 2017 |
| Examples/bayesian_nn.py | 3 | 917 | September 13, 2018 |
| Bayesian RNN in Edward | 0 | 1515 | March 13, 2019 |
| Clarification expectation maximization | 0 | 985 | November 14, 2017 |
| Parameter Learning with Simple Bayesian Network; PyMC3 vs. Edward; Edward posteriors not converging around correct parameter values | 3 | 2894 | March 27, 2018 |