I am trying to build a model to predict the true proportion/rate of success for each group and am wondering if Edward is the right tool. Rather than a bunch of binary outcomes, I have, for each group, the actual number of attempts and the number of successes out of those. I also have a lot of other features associated with each group.
Some basic stats from many years ago has convinced me that I should model this as a beta-binomial regression (if that's the right term). Fundamentally, it seems like there should be some structure per group: if a group has more attempts, you should be more certain about its true probability. At the same time, the relationship between the features and the rate should be shared across all groups.
Symbolically I’m hoping for something like this:
p ~ Beta(a, b), where (a, b) = f(feature1, …, featureN) and f is some (non)linear model.
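For what it's worth, here is a minimal sketch of that model fit by maximum likelihood with plain scipy rather than Edward. It reparameterises the Beta as a mean mu (a logistic-linear function of the features) and a concentration phi, so a = mu*phi and b = (1-mu)*phi. The data, the linear form of f, and all variable names are illustrative assumptions, not part of the original question:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

rng = np.random.default_rng(0)

# Simulated data: 200 groups, 3 features, varying numbers of attempts.
X = rng.normal(size=(200, 3))
n = rng.integers(5, 500, size=200)           # attempts per group
w_true = np.array([1.0, -0.5, 0.25])         # "true" feature weights (made up)
mu = 1 / (1 + np.exp(-(X @ w_true)))         # true per-group success rate
phi = 30.0                                   # concentration: higher = less over-dispersion
y = betabinom.rvs(n, mu * phi, (1 - mu) * phi, random_state=0)

def neg_log_lik(params):
    """Negative beta-binomial log-likelihood with mu = sigmoid(Xw)."""
    w, log_phi = params[:-1], params[-1]
    m = 1 / (1 + np.exp(-(X @ w)))
    p = np.exp(log_phi)                      # keep concentration positive
    return -betabinom.logpmf(y, n, m * p, (1 - m) * p).sum()

res = minimize(neg_log_lik, x0=np.zeros(4), method="L-BFGS-B")
w_hat, phi_hat = res.x[:-1], np.exp(res.x[-1])
```

Groups with many attempts contribute more sharply peaked likelihood terms, so they naturally pin down their rates more tightly, which is exactly the "more exposure, more certainty" behaviour described above. A probabilistic framework would additionally give posteriors over w and the per-group rates rather than point estimates.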
I don’t understand enough about how probabilistic frameworks work to know whether this is possible. The only alternative I can think of that doesn’t involve probabilistic programming is to use an ML algorithm that lets you supply a “weight” parameter.
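The "weight" alternative mentioned above can be sketched as a logistic regression on the observed rates, with each group's loss weighted by its number of attempts. This is a hypothetical illustration (simulated data, made-up weights), but it shows the mechanics:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated data: 200 groups, 3 features, attempts and successes per group.
X = rng.normal(size=(200, 3))
n = rng.integers(5, 500, size=200)           # attempts per group
w_true = np.array([1.0, -0.5, 0.25])         # made-up feature weights
p = 1 / (1 + np.exp(-(X @ w_true)))
y = rng.binomial(n, p)                       # successes per group
rate = y / n                                 # observed success rate

def weighted_nll(w):
    """Cross-entropy on the observed rate, weighted by attempts per group."""
    q = np.clip(1 / (1 + np.exp(-(X @ w))), 1e-12, 1 - 1e-12)
    return -(n * (rate * np.log(q) + (1 - rate) * np.log(1 - q))).sum()

w_hat = minimize(weighted_nll, np.zeros(3), method="L-BFGS-B").x
```

Weighting by attempts recovers exactly the binomial likelihood, so this gets the "high-exposure groups count more" property. What it does not give is the beta layer (extra between-group variance beyond binomial noise) or uncertainty over the estimates, which is where the probabilistic-programming route earns its keep.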