Varying mini-batch size, n_samples and learning rate during training


I want to experiment with different learning rate schedules, batch sizes, and Q-model sample sizes during training. For example, when the ELBO flatlines I would like to increase the batch size and n_samples and decrease the learning rate.
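To make the trigger concrete, the kind of schedule I have in mind could be driven by a simple plateau check. This is a plain-Python sketch, not Edward API; `elbo_flatlined`, `adjust`, `patience`, and `tol` are made-up names for illustration:

```python
def elbo_flatlined(elbo_history, patience=10, tol=1e-3):
    """Return True when the ELBO has not improved by more than
    `tol` over the last `patience` recorded values."""
    if len(elbo_history) < patience + 1:
        return False
    recent = elbo_history[-(patience + 1):]
    return max(recent) - recent[0] <= tol

def adjust(batch_size, n_samples, learning_rate):
    """Hypothetical response to a plateau: grow batch size and
    n_samples, shrink the learning rate."""
    return batch_size * 2, n_samples * 2, learning_rate / 10.0
```

The question below is how to actually apply such an adjustment mid-training without rebuilding everything.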

It seems the only way to change n_samples is to initialize a new inference object with the Q-model variables set to the result of the previous inference object. One can set the global_step variable to get an exponential-decay learning rate schedule, but for more flexible schedules one would again need to initialize a new inference object. As for mini-batch size, it seems there is no way around creating new local latent variables.
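For reference, the schedule that setting global_step buys you is just exponential decay. In plain Python (mirroring the semantics of TensorFlow's `tf.train.exponential_decay`, as I understand them):

```python
def exponential_decay(base_lr, global_step, decay_steps, decay_rate,
                      staircase=False):
    """Learning rate after `global_step` steps:
    base_lr * decay_rate ** (global_step / decay_steps).
    With staircase=True the exponent is an integer, so the rate
    drops in discrete jumps every `decay_steps` steps."""
    exponent = global_step / decay_steps
    if staircase:
        exponent = global_step // decay_steps
    return base_lr * decay_rate ** exponent
```

Anything beyond this single functional form (e.g. a plateau-triggered drop) is what forces the new-inference-object workaround.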

Initializing a new inference object is inelegant because it also creates other variables that need to be initialized, like the iteration increment. One would either need to go through all variables, check whether each is initialized, and initialize the ones that aren't, or else know exactly which variables the new object creates and initialize just those.
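The variable-chasing I mean looks roughly like the following. This is a plain-Python mock of the pattern (in real TensorFlow one would combine `tf.report_uninitialized_variables` with `tf.variables_initializer`); the `Var` class is a stand-in, not a real API:

```python
class Var:
    """Stand-in for a graph variable with an initialization flag."""
    def __init__(self, name):
        self.name = name
        self.initialized = False

    def initialize(self):
        self.initialized = True

def initialize_missing(all_vars):
    """Initialize only the variables a new inference object created,
    leaving already-trained variables (e.g. the Q-model) untouched.
    Returns the names of the freshly initialized variables."""
    fresh = [v for v in all_vars if not v.initialized]
    for v in fresh:
        v.initialize()
    return [v.name for v in fresh]
```

Even with this, one has to be careful not to re-initialize the Q-model variables carried over from the previous inference object, which is why the whole dance feels fragile.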

Is there an easier way to do this?