Scaling batch inference when batch size varies

Hi,

I am trying to fit a Gaussian process model with global parameters to several individual trajectories. The data set is quite large, so batch training seems like a good idea (essentially KLqp for the length scale and variance).
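
For concreteness, here is a minimal sketch of the kind of setup I mean (names like `D`, `X_ph`, `y_ph`, `next_batch` and the jitter value are placeholders, not my exact code):

```python
import tensorflow as tf
import edward as ed
from edward.models import MultivariateNormalTriL, Normal
from edward.util import rbf

D = 1  # input dimension of a trajectory (placeholder value)
X_ph = tf.placeholder(tf.float32, [None, D])
y_ph = tf.placeholder(tf.float32, [None])
M = tf.shape(X_ph)[0]  # number of observations in the current batch

# global hyperparameters, modeled on the log scale so they stay positive
log_lengthscale = Normal(loc=0.0, scale=1.0)
log_variance = Normal(loc=0.0, scale=1.0)

# GP prior over the observations of one trajectory
K = rbf(X_ph,
        lengthscale=tf.exp(log_lengthscale),
        variance=tf.exp(log_variance))
y = MultivariateNormalTriL(loc=tf.zeros([M]),
                           scale_tril=tf.cholesky(K + 1e-6 * tf.eye(M)))

# variational approximations for the two global parameters
q_log_lengthscale = Normal(loc=tf.Variable(0.0),
                           scale=tf.nn.softplus(tf.Variable(0.0)))
q_log_variance = Normal(loc=tf.Variable(0.0),
                        scale=tf.nn.softplus(tf.Variable(0.0)))

inference = ed.KLqp({log_lengthscale: q_log_lengthscale,
                     log_variance: q_log_variance},
                    data={y: y_ph})
```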

I followed the example at http://edwardlib.org/tutorials/batch-training and it runs (note: for my data set I initially got errors about some variables not being initialized, which I resolved by manually executing the steps of the inference.run() method).
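
Concretely, instead of inference.run() I now do roughly the following, mirroring the tutorial (`next_batch()` stands in for my actual trajectory iterator):

```python
inference.initialize(n_iter=1000)
sess = ed.get_session()
tf.global_variables_initializer().run()  # this was the step I was missing

for _ in range(inference.n_iter):
    X_batch, y_batch = next_batch()  # one trajectory per batch
    info_dict = inference.update(feed_dict={X_ph: X_batch, y_ph: y_batch})
    inference.print_progress(info_dict)

inference.finalize()
```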

I feed the training data for each individual trajectory separately, so the number of observations per batch varies substantially. Do I need to worry about scaling and, if so, how do I adjust the scaling factor for each batch individually?
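
My current guess is to feed the scale factor through a placeholder, so each batch can get its own N / M, something like the sketch below (`N_total` is made up here; it would be the total number of observations across all trajectories). Is this the intended way?

```python
scale_ph = tf.placeholder(tf.float32, [])  # per-batch scale factor
inference.initialize(n_iter=1000, scale={y: scale_ph})
sess = ed.get_session()
tf.global_variables_initializer().run()

N_total = 50000  # total observations across all trajectories (made up)
for _ in range(inference.n_iter):
    X_batch, y_batch = next_batch()
    # scale this batch's likelihood by N / M for the current trajectory
    inference.update(feed_dict={X_ph: X_batch, y_ph: y_batch,
                                scale_ph: float(N_total) / len(y_batch)})
```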