Scaling batch inference when batch size varies

I am trying to fit a Gaussian process model with global parameters (essentially length scale and variance) to several individual trajectories. The data set is quite large, so batch learning with KLqp seems like a good idea.

I followed the example in and it is running. (Note: for my data set I initially got errors about some variables not being initialized, which I resolved by manually executing the steps in the method.)

I feed the training data for each trajectory separately, so the number of observations per batch varies substantially. Do I need to worry about scaling, and if so, how do I adjust the scaling factor for each batch individually?
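For context, here is my current understanding of the scaling factor: with minibatches, the data (log-likelihood) term is multiplied by N/M, total observations over batch size, so that its expectation matches the full-data sum. Below is a minimal NumPy sketch of what I assume recomputing N/M per batch would look like when M varies; the names are hypothetical stand-ins and this is not actual Edward code:

```python
import numpy as np

# Hypothetical stand-in: treat each observation's "log-likelihood
# contribution" as just its value, so the full-data term is data.sum().
rng = np.random.default_rng(0)
N = 1000
data = rng.uniform(1.0, 2.0, size=N)
full_term = data.sum()

estimates = []
for _ in range(2000):
    M = int(rng.integers(50, 200))          # batch size varies per batch
    batch = rng.choice(data, size=M, replace=False)
    scale = N / M                           # recomputed for THIS batch
    estimates.append(scale * batch.sum())   # scaled minibatch term

# Averaged over many batches, the scaled minibatch term should match
# the full-data term, i.e. the estimate is unbiased.
print(np.mean(estimates), full_term)
```

Is this the right picture, i.e. should I simply recompute scale = N/M from the size of whatever batch I feed in, or does something else need to change when batch sizes differ?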