Credible intervals remain constant

I am not sure whether this makes any sense or I have simply done something plainly wrong in the code. My dataset has the following inputs and outputs, stacked horizontally:

[plot: X_train_complete]

[plot: Y_train_complete]

The problem is that even when I try a recursive learning approach, feeding the data in sequentially instead of showing it all of the data at once, the credible intervals remain pretty much constant throughout. As far as I can tell, the neural network part is working well, but the probabilistic part is not. Since all of the output curves end up at zero, shouldn't the model become more certain as the predictions approach zero? Moreover, not only do the intervals fail to narrow, but after adding two more sigmas to the interval they become very wide, even if I run the inference for many iterations. Here's an example:

[plot: KLqp_after_5_batteries]
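In case it helps, here is a minimal sketch of the kind of setup I am describing: a one-hidden-layer Bayesian network fitted with KLqp, plus the way I obtain the ±2σ band from the variational posterior. This is not my actual code; the layer sizes, observation noise, and variable names are placeholder assumptions (the real data are X_train_complete / Y_train_complete, and in the recursive variant I fit one battery at a time rather than the full set).

```python
# Minimal sketch (not my exact code): Bayesian NN + KLqp + +/- 2 sigma band.
# Shapes, noise scale and names below are illustrative assumptions.
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

D, H, N = 1, 20, 100                                    # assumed dimensions
X_train = np.random.rand(N, D).astype(np.float32)       # stand-in for X_train_complete
y_train = np.random.rand(N).astype(np.float32)          # stand-in for Y_train_complete

X = tf.placeholder(tf.float32, [N, D])

# Priors over the network weights.
W_0 = Normal(loc=tf.zeros([D, H]), scale=tf.ones([D, H]))
b_0 = Normal(loc=tf.zeros(H), scale=tf.ones(H))
W_1 = Normal(loc=tf.zeros([H, 1]), scale=tf.ones([H, 1]))
b_1 = Normal(loc=tf.zeros(1), scale=tf.ones(1))

def neural_network(x, W_0, b_0, W_1, b_1):
    h = tf.tanh(tf.matmul(x, W_0) + b_0)
    return tf.reshape(tf.matmul(h, W_1) + b_1, [-1])

# Likelihood with fixed observation noise (assumed value).
y = Normal(loc=neural_network(X, W_0, b_0, W_1, b_1), scale=0.1 * tf.ones(N))

# Mean-field variational posterior over the weights.
def q_normal(shape, name):
    return Normal(loc=tf.get_variable(name + "/loc", shape),
                  scale=tf.nn.softplus(tf.get_variable(name + "/scale", shape)))

qW_0, qb_0 = q_normal([D, H], "qW_0"), q_normal([H], "qb_0")
qW_1, qb_1 = q_normal([H, 1], "qW_1"), q_normal([1], "qb_1")

inference = ed.KLqp({W_0: qW_0, b_0: qb_0, W_1: qW_1, b_1: qb_1},
                    data={X: X_train, y: y_train})
inference.run(n_iter=5000, n_samples=5)
# (In the recursive variant, the fitted q parameters would be carried over
# as the priors before fitting the next battery.)

# Credible band: sample weights from q, push the inputs through the network,
# then take mean +/- 2 std over the sampled predictions.
sess = ed.get_session()
y_pred = neural_network(X, qW_0.sample(), qb_0.sample(),
                        qW_1.sample(), qb_1.sample())
preds = np.stack([sess.run(y_pred, feed_dict={X: X_train}) for _ in range(100)])
mean, std = preds.mean(axis=0), preds.std(axis=0)
lower, upper = mean - 2 * std, mean + 2 * std
```

The band computed at the end of that sketch is the one that stays essentially constant in the plot above.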

My question is: is this a sanity check telling me that I have done something wrong, or are these results bad but not actually illogical? Link to the code, for the curious.

Cheers