Softmax logits start to take on value NaN after first iteration


#1

I have a model that worked well on two previous datasets. It feeds multivariate logits into a softmax. When I started training on the new dataset, the first log-probabilities came out as very small numbers, but did not go out of range.

returned combined_prob =
[[3.7668025322495853e-81 2.2536902189765332e-83]
 [5.1639261639860367e-81 3.0895938365906408e-83]
 [5.6208860329797729e-81 3.3629944139766141e-83]
 [6.1139865878104518e-81 3.6580180813654438e-83]
 [6.4315076637965463e-81]…]
returning log_final_prob =
[[-189.70496 -194.82378]
 [-189.38948 -194.50832]
 [-189.30469 -194.42352]
 [-189.2206 -194.33943]
 [-189.16997]…]
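For context on why values this small are dangerous: probabilities around 1e-81 are close to the limits of floating-point range. A small NumPy sketch (illustrative only, not the actual model code) shows that exponentiating log-probabilities like -190 underflows to 0 in float32, and even in float64 a product of a few such probabilities underflows, after which log() returns -inf and gradients become NaN:

```python
import numpy as np

# A log-probability around -190, like the ones printed above.
# In float32 its exponential is already unrepresentable and underflows to 0:
p32 = np.exp(np.float32(-190.0))
print(p32)  # 0.0

# Even in float64, multiplying just a handful of probabilities of this
# magnitude underflows (1e-81 ** 5 = 1e-405 is below float64's minimum):
probs = np.full(5, 1e-81)
joint = np.prod(probs)
print(joint)  # 0.0

# log(0) = -inf, and -inf propagates to NaN as soon as gradients touch it:
with np.errstate(divide="ignore"):
    print(np.log(joint))  # -inf
```

This kind of underflow, rather than the optimizer, is the usual culprit when logits turn to NaN on the second step.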

Then, when the second iteration started, the logits became all NaN:
x_logits=[0][[[[nan nan][nan nan]][[nan nan][nan nan]][

When I looked at the results from the previous datasets, the log-probabilities were also small, but larger than these:
returned combined_prob =
[[1.3671217001712804e-64 1.0386060698097945e-65]
 [5.1544762073173032e-65 3.9154561505711869e-66]
 [7.343496514211793e-67 5.5782213630433148e-68]
 [1.54969592578499e-67 1.1771704259268e-68]
 [1.3523815348803295e-64]…]
returning log_final_prob =
[[-151.43477 -154.01218]
 [-152.41019 -154.98772]
 [-156.66141 -159.23894]
 [-158.21716 -160.79469]
 [-151.4456]…]

Is the difference in log-probability magnitude the cause of the problem, i.e. did this dataset finally push the computation out of range? Is there a way to adjust the model to stay within range?
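One common way to stay within range is to never materialize the raw probabilities at all and work entirely in log space, using the log-sum-exp trick (subtract the max logit before exponentiating). A minimal NumPy sketch of the idea, assuming your pipeline can be rearranged to compute log-softmax directly rather than softmax followed by log:

```python
import numpy as np

def log_softmax(logits, axis=-1):
    """Numerically stable log-softmax via the log-sum-exp trick:
    shifting by the row max keeps every exp() argument <= 0."""
    shifted = logits - np.max(logits, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))

logits = np.array([-189.70496, -194.82378])

# Stable: finite log-probabilities whose exponentials sum to 1.
print(log_softmax(logits))

# Naive softmax in float32 would be 0/0 -> nan, since exp(-189.7)
# underflows to 0 in single precision:
print(np.exp(np.float32(-189.70496)))  # 0.0
```

TensorFlow has the same idea built in as `tf.nn.log_softmax`, so if the NaNs come from a `tf.log(tf.nn.softmax(...))` style computation, switching to the fused log-softmax (or to a loss that takes logits directly) should keep the values representable regardless of how extreme this dataset's logits are.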

I tried inference.run(optimizer=tf.train.AdamOptimizer(learning_rate=4e-4)); it made no difference.