Thanks for the topic modeling example in examples/deep_exponential_family.py. If I understand the model correctly, the main outputs would be qW0 (topics-to-words) and qz3 (documents-to-topics).
The way I understand it, I can get the top words for each topic from qW0 by ranking the values within each row (a higher value meaning the word is more typical of the topic). Similarly, I can get the top topics per document by ranking the values within each row of qz3.
It is less clear how I can normalize these values to get "probabilities". Since the minimum value in these matrices is not 0.0 but slightly above it, should I first subtract the row-wise minimum and then normalize by the row-wise sum?
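To make the question concrete, here is a minimal numpy sketch of the two options I'm weighing. The matrix `W` is a made-up stand-in for the posterior-mean values of qW0 (topics x words), assuming all entries are positive as they would be under a Gamma posterior:

```python
import numpy as np

# Hypothetical stand-in for the posterior-mean matrix of qW0 (topics x words);
# entries are strictly positive, as Gamma-distributed values would be.
W = np.array([[0.9, 0.1, 0.5],
              [0.2, 1.4, 0.3]])

# Option A: divide each row by its sum (valid when entries are nonnegative).
probs_sum = W / W.sum(axis=1, keepdims=True)

# Option B: subtract the row-wise minimum first, then normalize.
# Note this forces the smallest entry in each row to probability 0.
shifted = W - W.min(axis=1, keepdims=True)
probs_shifted = shifted / shifted.sum(axis=1, keepdims=True)

# Top words per topic: indices of each row sorted in descending order.
top_words = np.argsort(-W, axis=1)
```

Option B changes the relative proportions and zeroes out the smallest entry per row, which is why I'm unsure it is the right way to interpret these values as probabilities.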
Thanks for any input on this.