Are two mixture models with a shared categorical distribution dependent or independent?

Hej,
I’d like two random variables: one that samples from a mixture of Gaussians, and along with it a dependent OneHotCategorical random variable that indicates which individual Gaussian of the mixture was picked.
Let’s give it a try and first build a mixture of Gaussians:

import numpy as np
import edward as ed
from edward.models import Normal, Categorical, Mixture, Dirichlet, OneHotCategorical

mu = np.array([0., 1., 2., 3.], dtype=np.float32)  # component means
k = 4     # number of mixture components
n = 1000  # number of samples

# Mixture weights with a symmetric Dirichlet prior.
d = Dirichlet(np.ones(k, dtype=np.float32))
# Component assignments, intended to be shared by both mixtures.
c = Categorical(probs=d, sample_shape=n)
components = [Normal(loc=mu[kk], scale=1., sample_shape=n) for kk in range(k)]
x = Mixture(cat=c, components=components, sample_shape=n)

and another mixture of “deterministic” OneHotCategorical random variables:

# Logits of +/-1000 make each OneHotCategorical effectively deterministic:
# component kk emits the one-hot vector e_kk (almost surely).
oneHotLogits = np.eye(k, k, dtype=np.float32)
oneHotLogits[oneHotLogits == 0] = -1000
oneHotLogits[oneHotLogits == 1] = 1000
oneHotComponents = [OneHotCategorical(logits=oneHotLogits[kk, :], sample_shape=n) for kk in range(k)]
y = Mixture(cat=c, components=oneHotComponents, sample_shape=n)
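
As a sanity check, each of these components should essentially always return its own one-hot vector (a minimal sketch, assuming the code above has run):

sess = ed.get_session()
# Component 0 should always emit e_0 = (1, 0, 0, 0).
samples = sess.run(oneHotComponents[0])
assert np.all(samples[:, 0] == 1)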

Do x and y coherently use the same samples of c?
Is there any way to check this? Or is there a simpler way to do this?

Cheers, Rasmus

To test:

sess = ed.get_session()
x_cat_sample, y_cat_sample = sess.run([x.cat, y.cat])
assert np.array_equal(x_cat_sample, y_cat_sample)

For a more end-to-end test, define the Normals in x with well-separated modes: for example, set scale to 1e-8 and keep loc at 0, 1, 2, and 3.
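
Concretely, that test might look like this (a sketch reusing mu, c, k, n, and sess from above, and redefining x’s components with the tighter scale):

components = [Normal(loc=mu[kk], scale=1e-8, sample_shape=n) for kk in range(k)]
x = Mixture(cat=c, components=components, sample_shape=n)

x_sample, y_sample = sess.run([x, y])
# With near-point-mass components, round(x) recovers x's component
# assignment and argmax(y) recovers y's; if the two mixtures share
# draws from c, these agree for every sample and this prints 1.0.
print(np.mean(np.round(x_sample) == np.argmax(y_sample, axis=1)))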

I predict both will pass. c is a Categorical random variable, which specifically means it is a Categorical TensorFlow Distribution associated with a sample tensor c* ~ p(c) in the TensorFlow graph (c.value). This sample tensor is the same in both x and y because the same c is passed to both.
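
One quick way to see this, assuming Edward forwards the cat attribute to the wrapped distribution, is an object-identity check:

# Both mixtures hold the very same Categorical object, hence the same
# sample tensor in the graph.
print(x.cat is y.cat)  # True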

Actually, the first test will pass but the second test will fail. x and y each have their own sample tensor (x.value and y.value). These tensors are produced by calling {x,y}.sample(), whose implementation draws a new value from the Categorical random variable rather than reusing c.value.

Thank you. I will give it a try.
That the second test fails implies that a neural network cannot be trained to classify x using y as the training targets, since x and y are sampled independently. Right?
Any ideas or suggestions for a construction like the one above, but with x and y dependent on each other?
How could this be implemented? I’d love to give it a try.
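
One possible construction, sketched below (not from the thread): draw the assignments z explicitly once, then derive both x and y from that single draw, so they are coherent by construction. It relies on Edward converting a RandomVariable to its sample tensor when used inside TensorFlow ops:

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Categorical, Dirichlet, Normal

mu = np.array([0., 1., 2., 3.], dtype=np.float32)  # component means
k = 4     # number of components
n = 1000  # number of samples

d = Dirichlet(np.ones(k, dtype=np.float32))
# Draw the component assignments once, explicitly.
z = Categorical(probs=d, sample_shape=n)    # shape (n,)
# x and y are both functions of the same draw of z.
x = Normal(loc=tf.gather(mu, z), scale=1.)  # shape (n,)
y = tf.one_hot(z, depth=k)                  # shape (n, k)

sess = ed.get_session()
x_sample, y_sample, z_sample = sess.run([x, y, z])
# y's argmax always matches the assignment that generated x.
assert np.array_equal(np.argmax(y_sample, axis=1), z_sample)

With scale=1. the marginal distribution of x is the same mixture as before, but x and y now hinge on a single shared z, so training a classifier to predict y from x should work.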