I have a (beginner's) question about using Bayesian neural networks for classification. In the tutorial on Bayesian neural networks on the website, the output of the neural network is fed into a Gaussian random variable like this:

```
y = Normal(loc=neural_network(x), scale=tf.ones(K) * 0.1)
```

and subsequently

```
inference = ed.KLqp({W_0: qW_0, b_0: qb_0,
W_1: qW_1, b_1: qb_1}, data={y: y_train})
```

For my particular example, I want the network to classify images from 10 different categories. I tried making a categorical distribution like this:

```
y = Categorical(logits=neural_network(x), dtype=tf.int64)
```

However, when I feed that into the `feed_dict` in the inference step, I get an error because the shapes don't match up. My variable `y` now has shape `(1,)`, while each element of my `y_train` data has shape `(10,)` (a one-hot, one-out-of-ten representation). This makes me wonder what the correct way is, in Edward, to use a Bayesian neural network for classification.
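My own guess at the cause of the mismatch: a `Categorical` random variable produces integer class indices, whereas my labels are one-hot vectors. Here is a minimal NumPy sketch (with made-up labels, independent of Edward) of the conversion I suspect is needed before feeding the data in:

```python
import numpy as np

# Hypothetical example data: 5 one-hot labels over 10 classes,
# i.e. the "one out of 10" representation described above.
y_train_onehot = np.eye(10)[[3, 1, 4, 1, 5]]
print(y_train_onehot.shape)  # (5, 10)

# A Categorical variable emits integer class indices, so the labels
# fed to inference should be integers, not one-hot vectors.
y_train_labels = np.argmax(y_train_onehot, axis=1).astype(np.int64)
print(y_train_labels.shape)  # (5,)
print(y_train_labels)        # [3 1 4 1 5]
```

Is converting the labels like this (and passing `data={y: y_train_labels}`) the intended approach, or is there a more idiomatic way in Edward?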