Before delving into code, it’s useful to reason about what your model is and what it means to infer its hidden structure via a posterior distribution.
In your example, you wrote a Bayesian neural network with a likelihood for y given x, parameterized by the weights and biases W_0, W_1, b_0, b_1. You then set up data xdata and ydata.
Performing inference means computing the posterior: a distribution over the weights and biases given the data. In particular, with an algorithm like HMC, you set up Empirical distributions, one approximating each of W_0, W_1, b_0, b_1; each Empirical holds the collection of samples that approximates that parameter's posterior. For more background, I recommend the pages linked from http://edwardlib.org/tutorials/.
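Edward's API aside, the core idea can be sketched in plain NumPy: HMC produces a sequence of samples, and the array of those samples *is* the empirical approximation to the posterior. The toy data, one-hidden-unit network, noise scale, and finite-difference gradients below are all illustrative stand-ins (your actual xdata/ydata, architecture, and autodiff gradients would replace them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data; your xdata/ydata would go here.
xdata = np.linspace(-2.0, 2.0, 20)
ydata = 2.0 * np.tanh(1.5 * xdata + 0.5) - 0.3 + 0.1 * rng.normal(size=20)

def log_post(theta):
    """Log posterior of a 1-hidden-unit net: N(0,1) priors, Gaussian noise."""
    W_0, b_0, W_1, b_1 = theta
    pred = W_1 * np.tanh(W_0 * xdata + b_0) + b_1
    log_lik = -0.5 * np.sum((ydata - pred) ** 2) / 0.1 ** 2
    log_prior = -0.5 * np.sum(theta ** 2)
    return log_lik + log_prior

def grad_log_post(theta, eps=1e-5):
    """Finite-difference gradient (a real library would use autodiff)."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (log_post(theta + e) - log_post(theta - e)) / (2.0 * eps)
    return g

def hmc_step(theta, step=0.01, n_leap=20):
    """One HMC transition: leapfrog integration + Metropolis accept/reject."""
    p0 = rng.normal(size=theta.shape)
    th = theta.copy()
    p = p0 + 0.5 * step * grad_log_post(th)
    for _ in range(n_leap):
        th = th + step * p
        p = p + step * grad_log_post(th)
    p = p - 0.5 * step * grad_log_post(th)  # undo the extra half step
    h0 = -log_post(theta) + 0.5 * p0 @ p0
    h1 = -log_post(th) + 0.5 * p @ p
    return th if np.log(rng.uniform()) < h0 - h1 else theta

# The "Empirical" approximation is just the stacked array of kept samples.
theta = np.zeros(4)
samples = []
for _ in range(500):
    theta = hmc_step(theta)
    samples.append(theta)
q_theta = np.stack(samples)  # shape (500, 4): draws over W_0, b_0, W_1, b_1
```

In Edward itself you would not hand-code this loop: you declare one Empirical variable per parameter and let the HMC inference object fill in its samples, which plays exactly the role of the `q_theta` array above.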