Aug 22, 2017
Many thanks for this article — it helped me get more familiar with using Edward for inference in Bayesian neural networks.
However, I think the way you are calculating the predictive uncertainty is incorrect. You construct your predictive posterior nn_post with the inferred means qweight.mean() and qbias.mean(), which effectively discards the uncertainty in those parameters. Sampling from nn_post and computing the moments will therefore always yield the variance you defined for nn (i.e., one in your case), regardless of the input.
To estimate the true variance of the prediction, repeat for i = 1…N:
- Sample from qweight and qbias.
- With those samples, i.e. {weight: qweight.sample(), bias: qbias.sample()}, draw a sample from nn for the data x you want to make predictions for.
This will give you N samples from the predictive posterior, from which you can calculate the mean and variance.
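The procedure above can be sketched in plain NumPy, independent of Edward. This is a minimal illustration, not the original post's code: it assumes a one-dimensional linear model y ~ N(w·x + b, σ²) with hypothetical Gaussian variational posteriors over w and b (standing in for qweight and qbias); the names and numbers are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variational posteriors; in Edward these draws would come
# from qweight.sample() and qbias.sample().
qweight_mean, qweight_std = 2.0, 0.3
qbias_mean, qbias_std = 0.5, 0.2
sigma = 1.0  # likelihood std, analogous to nn = Normal(loc=..., scale=1.0)

x = 1.5    # input we want a prediction for
N = 10000  # number of posterior samples

samples = np.empty(N)
for i in range(N):
    w = rng.normal(qweight_mean, qweight_std)  # sample from qweight
    b = rng.normal(qbias_mean, qbias_std)      # sample from qbias
    samples[i] = rng.normal(w * x + b, sigma)  # sample from nn given w, b

pred_mean = samples.mean()
pred_var = samples.var()
# For this linear-Gaussian toy case the true predictive variance is
# sigma^2 + x^2 * qweight_std^2 + qbias_std^2 = 1 + 0.2025 + 0.04 ≈ 1.24,
# i.e. strictly larger than the likelihood variance alone (1.0).
```

The key point: pred_var exceeds the fixed likelihood variance because the parameter uncertainty now propagates into the prediction, which plugging in qweight.mean() and qbias.mean() would have thrown away.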
