I'm sorry if I ignored previous comments. I get a lot of comments on these pieces, and I don't always have time to respond to them. I also find it difficult to respond via Medium.
I disagree with your starting premise: voting is not a statistical sampling exercise. Voting predates the notion of sampling by several hundred years. The only thing that connects the two is that the set of people who actually vote is usually smaller than the set of people eligible to vote, just as a sample is smaller than its population. If voting were a statistical sampling exercise, its validity would depend on the quality of the sample, and a better-constructed sample would trump the result of the vote. We don't do this.

There is no sense in which a well-constructed opinion poll can defeat or overturn the result of an election. We know that the set of people who actually vote is not a random sample of the population, yet we continue to accept voting as a legitimate means of collective decision-making (though we may of course believe it would be better if the voting population resembled the population at large).
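The statistical point here can be sketched in a few lines: if voting really were a sampling exercise, what would matter is how the sample is drawn, not how large it is. A small random poll estimates population support well, while a much larger self-selected "turnout" can miss badly. All numbers below (population size, support level, turnout rates) are invented purely for illustration.

```python
import random

random.seed(1)

# Hypothetical population of 1,000,000 voters; 52% support a proposal.
population_support = 0.52
N = 1_000_000
supporters = int(N * population_support)
population = [1] * supporters + [0] * (N - supporters)

# A small but genuinely random sample -- an opinion poll of 2,000 people.
poll = random.sample(population, 2_000)
poll_estimate = sum(poll) / len(poll)

# A far larger self-selected turnout: suppose supporters are somewhat
# less likely to show up than opponents (48% vs 60%, purely illustrative).
turnout = [v for v in population
           if random.random() < (0.48 if v == 1 else 0.60)]
turnout_estimate = sum(turnout) / len(turnout)

print(f"true support:               {population_support:.3f}")
print(f"random poll estimate:       {poll_estimate:.3f}")
print(f"self-selected turnout says: {turnout_estimate:.3f}")
```

The random poll of 2,000 lands within a point or two of the true 52%, while the turnout of roughly half a million people reports a minority in favour. Which is precisely why we do not treat elections as estimates to be corrected by better samples.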
You make a series of points about the need for a change threshold greater than 50%. Many people have argued for such supermajority requirements at various times. The main objection is that a higher threshold privileges the status quo, and there is no societal equivalent of an innocuous null hypothesis: neither "change" nor "no change" is a neutral default. Even if you could show that the referendum ought to have had a supermajority threshold, the fact is that it did not, and it is difficult to argue that a referendum is illegitimate by imposing an additional requirement after the fact.