I’ll make an exception to my rule against discussing individual data points on this one occasion. You write about the USHCN data: “The graph I use that I think establishes this to the level of a 99.99999% chance of political tampering is this”.
This analysis rests on the assumption that adjustments to the data should be randomly distributed. But why should that be true? Why should a data set not contain systematic errors? My microwave clock runs systematically fast; when I correct it, the adjustments are always in the same direction. There is no reason to assume that any data set contains only random errors.
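The clock analogy can be made concrete with a small simulation. This is my own illustrative sketch, not anything from the original exchange: a hypothetical thermometer with a constant warm bias. Correcting for a known bias pushes every adjustment the same way, so one-sided adjustments are exactly what a systematic instrument error predicts, with no tampering required.

```python
import random

random.seed(0)

BIAS = 0.5  # hypothetical systematic offset in the raw readings

# Simulated true temperatures and biased raw readings with a little
# measurement noise on top of the constant offset.
true_temps = [random.gauss(15.0, 2.0) for _ in range(1000)]
raw_temps = [t + BIAS + random.gauss(0.0, 0.1) for t in true_temps]

# Adjustment = corrected - raw. Correcting toward the true value means
# adjustment = true - raw = -(BIAS + noise), so nearly every adjustment
# points downward.
adjustments = [t - r for t, r in zip(true_temps, raw_temps)]

share_downward = sum(a < 0 for a in adjustments) / len(adjustments)
print(f"fraction of adjustments that are downward: {share_downward:.3f}")
```

Under the random-errors assumption the downward share would hover near 0.5; a systematic bias drives it to essentially 1.0, and no conclusion about tampering follows from that alone.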
Where do you want to take this from here? I know you want to be heard; it is less clear that you want to listen. Do you want to dig in on how your 99.99999% estimate is inherently flawed (systematic bias surely creeps in more than 0.000001% of the time)?