Thanks for the correction.
As is evident from posts I have made elsewhere in the past, I was aware of the notion that the block period must be significantly greater than the network propagation time, else the orphan rate can skyrocket. Essentially, the computational power of the network can be subdivided across numerous competing forks and wasted, thus enabling an attacker to build the longest chain with less than a majority of the computational power.
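To make that intuition concrete, here is a toy model (my own simplification, not from the thread): if honest block production is Poisson with rate 1/interval, the chance that a competing block is found while a block is still propagating, and hence that one of the two is orphaned, is roughly 1 − e^(−delay/interval). The specific delay and interval values below are illustrative, not measurements.

```python
import math

def orphan_rate(propagation_delay_s, block_interval_s):
    """Toy estimate: with Poisson block arrivals at rate 1/interval,
    the probability another block appears during the propagation
    window is 1 - exp(-delay/interval)."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

# Illustrative 10-second propagation delay against three block intervals:
for interval in (600, 15, 2):  # Bitcoin-like, fast, very fast (seconds)
    print(interval, round(orphan_rate(10, interval), 3))
```

The point of the sketch is only the shape of the curve: once the block interval approaches the propagation delay, the orphan rate ceases to be negligible and most of the network's work is wasted on forks.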
So you’re correct that the block period (or more precisely, something significantly less than it) should be characterized as a synchrony bound on the network propagation latency.
However, afaics the quoted summary by Vitalik is incorrect to claim that the likely outcome of increased network latency is a reduction in safety from 2f + 1 (to 3f + 1 and so forth), where f is the fraction of “faulty” (i.e. attacking) computational power. Rather than the minority attacker’s fork being accepted as valid, it is much more likely that the network will instead fail to converge on any consensus at all. To convince participants to accept the attacker’s fork as valid, they would have to be prevented from observing the other forks, whose existence indicates the high orphan rate.
Thus, probabilistic finality only ever realistically needs 2f + 1 safety (out of n total nodes), whereas deterministic finality requires 3f + 1 safety (where f is the number of faulty nodes).
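For concreteness, the two quorum rules translate into different fault tolerances for the same node count. The helper below is a hypothetical illustration of mine, not anything from the post: requiring n ≥ 2f + 1 tolerates f ≤ (n − 1)/2 faulty nodes, while n ≥ 3f + 1 tolerates only f ≤ (n − 1)/3.

```python
def max_faulty(n, quorum="2f+1"):
    """Largest f tolerated by n nodes under each quorum rule:
    n >= 2f + 1  ->  f <= (n - 1) // 2   (honest majority)
    n >= 3f + 1  ->  f <= (n - 1) // 3   (classic BFT bound)"""
    divisor = 2 if quorum == "2f+1" else 3
    return (n - 1) // divisor

print(max_faulty(100, "2f+1"))  # 49
print(max_faulty(100, "3f+1"))  # 33
```

So for 100 nodes the 2f + 1 rule tolerates 49 faulty nodes, while the 3f + 1 rule tolerates only 33, which is the sense in which moving from 2f + 1 to 3f + 1 is a reduction in safety margin.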
Tangentially, note that network latency is also a factor in selfish mining, wherein a minority (25–33%) of the computational power can theoretically amass a greater-than-proportional share of the rewards, yet this is not a reduction of safety. Although in theory, over time it enables the attacker to amass more resources and thus perhaps eventually attain more than 50% of the computational power.
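The 25–33% range comes from the Eyal–Sirer profitability threshold, which depends on γ, the fraction of honest hash power that builds on the attacker’s block when two blocks race (and network latency is exactly what determines γ). A quick sketch of that published formula:

```python
def selfish_mining_threshold(gamma):
    """Eyal-Sirer threshold: selfish mining beats honest mining when
    the attacker's hash-power share alpha exceeds
    (1 - gamma) / (3 - 2*gamma), where gamma is the fraction of honest
    miners that mine on the attacker's block during a tie."""
    return (1 - gamma) / (3 - 2 * gamma)

print(selfish_mining_threshold(0.0))  # 1/3: attacker always loses ties
print(selfish_mining_threshold(0.5))  # 0.25: attacker wins half of ties
```

An attacker with better network connectivity raises γ and thereby lowers the hash-power share needed for selfish mining to pay off, which is why latency matters here too.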