Unfortunately, there is no golden rule; the key factor is the usability of the data.
Taking an example from my primary expertise: teams will often resist adopting cycle time as a measure for their delivery. They rightfully argue that the size of work items varies, so cycle time as a measure is inherently inaccurate.
Cycle time, however, is highly usable. It is also extremely easy to capture (feasible) and quite reliable in what it shows.
At Campaign Monitor, the usability of this seemingly inaccurate measure lies in setting up a focused learning loop.
We set a target of 5 days for work to be delivered, and everything above 10 days is treated as an outlier and an opportunity for learning and discussion.
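The learning loop above boils down to a simple rule: flag any item whose cycle time exceeds the outlier threshold. A minimal sketch of that rule, using hypothetical ticket names and dates (only the 5-day target and 10-day threshold come from the text):

```python
from datetime import date

TARGET_DAYS = 5    # delivery target from the text
OUTLIER_DAYS = 10  # anything above this is an outlier worth discussing

# Hypothetical work items: (name, started, finished)
items = [
    ("ticket-101", date(2023, 3, 1), date(2023, 3, 4)),
    ("ticket-102", date(2023, 3, 1), date(2023, 3, 13)),
    ("ticket-103", date(2023, 3, 6), date(2023, 3, 10)),
]

def cycle_time(started, finished):
    """Calendar days between starting and finishing a work item."""
    return (finished - started).days

# Collect outliers: items whose cycle time exceeds the threshold.
outliers = [
    (name, cycle_time(s, f))
    for name, s, f in items
    if cycle_time(s, f) > OUTLIER_DAYS
]

for name, days in outliers:
    print(f"{name}: {days} days, bring to the next retro")
```

The point is not precision but the conversation each flagged item triggers: the team reviews every outlier rather than debating whether the metric is exact.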
This enables a number of other practices for us as well, even though the metric is inaccurate.