The consequences of this “gaming,” especially when it is done by algorithms rather than by humans, need closer attention. Consider the following (from this post):
If I churn out random false models, a certain portion of them will pass as false positives. If I publish only those models that passed the threshold, in the extreme case all of them may be false.
In this example, the algorithm is not “gaming” the process, per se; it is merely sampling in a highly biased fashion. But, in the end, the consequence is the same.
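The selection effect described in the quote is easy to demonstrate with a small simulation. The sketch below (a hypothetical setup, not from the original post) treats each “model” as a true-null hypothesis test, whose p-value is uniform on [0, 1]; selecting only the models that pass the significance threshold yields a published set made entirely of false positives.

```python
import random

random.seed(0)

def run_null_experiment():
    """Simulate testing a model with no real effect: under the null
    hypothesis, the p-value is uniformly distributed on [0, 1]."""
    return random.random()

ALPHA = 0.05        # conventional significance threshold
N_MODELS = 10_000   # how many random false models we "churn out"

# Generate the models and keep ("publish") only those that pass.
p_values = [run_null_experiment() for _ in range(N_MODELS)]
published = [p for p in p_values if p < ALPHA]

# Roughly ALPHA * N_MODELS models clear the threshold, yet every one
# of them is a false positive: the false-discovery rate among the
# published models is 100%, even though no individual test was rigged.
print(f"published {len(published)} of {N_MODELS} models, all false")
```

No single test here is manipulated; the bias enters entirely through the publication filter, which is why the outcome matches deliberate “gaming” even though the sampling step is blind.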