The stats are not biased. They look at a huge sample (the largest collection of software projects with issue tracking in the world), and don’t hand-pick projects. Your suggestion would introduce bias by allowing the subjective selection of samples; using that approach, you could make the study say whatever you wanted it to.
However, I am unaware of any evidence that static types have a significant impact on bug density. I’d love to find that evidence, because better tools for reducing bugs are good for everyone. If you find any such evidence, please share it.
In particular, if you know of any study in which common best practices such as TDD, linting, and code review are present in both the control and test groups, I’d love to see it. I’m unaware of any evidence that static types catch a substantial number of bugs that those combined measures miss, and such a finding would be big news we could all celebrate.
As for large projects, lots of people believe static types work better at scale, but again, I haven’t seen good evidence to support that assertion. Please link to any you know of, and remember that anecdotes and gut feelings are not good evidence.