This post isn’t fair; the bug statistics you show are biased.

The stats are not biased. They look at a huge sample set (the largest collection of software projects with issue tracking in the world), and don’t hand-pick projects. Your suggestion would introduce bias by allowing the subjective selection of samples. Using that approach, you could make the study say whatever you want it to.

I am aware of the advantages of static types. I like them and use them. I have even founded a project to better express types and function signatures in JavaScript.

However, I am unaware of any evidence that static types have a big impact on bug density. I’d love to find that evidence, because giving us better tools to reduce bugs is good for everyone. If you find any such evidence, please share.

In particular, if you know of any evidence where common best practices such as TDD, linting, and code review are included in both the control and test groups, I’d love to see it. I’m unaware of any evidence that static types catch a substantial number of bugs that those combined measures miss; if they did, that would be big news we could all celebrate.

As for large projects, lots of people think static types are better for large projects, but again, I haven’t seen good evidence to support that assertion. Please link if you know of any, and remember, anecdotes and gut feelings are not good evidence.
