Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
This quote from Donald Knuth is now so famous that the phrase “premature optimization” has become a well-known derogatory term in software development. But what does “premature optimization” actually mean, and why is it generally considered so bad?
Optimization is considered premature when it is performed before any measurements have been taken to determine whether the application has performance issues at all. Only after measurements show that there are indeed performance issues, and which parts of the application code cause them, should optimization be performed. There are two main downsides to optimizing before measuring:
- Wasted time and effort: optimizing the noncritical parts of a program has no noticeable impact on overall performance, and since noncritical parts usually far outnumber the critical ones, this is the likely outcome
- Optimized code is often more complex, and sometimes downright convoluted, compared to code that has not been optimized. Such code is generally more difficult to read and comprehend, and therefore more difficult to debug and maintain
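Both downsides can be illustrated with a small, hypothetical example (the function names and timing setup are my own, chosen for illustration): a readable version of a function next to a micro-optimized one, with an actual measurement to show why the clever version is not worth its cost in clarity.

```python
import timeit

def is_even_clear(n: int) -> bool:
    """Straightforward version: easy to read, debug, and maintain."""
    return n % 2 == 0

def is_even_tricky(n: int) -> bool:
    """Micro-optimized with a bitwise AND: harder to read,
    and on modern interpreters not meaningfully faster."""
    return n & 1 == 0

# Measure before drawing conclusions: run each version many times
# and compare. The difference, if any, is in the noise.
clear_time = timeit.timeit(lambda: is_even_clear(12345), number=100_000)
tricky_time = timeit.timeit(lambda: is_even_tricky(12345), number=100_000)
print(f"clear:  {clear_time:.4f}s")
print(f"tricky: {tricky_time:.4f}s")
```

The point is not that bitwise tricks are always wrong, but that without a measurement there is no evidence this function is critical in the first place, so the readable version should win by default.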
Use Small Functions
Reuse Common Code
Use Descriptive Variable Names