
Premature optimization is the root of all evil.

As Donald Knuth said,

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Regardless of language, write your code to be as clear as you can. If looping twice really results in clearer code, go ahead and write that. A decent compiler may optimize away the second loop, or its cost may be dwarfed by bottlenecks elsewhere in your program. Unless your software is very widely used, the time your successor spends deciphering a fast but complicated block of code will likely cost more than the fractionally increased runtime.
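To make the trade-off concrete, here is a hypothetical sketch in Python (the order data is invented for illustration): two plain passes, each stating a single intent, versus one fused loop doing both jobs at once.

    # Clear: two passes over the data, each with one obvious purpose.
    orders = [("widget", 3.50), ("gadget", -1.00), ("gizmo", 7.25)]
    total = sum(price for _, price in orders)
    suspicious = [name for name, price in orders if price < 0]

    # "Optimized": one pass, but the reader must untangle two jobs at once.
    total2, suspicious2 = 0.0, []
    for name, price in orders:
        total2 += price
        if price < 0:
            suspicious2.append(name)

    assert total == total2 and suspicious == suspicious2

Both versions are O(n); fusing the loops saves one traversal at the cost of making each intent harder to see.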

If, after testing, you find that a given program is running unacceptably slowly, refactoring to the more complex approach may be appropriate. But such situations are often difficult to identify in advance; because of this uncertainty more than anything else, you should err on the side of clarity over "performance" whenever the two are in contention.
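If it does come to that, measure before you rewrite, so the extra complexity lands where the time is actually spent. Here is a minimal sketch using Python's standard-library profiler; load_data and summarize are hypothetical stand-ins for your own code:

    import cProfile
    import pstats

    def load_data():
        return list(range(1_000_000))

    def summarize(xs):
        return sum(x * x for x in xs)

    def main():
        return summarize(load_data())

    # Profile one run, save the stats, and print the five biggest hotspots.
    cProfile.run("main()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)

Only the functions that dominate the cumulative-time column are worth complicating.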

