Source Link
Philip JF

There are several arguments for lazy evaluation that I think are compelling:

  1. Modularity. With lazy evaluation you can break code up into parts. For example, suppose your problem is to "find the first ten reciprocals of elements in a list such that the reciprocals are less than 1." In something like Haskell you can write

    take 10 . filter (<1) . map (1/)
    

but this is just incorrect in a strict language, since if you give it [2,3,4,5,6,7,8,9,10,11,12,0] you will be dividing by zero. See sacundim's answer for why this is awesome in practice.
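A quick sketch of the point (the helper name, and using `undefined` as a stand-in for the troublesome element, are mine, not from the answer):

```haskell
-- First ten reciprocals less than 1, assembled from three reusable parts.
firstTenSmallRecips :: [Double] -> [Double]
firstTenSmallRecips = take 10 . filter (< 1) . map (1 /)

-- Laziness means the pipeline stops as soon as `take 10` is satisfied,
-- so the bad last element is never even reached:
demo :: [Double]
demo = firstTenSmallRecips ([2..12] ++ [undefined])
-- demo == [1/2, 1/3, ..., 1/11]; `undefined` is never forced.
```

Under strict evaluation, `map (1 /)` would traverse the whole list up front and force the bad element before `take` could cut it off.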

  2. More things work. Strictly (pun intended) more programs terminate with non-strict evaluation than with strict evaluation. If your program terminates with an "eager" evaluation strategy, it will also terminate with a "lazy" one, but the opposite is not true. You get things like infinite data structures (which are really only kinda cool) as specific examples of this phenomenon. More programs work in lazy languages.
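As a standard illustration of an infinite data structure (the classic `fibs`, not taken from the original answer):

```haskell
-- An infinite list of Fibonacci numbers, defined in terms of itself.
-- Under strict evaluation this definition would diverge; lazily, we can
-- demand any finite prefix of it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

firstTen :: [Integer]
firstTen = take 10 fibs   -- [0,1,1,2,3,5,8,13,21,34]
```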

  3. Optimality. Call-by-need evaluation is asymptotically optimal with respect to time. Although the major lazy languages (that being, essentially, Haskell) don't promise call-by-need, you can more or less expect an optimal cost model. Strictness analysers (and speculative evaluation) keep the overhead down in practice. Space is a more complicated matter.
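The practical difference between call-by-name and call-by-need is sharing: a bound thunk is evaluated at most once and its result is reused. A small sketch using `Debug.Trace` (the trace message is mine, purely for demonstration):

```haskell
import Debug.Trace (trace)

-- `expensive` is a thunk; under call-by-need it is computed at most once,
-- even though `x` is used twice below.
expensive :: Integer
expensive = trace "evaluating..." (sum [1..1000000])

main :: IO ()
main = do
  let x = expensive
  print (x + x)   -- "evaluating..." appears once, not twice
```

Under call-by-name the sum would be recomputed for each use of `x`; call-by-need memoizes the thunk after the first force.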

  4. Forces purity. Using lazy evaluation makes dealing with side effects in an undisciplined way a total pain, because, as you put it, the programmer loses control. This is a good thing. Referential transparency makes programming, refactoring, and reasoning about programs so much easier. Strict languages just inevitably cave to the pressure of having impure bits--something Haskell and Clean have resisted beautifully. This is not to say that side effects are always evil, but controlling them is so useful that this reason alone is enough to use lazy languages.
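To see why undisciplined effects mix badly with laziness, consider how unpredictable evaluation order is (this example is mine, again using `trace` only as a diagnostic):

```haskell
import Debug.Trace (trace)

-- Pretend each element performs a side effect when forced.
xs :: [Int]
xs = map (\n -> trace ("forcing " ++ show n) (n * n)) [1, 2, 3]

main :: IO ()
main = print (xs !! 2)
-- Only "forcing 3" fires: the first two elements are never forced. Effects
-- hidden inside pure code would thus run in a hard-to-predict order, or not
-- at all--which is why Haskell quarantines effects in IO instead.
```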
