3 years late, but I happened upon this question while researching a CSS/HTML problem. My Haskell is very rusty, but from what I can see this is the usual quick-and-easy way of converting a string to a floating-point number. Quick and easy, but not a good method for production code that needs accurate numbers: each per-digit division in the fractional part can lose, IIRC, up to half of the least-significant bit of the mantissa. I'm not familiar with the internals of the Unix and Windows libraries, but I remember seeing the microfiche for portions of the VAX/VMS math libraries back in the 1980s, and, again if I recall correctly, they used software-implemented 128-bit arithmetic (not supported by the hardware at the time) to preserve accuracy as the computations progressed.
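
To make the point concrete, here is a minimal sketch of what I mean (the names `naiveParse` and `singleDiv` are mine, not from the original question, and both handle only unsigned digits-and-a-dot input): the naive fold performs one division per fractional digit, while the alternative accumulates all the digits into one exact `Integer` and divides once by a power of ten.

```haskell
import Data.Char (digitToInt)

-- Quick-and-easy style: one division per fractional digit, so each
-- digit can introduce its own rounding error into the accumulator.
naiveParse :: String -> Double
naiveParse s =
  let (intPart, rest) = span (/= '.') s
      fracPart        = drop 1 rest
      intVal  = foldl (\acc c -> acc * 10 + fromIntegral (digitToInt c)) 0 intPart
      fracVal = foldr (\c acc -> (acc + fromIntegral (digitToInt c)) / 10) 0 fracPart
  in  intVal + fracVal

-- Less lossy style: read all the digits as one exact Integer and make a
-- single division by the appropriate power of ten, so the division
-- contributes at most one rounding step (the power of ten itself can
-- still round for very long fractions).
singleDiv :: String -> Double
singleDiv s =
  let (intPart, rest) = span (/= '.') s
      fracPart        = drop 1 rest
      digits = foldl (\acc c -> acc * 10 + toInteger (digitToInt c)) 0 (intPart ++ fracPart)
  in  fromInteger digits / (10 ^^ length fracPart)
```

For production code you would reach for the standard readers (`read`, or `readFloat` from `Numeric`) rather than either of these; the sketch is only meant to show that the per-digit division is where the accuracy leaks out.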