
Here is some simple code in an Xcode 7.3.1 playground:

var str = "8.7" print(Double(str))

The output is surprising: Optional(8.6999999999999993)

Also, Float(str) gives: 8.69999981

Any thoughts on why this happens? Any references would be appreciated.

Also, how should I then convert "8.7" to 8.7 as a Double (or Float)?

Edit

In Swift:

(str as NSString).doubleValue returns 8.7

Now, that is OK. But my question still does not have a complete answer: we have found an alternative, but why can we not rely on Double("8.7")? Please give a deeper insight into this.

Edit 2

("6.9" as NSString).doubleValue // prints 6.9000000000000004

So, the question opens up again.

  • use print(Double(str)!) Commented Sep 29, 2016 at 12:30
  • 4
    If your question is about 8.7 vs 8.69999981 then you should read Is floating point math broken? and What Every Computer Scientist Should Know About Floating-Point Arithmetic. Commented Sep 29, 2016 at 12:35
  • "Double-precision floating-point format is a computer number format that occupies 8 bytes (64 bits) in computer memory and represents a wide, dynamic range of values by using a floating point.", therefore you must specify the "precision" of the value ( precision is the number of digits in a number ). Commented Sep 29, 2016 at 12:44
  • 1
    @l'L'l, Martin but why would converting "8.7" into Double not lead to 8.7 but 8.69999999999991. What is the point in doing that? And if that is really to do at hardware level then, Swift should have handled it. (like it converts it to 8.7 for .doubleValue but not for Double("8.7")) Commented Sep 29, 2016 at 12:52
  • 2
    This question turned out to be more interesting than I thought initially ... Commented Sep 29, 2016 at 18:21

2 Answers


There are two different issues here. First – as already mentioned in the comments – a binary floating point number cannot represent the number 8.7 precisely. Swift uses the IEEE 754 standard for representing single- and double-precision floating point numbers, and if you assign

let x = 8.7

then the closest representable number is stored in x, and that is

8.699999999999999289457264239899814128875732421875
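One way to verify this in a playground is to print the stored value with a high precision. This is just a sketch using Foundation's String(format:) for illustration; the 48 fraction digits are an arbitrary choice, large enough to show the full value:

import Foundation

let x = 8.7
// Printing with many fraction digits reveals the nearest representable
// 64-bit value that is actually stored in x:
print(String(format: "%.48f", x))
// 8.699999999999999289457264239899814128875732421875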

Much more information about this can be found in the excellent Q&A Is floating point math broken?.


The second issue is: Why is the number sometimes printed as "8.7" and sometimes as "8.6999999999999993"?

let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)

let x = 8.7
print(x) // 8.7

Is Double("8.7") different from 8.7? Is one more precise than the other?

To answer these questions, we need to know how the print() function works:

  • If an argument conforms to CustomStringConvertible, the print function calls its description property and prints the result to the standard output.
  • Otherwise, if an argument conforms to CustomDebugStringConvertible, the print function calls its debugDescription property and prints the result to the standard output.
  • Otherwise, some other mechanism is used. (Not important for our purposes.)
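A minimal sketch of that dispatch, using a made-up Temperature type (the type and its properties are purely illustrative):

struct Temperature: CustomStringConvertible, CustomDebugStringConvertible {
    let celsius: Double
    var description: String { return "\(celsius) °C" }
    var debugDescription: String { return "Temperature(celsius: \(celsius.debugDescription))" }
}

let t = Temperature(celsius: 8.7)
print(t)        // 8.7 °C  -- description is used
let maybeT: Temperature? = t
print(maybeT)   // Optional(Temperature(celsius: 8.6999999999999993))  -- debugDescription is used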

The Double type conforms to CustomStringConvertible, therefore

let x = 8.7
print(x) // 8.7

produces the same output as

let x = 8.7
print(x.description) // 8.7

But what happens in

let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)

Double(str) is an optional, and struct Optional does not conform to CustomStringConvertible, but to CustomDebugStringConvertible. Therefore the print function calls the debugDescription property of Optional, which in turn calls the debugDescription of the underlying Double. Therefore – apart from being an optional – the number output is the same as in

let x = 8.7
print(x.debugDescription) // 8.6999999999999993

But what is the difference between description and debugDescription for floating point values? From the Swift source code one can see that both ultimately call the swift_floatingPointToString function in Stubs.cpp, with the Debug parameter set to false and true, respectively. This controls the precision of the number to string conversion:

  int Precision = std::numeric_limits<T>::digits10;
  if (Debug) {
    Precision = std::numeric_limits<T>::max_digits10;
  }

For the meaning of those constants, see http://en.cppreference.com/w/cpp/types/numeric_limits:

  • digits10 – number of decimal digits that can be represented without change,
  • max_digits10 – number of decimal digits necessary to differentiate all values of this type.

So description creates a string with fewer decimal digits. That string can be converted to a Double and back to a string, giving the same result. debugDescription creates a string with more decimal digits, so that any two different floating point values produce different output.
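For a 64-bit Double those values are digits10 = 15 and max_digits10 = 17. A rough sketch of the effect, using Foundation's %g formatting (which is not what Swift itself uses, but it applies the same two precisions):

import Foundation

let x = 8.7
print(String(format: "%.15g", x)) // 8.7                 (like description)
print(String(format: "%.17g", x)) // 8.6999999999999993  (like debugDescription)

// The shorter string still converts back to the identical stored Double:
print(Double(x.description)! == x) // true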


Summary:

  • Most decimal numbers cannot be represented exactly as a binary floating point value.
  • The description and debugDescription methods of the floating point types use a different precision for the conversion to a string. As a consequence,
  • printing an optional floating point value uses a different precision for the conversion than printing a non-optional value.

Therefore in your case, you probably want to unwrap the optional before printing it:

let str = "8.7"
if let d = Double(str) {
    print(d) // 8.7
}

For better control, use NSNumberFormatter or formatted printing with the %.<precision>f format.
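For example, a quick sketch of formatted printing (the two fraction digits are an arbitrary choice):

import Foundation

let d = Double("8.7")!
// A fixed number of fraction digits via a format string:
print(String(format: "%.2f", d)) // 8.70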

Another option can be to use (NS)DecimalNumber instead of Double (e.g. for currency amounts), see e.g. Round Issue in swift.
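A minimal sketch with NSDecimalNumber (Xcode 7 / Swift 2 API names); it stores the value in decimal, so "8.7" stays exact:

import Foundation

let amount = NSDecimalNumber(string: "8.7")
let extra = NSDecimalNumber(string: "0.1")
print(amount)                               // 8.7
print(amount.decimalNumberByAdding(extra))  // 8.8 -- exact decimal arithmetic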


2 Comments

Could you please take a look at the last edit in the question?
@v-i-s-h-a-l: ("6.9" as NSString).doubleValue prints "6.9" in my test. But even if it didn't: You cannot rely on a certain output, unless you use a number formatter.

I would use:

let doubleValue = NSNumberFormatter().numberFromString(str)?.doubleValue
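A usage sketch (Swift 2 / Xcode 7 API names; the explicit POSIX locale is my own addition, because NSNumberFormatter parses according to the current locale and the decimal separator may be "," instead of "."):

import Foundation

let str = "8.7"
let formatter = NSNumberFormatter()
formatter.locale = NSLocale(localeIdentifier: "en_US_POSIX") // assumed: force "." as decimal separator
if let doubleValue = formatter.numberFromString(str)?.doubleValue {
    print(doubleValue) // 8.7
}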

Comments
