
Does the standard guarantee that functions return the exact same result across all implementations?

Take, for example, pow(float,float) for 32-bit IEEE floats. Is the result identical across all implementations if the same two floats are passed in?

Or is there some flexibility that the standard allows with regard to tiny differences depending on the algorithm used to implement pow?

  • The std does not mandate float and double to be IEEE-754 types. They could be machine-specific types, with different ranges and precision. Commented Oct 18, 2015 at 18:22
  • If you have two machines that use the same representation for floats and doubles, and with the same settings regarding rounding, I don't know the answer. In all other cases, the answer is no, they can give different results. Commented Oct 18, 2015 at 18:25
  • Also two different compilers on the same machine that obeys IEEE-754. If the standard said the answer must be the same, it would have to define what the correct answer is. Commented Oct 18, 2015 at 18:35
  • Related: Does any floating point-intensive code produce bit-exact results in any x86-based architecture?: Summary: no in C, but yes in asm. Commented Dec 8, 2016 at 14:44

1 Answer


No, the C++ standard doesn't require the results of cmath functions to be the same across all implementations. For starters, you may not get IEEE-754/IEC 60559 floating point arithmetic.

That said, if an implementation does use IEC 60559 and defines __STDC_IEC_559__, then it must adhere to Annex F of the C standard (yes, your question is about C++, but the C++ standard defers to the C standard for C headers like math.h). Annex F states:

  • The float type matches the IEC 60559 single format.
  • The double type matches the IEC 60559 double format.
  • The long double type matches an IEC 60559 extended format, else a non-IEC 60559 extended format, else the IEC 60559 double format.

Further, it says normal arithmetic must follow the IEC 60559 standard:

  • The +, -, *, and / operators provide the IEC 60559 add, subtract, multiply, and divide operations.

It further requires sqrt to follow IEC 60559:

  • The sqrt functions in <math.h> provide the IEC 60559 square root operation.

It then goes on to describe the behavior of several other floating-point functions, most of which you probably aren't interested in for this question.

Finally, it gets to the math.h header, and specifies how the various math functions (i.e. sin, cos, atan2, exp, etc.) should handle special cases (i.e. asin(±0) returns ±0, atanh(x) returns a NaN and raises the "invalid" floating-point exception for |x| > 1, etc.). But it never nails down the exact computation for normal inputs, which means you can't rely on all implementations producing the exact same computation.

So no, it doesn't require these functions to behave the same across all implementations, even if the implementations all define __STDC_IEC_559__.


This is all from a theoretical perspective. In practice, things are even worse. CPUs generally implement IEC 60559 arithmetic, but the FPU can run in different rounding modes (so results will differ from machine to machine), and the compiler, depending on optimization flags, may make assumptions about your floating-point arithmetic that aren't strictly standards-conforming.

So in practice, it's even less strict than it is in theory, and you're very likely to see two computers produce slightly different results at some point or another.


A real-world example of this is glibc, the GNU C library. It maintains a table of known error limits for its math functions across different CPUs. If all C math functions were bit-exact, those tables would show 0 ULPs of error everywhere. But they don't: the tables show varying amounts of error in the C math functions. I think this sentence is the most interesting summary:

Except for certain functions such as sqrt, fma and rint whose results are fully specified by reference to corresponding IEEE 754 floating-point operations, and conversions between strings and floating point, the GNU C Library does not aim for correctly rounded results for functions in the math library[...]

The only things that are bit-exact in glibc are the things that are required to be bit-exact by Annex F of the C standard. And as you can see in their table, most things aren't.


Comments

Thanks for the detailed answer. So to guarantee the same result, one would need to write their own pow function.
Pretty much. In fact, for situations where bit-exactness is important (like in audio and video codecs), it's not uncommon to implement your own fixed-point arithmetic.
What do you mean by normal inputs in your sentence "But it never nails down the exact computation for normal inputs"?
@Zboson: I mean inputs that shouldn't produce any error or need any special handling. For example, for the function acos, 0.5 would be considered a normal input because arccosine is well defined for that input value. But the input 1 needs special handling, because it results in the value zero and IEC 60559 has two zeros: +0 and -0. Annex F states acos(1) should yield +0, not -0. But Annex F doesn't talk about acos(0.5) because that "normal" input value doesn't need any special handling. Does that clear things up?
@Zboson: Indeed, you cannot rely on acos(0.5) being the exact same value with different compilers. Heck, you can't even rely on it being the same with the same compiler on different architectures. glibc has a nice table listing the ULP error of their math functions, and as you can see, these math functions are not bit-exact across all architectures.
