
I encountered negative zero in output from Python; it can be created, for example, as follows:

k = 0.0
print(-k)

The output will be -0.0.

However, when I compare -k to 0.0 for equality, it yields True. Is there any difference between 0.0 and -0.0? (I don't care that they presumably have different internal representations; I only care about their behavior in a program.) Are there any hidden traps I should be aware of?

4 Comments

  • It does not give a negative value with Python 2.5.4. Commented Nov 3, 2010 at 10:51
  • The real hidden trap is when you start testing for equality with floating-point values. They're inexact and prone to weird round-off discrepancies. Commented Nov 3, 2010 at 21:52
  • But it does print a negative value on Python 2.7.1. Commented Mar 4, 2013 at 21:30
  • This problem came up in a real-life GPS application: a longitude just slightly west of the prime meridian was being reported as zero degrees and x minutes, when it should have been minus zero degrees and x minutes. But Python can't represent an integer negative zero (see the sketch after this list). Commented Sep 1, 2016 at 9:20
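
A minimal sketch of that GPS trap, using a hypothetical longitude of -0.25 degrees: truncating to int silently drops the sign of a zero-degree field, so the sign has to be carried separately, e.g. with math.copysign():

import math

# Hypothetical longitude slightly west of the prime meridian.
lon = -0.25
degrees = int(lon)                 # 0 -- int has no negative zero
minutes = abs(lon - degrees) * 60  # 15.0
print(degrees, minutes)            # 0 15.0 -- the west sign is lost

# Workaround: recover the sign from the float via math.copysign().
sign = '-' if math.copysign(1.0, lon) < 0 else ''
print(f"{sign}{degrees} deg {minutes} min")  # -0 deg 15.0 min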

8 Answers


Check out −0 (number) on Wikipedia.

Basically, IEEE 754 does define a negative zero.

And by this definition for all purposes:

-0.0 == +0.0 == 0

I agree with aaronasterling that -0.0 and +0.0 are different objects. Making them compare equal (with the equality operator) ensures that subtle bugs are not introduced in the code.
Think of a * b == c * d:

>>> a = 3.4
>>> b = 4.4
>>> c = -0.0
>>> d = +0.0
>>> a*c
-0.0
>>> b*d
0.0
>>> a*c == b*d
True

[Edit: More info based on comments]

When I said "for all practical purposes", I had chosen the words rather hastily. I meant standard equality comparison.

As the reference says, the IEEE standard defines comparison so that +0 = -0, rather than -0 < +0. Although it would be possible always to ignore the sign of zero, the IEEE standard does not do so. When a multiplication or division involves a signed zero, the usual sign rules apply in computing the sign of the answer.

Operations like divmod() and atan2() exhibit this behavior. In fact, atan2() complies with the IEEE definition, as does the underlying C library:

>>> divmod(-0.0,100)
(-0.0, 0.0)
>>> divmod(+0.0,100)
(0.0, 0.0)

>>> math.atan2(0.0, 0.0) == math.atan2(-0.0, 0.0)
True 
>>> math.atan2(0.0, -0.0) == math.atan2(-0.0, -0.0)
False

One way to find out is through the documentation: check whether the implementation complies with IEEE behavior. It also seems from the discussion that there are subtle platform variations.

However, this aspect (compliance with the IEEE definition) has not been respected everywhere. See the rejection of PEP 754 due to disinterest! I am not sure whether this was picked up later.
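
For what it's worth, although PEP 754 was rejected, later Python versions (2.6+) did gain portable spellings for the IEEE special values; a quick sketch:

import math

pos_inf = float('inf')
nan = float('nan')
print(math.isinf(pos_inf), math.isnan(nan))  # True True

# The sign of zero also survives division by signed infinity:
print(1.0 / pos_inf, 1.0 / -pos_inf)  # 0.0 -0.0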

See also What Every Computer Scientist Should Know About Floating-Point Arithmetic.


4 Comments

@aaronasterling: Why did you remove your answer? That was a valuable addition to the information here. I just upvoted it.
Because I was wrong about the last part of it, and the rest of it wasn't really unique to my post.
If it's "equal for all purposes", how does that explain the difference in atan2 in Craig McQueen's answer? I agree that it returns True when compared for equality, but if the two numbers' behavior can vary, I would like to know when.
@max Note that the arctangent function is basically looking for the slope (and direction) of the provided arguments, so internally it's dividing by zero, leading to discontinuities that should not be surprising. Furthermore, the function's output is cyclic with a period of 2π; +π and -π are the "same".

math.copysign() treats -0.0 and +0.0 differently, unless you are running Python on a weird platform:

math.copysign(x, y)
     Return x with the sign of y. On a platform that supports signed zeros, copysign(1.0, -0.0) returns -1.0.

>>> import math
>>> math.copysign(1, -0.0)
-1.0
>>> math.copysign(1, 0.0)
1.0
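
Since == cannot tell the two zeros apart, copysign() gives a reliable test for negative zero; a small helper (my naming, just a sketch):

import math

def is_negative_zero(x):
    # Equality can't distinguish the zeros; the sign bit can.
    return x == 0.0 and math.copysign(1.0, x) < 0

print(is_negative_zero(-0.0))  # True
print(is_negative_zero(0.0))   # False
print(is_negative_zero(-1.0))  # False (nonzero)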

1 Comment

numpy also has a copysign. Yay!
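
For reference, the NumPy version behaves the same way (a quick check, assuming NumPy is installed):

import numpy as np

print(np.copysign(1, -0.0))  # -1.0
print(np.copysign(1, 0.0))   # 1.0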

It makes a difference in the atan2() function (at least, in some implementations). In my Python 3.1 and 3.2 on Windows (whose math module is based on the underlying C implementation, according to the "CPython implementation detail" note near the bottom of the Python math module documentation):

>>> import math
>>> math.atan2(0.0, 0.0)
0.0
>>> math.atan2(-0.0, 0.0)
-0.0
>>> math.atan2(0.0, -0.0)
3.141592653589793
>>> math.atan2(-0.0, -0.0)
-3.141592653589793



Yes, there is a difference between 0.0 and -0.0 (though Python won't let me reproduce it :-P). If you divide a positive number by 0.0, you get positive infinity; if you divide that same number by -0.0 you get negative infinity.

Beyond that, though, there is no practical difference between the two values.
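
Pure Python indeed won't reproduce the division behaviour, since float division by zero raises ZeroDivisionError; the IEEE result does show through with NumPy scalars, though (a sketch, assuming NumPy):

import numpy as np

try:
    1.0 / -0.0                      # a plain Python float raises
except ZeroDivisionError as e:
    print("Python:", e)

with np.errstate(divide='ignore'):  # silence the divide warning
    print(1.0 / np.float64(0.0))    # inf
    print(1.0 / np.float64(-0.0))   # -inf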

22 Comments

You can't divide by 0. If you're talking about taking limits, -0 makes even less sense.
-1 You can't divide a number by 0, since you get a ZeroDivisionError. That means that there is no difference.
@Falmarri: In Python, you can't; in other languages, you very well can. I was addressing the distinction between 0.0 and -0.0 in a general floating-point processing sense.
+1 to cancel out the downvotes. Chris is correct that, e.g., in C, floating-point division by 0.0 is defined to produce infinity with the sign of (numerator and denominator have same sign) ? positive : negative.
@DMan: It's important that (a) they exist and (b) there's an implementation (even if it's partial). Just because you (and I) don't see the complex mathematical subtleties doesn't mean anything. They still exist. I don't understand partial differential equations, and see no practical value; some people do. I see limited practical value in the standard. That's not the point. My humble opinion on "practical" has no merit. It still exists, it still has meaning, and it's still partially implemented.

If you are ever concerned about running into a -0.0 condition, just add + 0. to the expression. It does not influence the result, but it forces any negative zero to a positive one.

>>> import math
>>> math.atan2(-0.0, 0.0)
-0.0
>>> math.atan2(-0.0, 0.0) + 0.
0.0
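
The same + 0. trick normalizes a negative zero wherever it appears, while leaving every other value untouched; a quick check:

values = [-0.0, 0.0, -1.5, 2.0]
print([x + 0. for x in values])  # [0.0, 0.0, -1.5, 2.0]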



Same values, yet different numbers

>>> Decimal('0').compare(Decimal('-0'))        # Compare value
Decimal('0')                                   # Represents equality

>>> Decimal('0').compare_total(Decimal('-0'))  # Compare using abstract representation
Decimal('1')                                   # Represents a > b

References:
http://docs.python.org/2/library/decimal.html#decimal.Decimal.compare
http://docs.python.org/2/library/decimal.html#decimal.Decimal.compare_total
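
Relatedly (not shown in the answer above), Decimal can also report the sign bit directly via is_signed(), which distinguishes the two zeros even though they compare equal in value:

from decimal import Decimal

print(Decimal('-0').is_signed())      # True: the sign bit is set
print(Decimal('0').is_signed())       # False
print(Decimal('-0') == Decimal('0'))  # True: equal in value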



To generalise or summarise the other answers: in practice, the difference comes from functions that are discontinuous at 0, where the discontinuity stems from a division by zero. Yet Python defines division by zero as an error, so as long as everything is calculated with Python operators, you can simply treat -0.0 as +0.0 and there is nothing to worry about. By contrast, if a function is provided by a built-in or by a library written in another language, such as C, division by zero may be defined differently in that language and may give different answers for -0.0 and 0.0.
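
A compact illustration of that contrast (a sketch; atan2() is one of the C-backed functions discussed above):

import math

# Python-level operators refuse division by zero outright:
# 1.0 / -0.0 raises ZeroDivisionError.
# C-backed library functions instead follow the C/IEEE conventions:
print(math.atan2(0.0, -0.0))  # 3.141592653589793: the zeros differ
print(math.sqrt(-0.0))        # -0.0: IEEE defines sqrt(-0) as -0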



Python floats are standard floating-point numbers, except that certain operations are caught at a higher level and raise ZeroDivisionError or OverflowError. Therefore, in pure Python, you cannot see the arguably most prominent difference between positive and negative zero, namely: in standard floating-point arithmetic, you have 1/(−0) = −∞ ≠ ∞ = 1/0.

However, libraries can remove these catches and use floating-point arithmetic without restrictions by default. This particularly includes NumPy, which in turn is used by many other libraries under the hood. These libraries can use Python floats as they are, preserving the sign of zero. In this case, clear differences between negative zero and positive zero can arise, for example:

import numpy as np

some_numbers = [0.0, -0.0, 1.0, -1.0, np.inf, -np.inf]
inverse = lambda x: 1 / np.float64(x)
print(sorted(some_numbers, key=inverse))
# [-0.0, -1.0, inf, -inf, 1.0, 0.0]

The practical need for such behaviour occasionally arises in complex numerical routines. For example, when looking for the minimum of a function by sophisticated trial and error, a bad guess may lead to a numerical over- or underflow, but as long as the resulting infinity comes with the correct sign, the minimiser can continue to work without further ado (rejecting the guess). I give a detailed example of this in this answer.
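
A toy version of that minimiser scenario (a sketch with a hypothetical objective, not the code from the linked answer): NumPy division yields a signed infinity instead of raising, so even a bad guess sorts to the correct side:

import numpy as np

def f(x):
    # Pole at x = 0; NumPy returns +/-inf rather than raising.
    with np.errstate(divide='ignore'):
        return 1 / np.float64(x)

guesses = [-0.0, 0.0, 2.0, -2.0]
values = [f(x) for x in guesses]  # [-inf, inf, 0.5, -0.5]
best_value, best_guess = min(zip(values, guesses))
print(best_guess)  # -0.0 -- the infinite value kept the right sign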

