Jess Alejo

Why Doesn't 0.1 + 0.2 Equal 0.3?

As developers, we often assume that basic arithmetic will always work as expected. For example, adding 0.1 and 0.2 should give us 0.3, right? But when you run this calculation in Ruby (or many other programming languages), you might be surprised by the result:

puts 0.1 + 0.2
# Output: 0.30000000000000004

Why doesn’t 0.1 + 0.2 equal exactly 0.3? Let’s break it down in simple terms.


The Problem: Computers Use Binary

Computers represent numbers in binary (base 2), but some decimal numbers can’t be perfectly represented in binary. For example:

  • The decimal number 0.1 is like 1/3 in base 10: in binary it becomes the repeating fraction 0.000110011...
  • Similarly, 0.2 becomes the repeating binary fraction 0.00110011...

When you add these imprecise binary representations together, tiny rounding errors occur. That’s why the result of 0.1 + 0.2 ends up being something like 0.30000000000000004 instead of exactly 0.3.
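You can see the stored approximations directly by asking Ruby to print more digits than it normally shows. This is just a quick check using Kernel#format; the trailing digits come from the IEEE 754 double representation that Ruby's Float uses:

# Show the stored binary approximations with 20 decimal places
puts format("%.20f", 0.1) # 0.10000000000000000555
puts format("%.20f", 0.2) # 0.20000000000000001110
puts format("%.20f", 0.3) # 0.29999999999999998890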


Why Does This Matter?

While the difference between 0.3 and 0.30000000000000004 might seem small, it can cause problems in situations where precision is important. For example:

if 0.1 + 0.2 == 0.3
  puts "Equal"
else
  puts "Not Equal"
end
# Output: Not Equal

This happens because the tiny error makes the numbers not exactly equal.
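You can also print the size of the error itself; it is tiny, but not zero, which is all it takes for == to fail:

difference = (0.1 + 0.2) - 0.3
puts difference # 5.551115123125783e-17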


How to Fix It in Ruby

If you need precise results, here are two simple solutions:

1. Use BigDecimal for Exact Arithmetic

Ruby provides the BigDecimal class, which allows you to perform precise decimal calculations:

require 'bigdecimal'

a = BigDecimal("0.1")
b = BigDecimal("0.2")
result = a + b

puts result.to_s("F") # Output: "0.3" (the "F" flag prints plain decimal notation instead of BigDecimal's default scientific form)

By initializing BigDecimal with strings rather than float literals, the decimal values are stored exactly, so no floating-point error sneaks in before the calculation starts.
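As a bonus, the equality check from earlier now behaves the way you'd expect. Here's a minimal sketch reusing the same values as BigDecimal:

require 'bigdecimal'

# The comparison that failed with Floats succeeds with exact decimals
if BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3")
  puts "Equal"
else
  puts "Not Equal"
end
# Output: Equal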


2. Round the Result

If you don’t need extreme precision, you can round the result to a reasonable number of decimal places:

result = 0.1 + 0.2
rounded_result = result.round(1)

puts rounded_result # Output: 0.3

This approach works well for most everyday cases.
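Rounding also makes the earlier comparison pass, as long as you round before comparing:

rounded_sum = (0.1 + 0.2).round(1)
puts rounded_sum == 0.3 # Output: true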


Conclusion

The reason 0.1 + 0.2 doesn't equal 0.3 is that computers represent decimal numbers in binary, and some decimals can only be approximated there. While this can lead to unexpected results, tools like BigDecimal or rounding can help you get the answers you expect.

So next time you see 0.30000000000000004, remember—it’s just a quirk of how computers handle numbers!


Happy coding! 🚀

Top comments (1)

Samuel Rouse

Thanks for posting this. A lot of people are unaware of the details and limitations of IEEE 754 and how it affects simple mathematical operations like adding decimals.

Also important is the precision limit of floating-point arithmetic. It can be difficult to grasp that a floating-point number can be very large or very small, but not both at the same time.

123456789.987654321; // 123456789.98765433