Why don't many languages have integer range types?

An integer range type like 0..100 means that a value can be any integer in that range. Arithmetic operations on range types produce other range types; for example, if a: 10..20 and b: 5..7, then a + b: 15..27. (For the sake of the example, each range here includes both endpoints.)
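
To make the arithmetic concrete, here is a minimal sketch in Python of the interval arithmetic a type checker would perform on the ranges themselves (the IntRange class is just for illustration, not taken from any particular language):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntRange:
    """A closed integer interval lo..hi (both endpoints included)."""
    lo: int
    hi: int

    def __add__(self, other: "IntRange") -> "IntRange":
        # The sum of two ranges runs from the smallest possible sum
        # to the largest possible sum of their elements.
        return IntRange(self.lo + other.lo, self.hi + other.hi)

a = IntRange(10, 20)
b = IntRange(5, 7)
print(a + b)  # IntRange(lo=15, hi=27)
```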

Checking whether a number is in a range can be treated as a fallible type conversion: converting x to the type 0..100 could return an optional value that is None if x is not in that range. Alternatively, it can be treated as a type predicate, so that if(x in 0..100) { ... } narrows the type of x within the conditional branch.
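
As a sketch of the first option (the function name and signature here are made up for illustration), the fallible conversion can be modelled as a function returning an optional value:

```python
from typing import Optional

def into_range(x: int, lo: int, hi: int) -> Optional[int]:
    """Fallible conversion: return x if it lies in lo..hi, otherwise None."""
    return x if lo <= x <= hi else None

x = 42
y = into_range(x, 0, 100)
if y is not None:
    # In a language with range types, y would now have the static type 0..100;
    # this Python sketch only models the runtime check.
    print(y)
```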

Range types are used in a few languages, notably Ada (in which every integer type has a defined range). They have some benefits, such as eliding bounds checks for arrays of known length, avoiding overflows by using wide-enough integers for any operation, and preventing division-by-zero errors. They also seem relatively easy to support, unlike dependent types, which are more powerful and can represent types like 0..n where n isn't a compile-time constant. So why do most statically-typed languages not support integer range types?
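
For concreteness on the overflow point, here is a small sketch in Python (the helper is hypothetical) of how the result range of an operation determines how wide the result representation must be, so that no operation can overflow:

```python
def bits_needed(lo: int, hi: int) -> int:
    """Bits needed to represent any unsigned value in lo..hi (assumes lo >= 0)."""
    return max(hi.bit_length(), 1)

# Multiplying two 0..255 (8-bit) operands gives a result in 0..65025,
# so the compiler would give the result a 16-bit (or wider) type.
print(bits_needed(0 * 0, 255 * 255))  # 16
```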

kaya3