An integer range type like 0..100 means that a value can be any integer in that range. Arithmetic operations on range types produce other range types: for example, if a: 10..20 and b: 5..7, then a + b: 15..27. (Each range here includes both endpoints, for the sake of the example.) To be clear, 0..100 here is a type, not a value; the values are integers in that range.
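
For concreteness, here is a minimal Ada sketch of such range declarations (Ada being the example named below). Note that Ada does not infer the narrower 15..27 result type; the addition yields the base type, and the range is checked when the result is assigned to a constrained subtype:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Range_Arith is
   --  Constrained integer subtypes; each range includes both endpoints.
   subtype A_Range   is Integer range 10 .. 20;
   subtype B_Range   is Integer range 5 .. 7;
   subtype Sum_Range is Integer range 15 .. 27;

   A : A_Range := 12;
   B : B_Range := 6;

   --  The addition yields the base type Integer; Ada checks the range at
   --  run time on this assignment (raising Constraint_Error if violated),
   --  rather than inferring a 15 .. 27 result type as described above.
   C : Sum_Range := A + B;
begin
   Put_Line (Integer'Image (C));
end Range_Arith;
```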

Checking whether a number is in a range can be treated as a fallible type conversion: e.g. converting x to the type 0..100 could return an optional value which is None if x is not in that range. Alternatively, it can be treated as a type predicate, so that if (x in 0..100) { ... } narrows the type of x within the conditional branch.
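
As a rough sketch of the "type predicate" flavour, Ada's membership test comes close, although the guarantee inside the branch comes from a run-time check on the inner declaration rather than from flow-sensitive narrowing:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Range_Check is
   subtype Percent is Integer range 0 .. 100;
   X : Integer := 150;
begin
   if X in Percent then           --  membership test: is X within 0 .. 100?
      declare
         P : Percent := X;        --  the check on this assignment cannot fail here
      begin
         Put_Line ("in range:" & Integer'Image (P));
      end;
   else
      Put_Line ("out of range");
   end if;
end Range_Check;
```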

Range types are used in a few languages, notably Ada (where every integer type has a defined range). They have some benefits, such as eliding bounds checks for arrays of known length, avoiding overflow by using integers wide enough for the result of any operation, and preventing division-by-zero errors. They also seem relatively easy to support, unlike dependent types, which are more powerful and can represent types like 0..n where n isn't a compile-time constant. So why don't most statically-typed languages support integer range types?
