Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
nint:    System.IntPtr
nuint:   System.UIntPtr
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16
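
Because each alias and its framework name denote the same type, the identity is easy to check at runtime. A minimal sketch:

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        // The alias and the framework name are literally the same type object.
        Console.WriteLine(typeof(string) == typeof(String)); // True
        Console.WriteLine(typeof(int) == typeof(Int32));     // True

        // GetType() reports the framework name, whichever spelling you used.
        string s = "hello";
        Console.WriteLine(s.GetType()); // System.String
    }
}
```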

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. (Note that nint and nuint only became aliases for System.IntPtr and System.UIntPtr in C# 11; in C# 9 and 10 they were distinct native-sized integer types merely represented by those structs.)

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)
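
As a sketch of those literal forms, each simple type can be written directly, with suffixes where C# defines them:

```csharp
class LiteralDemo
{
    static void Main()
    {
        bool b    = true;   // bool literal
        char c    = 'x';    // char literal
        int i     = 123;    // int literal (the default for integer literals)
        uint u    = 123U;   // U suffix
        long l    = 123L;   // L suffix
        ulong ul  = 123UL;  // UL suffix
        float f   = 1.5f;   // f suffix
        double d  = 1.5;    // double is the default for real literals
        decimal m = 1.5m;   // m suffix

        // byte, sbyte, short and ushort have no suffixes of their own;
        // plain int literals convert implicitly when in range:
        byte by   = 200;

        System.Console.WriteLine(i + l + by);
    }
}
```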

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char (although char, while grammatically present in the production, is then disallowed as an underlying type) - as opposed to a type production as used by variable declarations, for example. It doesn't indicate any other difference.
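
Whichever spelling the declaration forces you to use, reflection reports the underlying type by its framework name. A small sketch:

```csharp
using System;

public enum Bar : uint { A, B }

class EnumDemo
{
    static void Main()
    {
        // Declared with the alias "uint", reported as the framework type.
        Console.WriteLine(Enum.GetUnderlyingType(typeof(Bar))); // System.UInt32
    }
}
```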

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.
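
As a sketch of that naming convention in the framework itself, BinaryReader exposes explicitly-sized, language-neutral method names rather than alias-based ones:

```csharp
using System;
using System.IO;

class NamingDemo
{
    static void Main()
    {
        // Four bytes encoding the little-endian Int32 value 1.
        byte[] data = { 1, 0, 0, 0 };

        // The name says exactly which type is read: a 32-bit signed integer.
        // BinaryReader.ReadInt32 always reads little-endian, so this is portable.
        using var reader = new BinaryReader(new MemoryStream(data));
        Console.WriteLine(reader.ReadInt32()); // 1
    }
}
```

A hypothetical `ReadInt` method on the same class would be ambiguous to callers coming from languages whose `int` means something other than Int32.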

Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
nint:    System.IntPtr
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr.

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.

Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
nint:    System.IntPtr
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR.

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.

`nint` is now an alias for `IntPtr`
Source Link
Justine Krejcha
  • 2.2k
  • 1
  • 30
  • 38

Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
nint:    System.IntPtr
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr.

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.

Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr.

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.

Just for the sake of completeness, here's a brain dump of related information...

As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. The complete list is:

bool:    System.Boolean
byte:    System.Byte
char:    System.Char
decimal: System.Decimal
double:  System.Double
float:   System.Single
int:     System.Int32
long:    System.Int64
nint:    System.IntPtr
object:  System.Object
sbyte:   System.SByte
short:   System.Int16
string:  System.String
uint:    System.UInt32
ulong:   System.UInt64
ushort:  System.UInt16

Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr.

In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.)

There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance:

public enum Foo : UInt32 {} // Invalid
public enum Bar : uint   {} // Valid

That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference.

Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes.

Source Link
Jon Skeet