Timeline for Why Unicode Encoding/Decoding is Necessary in JavaScript
Current License: CC BY-SA 4.0
4 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jul 24, 2018 at 17:12 | comment added | candied_orange | | @Deduplicator encoding in ASCII is always encoding in UTF-8. The reverse isn't always true. And encoding is not just about the number of bytes used to encode. EBCDIC uses the same number of bytes as ASCII but still thinks of the bits in a completely different way. |
| Jul 24, 2018 at 15:49 | comment added | Deduplicator | | Well, it doesn't matter whether I think ASCII or UTF-8, unless it isn't valid in both. That's the beauty of different schemes sometimes having some overlap not only in representable ideas, but even in their representation. In this specific case, it's intentional: one of the design criteria for UTF-8 was making it one of the myriad extended-ASCII charsets in existence, with all the advantages that entails. |
| Jul 24, 2018 at 11:38 | history edited | candied_orange | CC BY-SA 4.0 | added 21 characters in body |
| Jul 24, 2018 at 1:21 | history answered | candied_orange | CC BY-SA 4.0 | |
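The point debated in the comments above can be illustrated with a minimal Node.js sketch (assuming `Buffer` and its built-in encoding names; the EBCDIC byte value is taken from standard code charts, since Node does not ship an EBCDIC decoder):

```javascript
// ASCII bytes are also valid UTF-8: byte 0x41 decodes to 'A' either way.
const asciiA = Buffer.from([0x41]);
console.log(asciiA.toString('ascii')); // 'A'
console.log(asciiA.toString('utf8'));  // 'A' -- same bytes, same result

// The reverse does not hold: UTF-8 encodes 'é' as two bytes, and reading
// those bytes with a single-byte decoder gives two unrelated characters.
const utf8EAcute = Buffer.from([0xc3, 0xa9]);
console.log(utf8EAcute.toString('utf8'));   // 'é'
console.log(utf8EAcute.toString('latin1')); // 'Ã©'

// EBCDIC also spends one byte per character, but maps the bits differently:
// 'A' is 0xC1 there (value quoted from EBCDIC code charts, for illustration),
// while ASCII/UTF-8 use 0x41. Same byte count, entirely different meaning.
const ebcdicA = 0xc1;
console.log(ebcdicA.toString(16), String.fromCharCode(0x41)); // 'c1 A'
```

This is only a sketch of the overlap the commenters describe: the ASCII range coincides byte-for-byte with UTF-8 by design, whereas equal byte width alone (as with EBCDIC) says nothing about two encodings agreeing.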