The article has good tips, but Unicode normalization is just the tip of the iceberg. It is almost always impossible to do what your users expect without locale information, because different languages and locales sort and compare the same graphemes differently. "What do we mean when we say two strings are equal?" can be a surprisingly difficult question to answer. And it's a practical question, not a philosophical one.
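To illustrate, here's a sketch using the stdlib `locale` module; the locale names are assumptions and must actually be installed on the host system:

```python
import locale

words = ["Zebra", "Äpfel", "apple"]

# German collation: Ä sorts alongside A.
locale.setlocale(locale.LC_COLLATE, "de_DE.UTF-8")
print(sorted(words, key=locale.strxfrm))

# Swedish collation: Ä sorts after Z, at the end of the alphabet.
locale.setlocale(locale.LC_COLLATE, "sv_SE.UTF-8")
print(sorted(words, key=locale.strxfrm))
```

Same strings, same graphemes, two different "correct" orders.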
By the way, try looking up the standardized Unicode casefolding algorithm sometime, it is a thing to behold.
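Python actually exposes full casefolding as `str.casefold()`, and the difference from `lower()` hints at why the algorithm is such a thing to behold:

```python
# lower() does simple case mapping; casefold() implements Unicode full
# casefolding, which can change the length of the string.
print("Straße".lower())     # 'straße'
print("Straße".casefold())  # 'strasse' -- ß expands to ss
print("MASSE".casefold() == "Maße".casefold())  # True
```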
In particular, the differences between NFC and NFKC are "fun", and rather meaningful in many cases. E.g. NFC says that "fi" (two letters) and "fi" (the single-code-point ligature, U+FB01) are different and not equal, even though the latter is just a ligature of the former and is literally identical in meaning. The same applies to the "ffi" ligature. Half-width vs. full-width Chinese characters are also "different" under NFC. NFKC makes those examples equal, though... at the cost of saying "2⁵" is equal to "25".
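Both behaviors are easy to check with the stdlib `unicodedata` module:

```python
import unicodedata

lig = "\ufb01"  # the single-code-point 'fi' ligature
print(unicodedata.normalize("NFC", lig) == "fi")   # False: NFC preserves the ligature
print(unicodedata.normalize("NFKC", lig) == "fi")  # True: NFKC decomposes it

# ...but NFKC also flattens distinctions you may care about:
print(unicodedata.normalize("NFKC", "2\u2075"))    # '25' -- the superscript is lost
```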
Grapheme count is not a useful number. Even in a monospaced font, the grapheme count doesn't give you a measurement of width, since emoji are usually not the same width as other characters.
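A quick sketch of the mismatch; this assumes the third-party `wcwidth` package, since the stdlib has no display-width function:

```python
from wcwidth import wcswidth  # pip install wcwidth (assumed dependency)

for s in ["abc", "漢", "🙂"]:
    print(repr(s), "code points:", len(s), "terminal columns:", wcswidth(s))
# 'abc' occupies 3 cells; '漢' and '🙂' are each 1 code point but
# typically occupy 2 cells (exact widths depend on the Unicode tables in use).
```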
Frankly, the key takeaway to most problems people run into with Unicode is that there are very, very few operations that are universally well-defined for arbitrary user-provided text. Pretty much the moment you step outside the realm of "receive, copy, save, regurgitate", you're probably going to run into edge cases.
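Even something as innocent-looking as reversing a string falls apart, because slicing operates on code points, not graphemes:

```python
s = "cafe\u0301"  # 'café' with a combining acute accent
print(s[::-1])     # the combining accent detaches and lands on the wrong letter
```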
I've said this before and I'll say it again: Python 3 got rid of the wrong string type.
With `bytes` it was obvious that byte length was not the same as $whatever length, and that was really the only semi-common bug (and it was mostly limited to English speakers who were new to programming). All the other bugs come from blindly trusting `unicode`, whose bugs are far more subtle and numerous.
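That one semi-common bug, spelled out with Python 3's explicit types:

```python
s = "café"
b = s.encode("utf-8")
print(len(s))  # 4 -- code points
print(len(b))  # 5 -- 'é' takes two bytes in UTF-8
```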
I strongly disagree. Python 2 had no bytes type to get rid of. It had a string type that could not handle code points above U+00FF at all, and could not handle code points above U+007F very well. In addition, Python 2 had a Unicode type, and the types would get automatically converted to each other and/or encoded/decoded, often incorrectly, and sometimes throwing runtime exceptions.
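A sketch of that failure mode (Python 2 only; it does not run under Python 3):

```python
# -*- coding: utf-8 -*-
# Python 2: "café" is a byte string in the source encoding.
s = "café"
# The implicit str -> unicode coercion uses the ASCII codec, so this
# raises at runtime even though it looks like a pure encode:
s.encode("utf-8")
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3
```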
Python 3 introduced the bytes type that you like so much. It sounds like you would enjoy a Python 4 with only a bytes type and no string type, and presumably with a strong convention to only use UTF-8 or with required encoding arguments everywhere.
In both Python 2 and Python 3, you still have to learn how to handle grapheme clusters carefully.
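Neither version counts grapheme clusters out of the box; one common approach (an assumption, not the only option) is the third-party `regex` module's `\X` pattern:

```python
import regex  # pip install regex -- the stdlib 're' module has no \X

s = "e\u0301"  # 'é' as a base letter plus a combining acute accent
print(len(s))                        # 2 code points
print(len(regex.findall(r"\X", s)))  # 1 extended grapheme cluster
```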
language is fun!
Fortunately you can usually outsource this to a UI toolkit which can do it.
UnicodeDecodeError
https://docs.python.org/3/library/stdtypes.html#binary-seque...
"The core built-in types for manipulating binary data are bytes and bytearray."