Unicode Footguns in Python

(pythonkoans.substack.com)

40 points | by meander_water 13 days ago

7 comments

  • renhanxue 20 hours ago
    The article has good tips, but Unicode normalization is just the tip of the iceberg. It is almost always impossible to do what your users expect without locale information (different languages and locales sort and compare the same graphemes differently). "What do we mean when we say two strings are equal" can be a surprisingly difficult question to answer. It's practical too, not philosophical.

    By the way, try looking up the standardized Unicode casefolding algorithm sometime, it is a thing to behold.
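      For a taste of that algorithm, Python exposes Unicode full case folding as `str.casefold()`. A minimal sketch of a caseless-comparison helper (the normalization step is an addition here, so composed and decomposed forms also compare equal — real locale-aware comparison needs more, e.g. ICU):

      ```python
      import unicodedata

      # German sharp s: lower() leaves it alone, casefold() expands it to "ss"
      print("straße".lower())     # straße
      print("straße".casefold())  # strasse

      def caseless_equal(a: str, b: str) -> bool:
          # Normalize first so composed/decomposed forms compare equal too
          nfd = lambda s: unicodedata.normalize("NFD", s)
          return nfd(a).casefold() == nfd(b).casefold()

      print(caseless_equal("STRASSE", "straße"))  # True
      ```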

    • Groxx 20 hours ago
      the normalization doc is interesting too imo: https://unicode.org/reports/tr15/

      in particular, the differences between NFC and NFKC are "fun", and rather meaningful in many cases. e.g. NFC says that "fi" and "ﬁ" are different and not equal, though the latter is just a ligature of the former and is literally identical in meaning. this applies to "ﬃ" too. half vs full width Chinese characters are also "different" under NFC. NFKC makes those examples equal though... at the cost of saying "2⁵" is equal to "25".

      language is fun!
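      Those NFC/NFKC differences can be checked directly with the stdlib `unicodedata` module (a minimal sketch):

      ```python
      import unicodedata

      lig = "\ufb01"  # "ﬁ", LATIN SMALL LIGATURE FI

      # NFC preserves compatibility characters: the ligature stays distinct
      print(unicodedata.normalize("NFC", lig) == "fi")   # False

      # NFKC applies compatibility decomposition: the ligature becomes "fi"
      print(unicodedata.normalize("NFKC", lig) == "fi")  # True

      # ...at the cost of flattening meaning elsewhere
      print(unicodedata.normalize("NFKC", "2⁵"))  # 25
      ```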

  • dhosek 20 hours ago
    Grapheme count is not a useful number. Even in a monospaced font, you’ll find that the grapheme count doesn’t give you a measurement of width since emoji will usually not be the same width as other characters.
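    For what it's worth, the stdlib alone already yields three different "lengths" for a single emoji grapheme cluster, and none of them is a display width either (grapheme segmentation itself needs a third-party library such as `grapheme` or an ICU binding):

    ```python
    s = "👍🏽"  # thumbs up + skin tone modifier: one grapheme cluster

    print(len(s))                           # 2 code points
    print(len(s.encode("utf-8")))           # 8 bytes in UTF-8
    print(len(s.encode("utf-16-le")) // 2)  # 4 UTF-16 code units
    ```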
    • paulddraper 19 hours ago
      Grapheme count (or rather, indexing) is necessary to do text selection or cursor positions.

      Fortunately you can usually outsource this to a UI toolkit.

    • Spivak 19 hours ago
      For certain use-cases, but it's not like any of the other usual notions of text length are any better for what you want.
      • lmm 16 hours ago
        If all possible notions of length are footguns, maybe there should be no default "length" operation available.
  • OkayPhysicist 20 hours ago
    Frankly, the key takeaway to most problems people run into with Unicode is that there are very, very few operations that are universally well-defined for arbitrary user-provided text. Pretty much the moment you step outside the realm of "receive, copy, save, regurgitate", you're probably going to run into edge cases.
  • naIak 20 hours ago
    I’m going to trigger some ptsd with this…

    UnicodeDecodeError
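    The error is easy to reproduce: decode bytes with the wrong codec. A minimal sketch (the Latin-1 fallback here is illustrative, not a general fix):

    ```python
    # "café" encoded as Latin-1 is not valid UTF-8
    data = b"caf\xe9"

    try:
        data.decode("utf-8")
    except UnicodeDecodeError as exc:
        print(exc)

    # Either decode with the codec the data was actually written in...
    print(data.decode("latin-1"))  # café
    # ...or pick an explicit error-handling policy
    print(data.decode("utf-8", errors="replace"))  # caf�
    ```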

  • morshu9001 20 hours ago
    Unicode footguns, in Python
  • o11c 17 hours ago
    I've said this before and I'll say it again: Python 3 got rid of the wrong string type.

    With `bytes` it was obvious that byte length was not the same as $whatever length, and that was really the only semi-common bug (and was mostly limited to English speakers who are new to programming). All other bugs come from blindly trusting `unicode` whose bugs are far more subtle and numerous.
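    As an illustration of the point about byte length (stdlib only):

    ```python
    s = "naïve"
    b = s.encode("utf-8")

    # With bytes in hand, it is obvious the two counts measure different things
    print(len(s))  # 5 code points
    print(len(b))  # 6 bytes: "ï" takes two bytes in UTF-8
    ```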

    • Flimm 14 hours ago
      I strongly disagree. Python 2 had no bytes type to get rid of. It had a string type that could not handle code points above U+00FF at all, and could not handle code points above U+007F very well. In addition, Python 2 had a Unicode type, and the types would get automatically converted to each other and/or encoded/decoded, often incorrectly, and sometimes throwing runtime exceptions.

      Python 3 introduced the bytes type that you like so much. It sounds like you would enjoy a Python 4 with only a bytes type and no string type, and presumably with a strong convention to only use UTF-8 or with required encoding arguments everywhere.

      In both Python 2 and Python 3, you still have to learn how to handle grapheme clusters carefully.

    • seanhunter 2 hours ago
      Python 3 didn't get rid of bytes though. If you want to manipulate data as bytes you absolutely can do that.

      https://docs.python.org/3/library/stdtypes.html#binary-seque...

      "The core built-in types for manipulating binary data are bytes and bytearray."