I'm not sure how "commas inside strings in CSVs can cause bugs" becomes newsworthy, but I guess even the vibecoding generation needs to learn the same old lessons.
"Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes."
If the DMS output isn’t quoting fields that contain commas, that’s technically invalid CSV.
A small normalization step before COPY (or ensuring the writer emits RFC-compliant CSV in the first place) would make the pipeline robust without renaming countries or changing delimiters.
That way, if/when the DMS output is fixed upstream, nothing downstream needs to change.
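The "ensure the writer emits RFC-compliant CSV" option is a one-liner in most languages. A minimal sketch with Python's stdlib csv module (the default dialect quotes any field containing the delimiter):

```python
import csv
import io

# QUOTE_MINIMAL is the default: fields containing the delimiter, the quote
# character, or a newline get wrapped in double quotes automatically.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["MD", "Moldova, Republic of"])
writer.writerow(["DE", "Germany"])

output = buf.getvalue()
print(output)
# MD,"Moldova, Republic of"
# DE,Germany
```

The comma-bearing field comes out quoted, so a downstream COPY sees exactly two columns per row; "Germany" stays unquoted because it doesn't need it.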
That's the real shame, but also the lesson: a perfectly good, well-specified format, yet its apparent simplicity makes everyone ignore the spec and YOLO out broken output.
This is why SQL is "broken": it's powerful and simple, and people will always do the wrong thing.
Was teaching a class on SQL; half my time was spent reminding them that the examples with string concatenation were bad and that they should use prepared statements (JDBC).
Come practice time, half the class did string concatenation anyway.
This is why I love LINQ and modern parameterized query template strings in JS: they make the right thing easier than the wrong thing.
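The prepared-statement lesson translates directly to Python's sqlite3, if a JDBC example isn't handy. A sketch showing why placeholders are safe where concatenation is not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# A value full of SQL metacharacters, stored verbatim because the
# placeholder sends it out-of-band rather than splicing it into the query.
hostile = "O'Brien'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

# The concatenation version would be a syntax error here and an injection
# hole in general:
#   conn.execute("INSERT INTO users (name) VALUES ('" + hostile + "')")

row = conn.execute("SELECT name FROM users WHERE name = ?", (hostile,)).fetchone()
print(row[0])
```

The quotes and semicolons in the input are inert data, which is the whole point of prepared statements.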
I also really like the way Androidx's Room handles query parameters and the corresponding APIs.
    @Dao
    public interface UserDao {
        @Query("SELECT * FROM user")
        List<User> getAll();

        @Query("SELECT * FROM user WHERE uid IN (:userIds)")
        List<User> loadAllByIds(int[] userIds);

        @Query("SELECT * FROM user WHERE first_name LIKE :first AND " +
               "last_name LIKE :last LIMIT 1")
        User findByName(String first, String last);

        @Insert
        void insertAll(User... users);

        @Delete
        void delete(User user);
    }
I really don't understand why people think it's a good idea to use CSV. In English settings, the comma is used as a thousands separator in large numbers, e.g. 1,000,000 for one million; in German, the comma is the decimal separator, e.g. 1,50 € for 1 euro and 50 cents. And of course, commas appear in free-text fields. Given all that, it is only logical to use TSV instead!
CSV can handle commas in fields just fine (quotes are required in that case). The root problem here is not the format, it's a bug in the CSV exporter used.
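The reading side of that rule, for anyone who hasn't seen it: a conforming parser keeps a quoted comma inside its field, so the column count doesn't change. A minimal check with Python's csv module:

```python
import csv
import io

# A quoted field keeps its comma; the row still has exactly two columns.
line = 'MD,"Moldova, Republic of"\n'
row = next(csv.reader(io.StringIO(line)))
print(row)  # ['MD', 'Moldova, Republic of']
```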
I learned to program at 33 or so (in bioinformatics), and my first real lesson a couple of days in was: "Never ever use CSV". I've never used pd.read_csv() without sep="\t". Idk where CSV came from or who thought it was a good idea. It must predate spreadsheets, because a tab puts you in the next cell, so tabs can simply never be entered into any table by our biologist colleagues.
I guess it's also why all our fancy (as in tsv++?) file types (like GTF and BED) are all tab (or spaces) based. Those fields often have commas in cells for nested lists etc.
I wish sep="\t" was default and one would have to do pd.read/to_tsv(sep=",") for csv. It would have saved me hours and hours of work and idk cross the 79 chars much less often ;)
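The appeal of tab as a delimiter is easy to demonstrate with the stdlib csv module (pandas' pd.read_csv(..., sep="\t") does the same split); the example data below is made up:

```python
import csv
import io

# With tab as the delimiter, comma-bearing cells (nested lists, GTF-style
# attributes, etc.) pass through with no quoting at all.
tsv = "gene\texon_starts\nBRCA1\t41196312,41197819,41199660\n"
rows = list(csv.reader(io.StringIO(tsv), delimiter="\t"))
print(rows[1])  # ['BRCA1', '41196312,41197819,41199660']
```

The comma-separated coordinate list survives as a single cell, which is exactly the property GTF/BED-style formats rely on.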
Funny story: I once bought and started up Galactic Civilizations 3.
It looked horrible, the textures just wouldn't load no matter what I tried. Finally, on a forum, some other user, presumably also from Europe, noted that you have to use decimal point as a decimal separator (my locale uses a comma). And that solved the problem.
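The class of bug behind that texture problem is locale-dependent number parsing. A tiny Python illustration (float() is locale-independent and only accepts the point, so comma-form input has to be handled explicitly):

```python
# "1,50" is the German notation for 1.50.
s = "1,50"

try:
    float(s)
    parsed_directly = True
except ValueError:
    # float() rejects the comma form regardless of the system locale.
    parsed_directly = False

# One explicit normalization (a locale-aware parser is the robust option):
value = float(s.replace(",", "."))
print(value)  # 1.5
```

Code that instead parses with whatever the system locale dictates will read "1.50" as 1 (or fail) on a comma-locale machine, which is presumably what broke the game's texture configs.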
It's one of those things where people think: it's there, and it works.
The whole business of software engineering exists in the gap between "it works today on this input" and "it will also work tomorrow and the day after and after we've scaled 10x and rewrote the serialization abstraction and..."
See also: "Glorp 5.7 Turbo one-shot this for me and it works!"
For CSV, I don't know how this comes out. It depends on the library/programming language. It might be 73786976294838210000, or it might throw an exception, or whatever. I'm just saying JSON will not solve your problems either.
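The "how does this come out" question hinges on whether the parser goes through a 64-bit float. A sketch of the two outcomes in Python (JavaScript's JSON.parse always takes the float path):

```python
import json

# 2**53 + 1 is the first integer a double cannot represent exactly.
n = 9007199254740993

exact = json.loads(str(n))      # Python's json keeps integers exact
via_float = int(float(n))       # the float path silently rounds to 2**53

print(exact == n)       # True
print(via_float == n)   # False: rounds to 9007199254740992
```

So the same digits yield different values depending on the library, which is the parent's point: the problem is numeric representation, not CSV vs. JSON.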
> I really don't understand why people think it's a good idea to use csv.
Because it's easy to understand. Non-technical people understand it. There is tremendous value in that, and that it's underspecified with ugly edge cases doesn't change that.
And then you get the under-reporting of COVID data in the UK, where they passed around CSV files with more rows than the tools they used could handle.
An interchange format needs to include information showing that you have all the data (e.g. a hash or the number of rows), the way JSON/XML/s-expressions have closing symbols to match the start.
If you snapped your fingers and removed CSVs from the world your lights would go out within the hour and you'd starve within the week. Trillions of dollars in business are done every day with separated values files and excel computations. The human relationships solve the data issues.
This very clearly seems like a bug either in their DMS script, or in the DMS job that they don't directly control, since CSV clearly allows for escaping commas (by just quoting them). Would love to see a bug report being submitted upstream as well as part of the "fix".
CSV quoting is dialect dependent. Honestly you should just never use CSV for anything if you can avoid it, it's inferior to TSV (or better yet JSON/JSONL) and has a tendency to appear like it's working but actually be hiding bugs like this one.
I'd go so far as to say any implementation that doesn't conform to RFC 4180[1] is broken and should be fixed. The vast majority of implementations get this right, it's just that some that don't are so high profile it causes people to throw up their hands and give up.
Unrelated to the fundamental issue (part of your pipeline generates invalid CSV): I would never store the name of the country as "Moldova, Republic of". The country's name is "the Republic of Moldova", and that is how I would store it.
Sure, the most common collation scheme for country names is to sort ignoring certain prefixes like "The Republic of", "The", "People's Democratic...", etc. but this is purely a presentation layer issue (how to order a list of countries to a user) that should be independent of your underlying data.
Sure "hacking" the name of the country like this to make the traditional alphabetical ordering match a particular ordering desired to aid human navigation has a lot of history (encyclopedia indexes, library indexes, record stores, etc.) but that was in the age of paper and physical filing systems.
Store the country name correctly and then provide a custom sort or multiple custom sorts where such functionality belongs - in the presentation layer.
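The presentation-layer sort described above amounts to deriving a collation key that skips the prefixes. A sketch (the prefix list here is illustrative, not complete):

```python
# Store the real name; sort on a derived key that ignores leading articles
# and "Republic of"-style prefixes. Longer prefixes are listed first so
# "The Republic of " wins over "The ".
PREFIXES = ("The Republic of ", "Republic of ", "The ")

def collation_key(name):
    for prefix in PREFIXES:
        if name.startswith(prefix):
            return name[len(prefix):]
    return name

countries = ["Republic of Moldova", "Monaco", "The Gambia"]
print(sorted(countries, key=collation_key))
# ['The Gambia', 'Republic of Moldova', 'Monaco']
```

The stored names stay correct; only the ordering logic knows about the prefixes, and you can keep several such keys for different audiences.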
Considering the scope, this could be more easily resolved by just stripping ", Republic of" from that specific string (assuming "Moldova" on its own is sufficient).
I personally shy away from binary formats whenever possible. For my column-based files I use TSV or the pipe character as the delimiter. Even Excel accepts such files if you include "sep=|" as the first line.
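Emitting a pipe-delimited file with Excel's delimiter hint line (spelled "sep=|") is a few lines in Python; a sketch:

```python
import csv
import io

buf = io.StringIO()
# The hint line goes before the data; Excel reads it to pick the delimiter.
buf.write("sep=|\n")
w = csv.writer(buf, delimiter="|")
w.writerow(["country", "note"])
w.writerow(["Moldova, Republic of", "commas are harmless here"])

output = buf.getvalue()
print(output)
```

With pipe as the delimiter, the comma in the country name needs no quoting at all; only a literal "|" in a cell would get quoted.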
Sure, but why Moldova of all places? I've seen this form usually for places where there's a dispute for the short name, like Nice/Naughty Korea, Taiwan/West Taiwan, or Macedonia/entitled Greek government.
Come on man. What are we doing here. This is not even anything interesting like Norway being interpreted as False in YAML. This is just a straightforward escaping issue.
https://news.ycombinator.com/item?id=47229064
And if it’s tab delimited usually people call them tsvs.
The "dialect dependent" part is usually about escaping double quotes, new lines and line continuations.
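The two most common quote-escaping dialects can both be handled by Python's csv reader; a sketch contrasting RFC-style doubled quotes with backslash escaping:

```python
import csv
import io

# RFC 4180 style: a literal double quote inside a quoted field is doubled.
doubled = 'a,"he said ""hi""",c\n'
rfc_row = next(csv.reader(io.StringIO(doubled)))
print(rfc_row)  # ['a', 'he said "hi"', 'c']

# Backslash style, as emitted by some non-conforming writers: turn off
# doublequote handling and declare the escape character instead.
backslashed = 'a,"he said \\"hi\\"",c\n'
reader = csv.reader(io.StringIO(backslashed), doublequote=False, escapechar="\\")
bs_row = next(reader)
print(bs_row)  # ['a', 'he said "hi"', 'c']
```

Both inputs decode to the same three fields, but only if the reader is told which dialect it is looking at, which is exactly the "dialect dependent" trap.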
Not a portable format, but it is not too bad (for this use) either, considering the country list is mostly static.
[1]: https://datatracker.ietf.org/doc/html/rfc4180
Ah, but what _is_ the boundary, asks Transnistria?
It's never the serialisation that gets fixed; it must be the data's fault.
Let's rename a country because we're not capable of handling commas in data.