I am on the receiving end: I have to parse CSV generated by various (very expensive, very complicated) eCAD software packages. And it's often garbage. Those expensive software packages trip on things like escaping quotes. There is no way to recover a CSV line that has an unescaped double quote.
I can't point to a strict spec and say "you are doing this wrong", because there is no strict spec.
Then there are the TSV and semicolon-separated-values variants.
Did I mention that field quoting was optional?
And then there are banks, which take this to another level. My bank (mBank), which is known for levels of programmer incompetence never seen before (just try the mobile app), generates CSVs that are supposed to "look" like paper documents. So, the first 10 or so rows will be a "letterhead", with addresses and stuff in various random columns. Then there will be your data, but they will format currency values as prettified strings, for example "34 593,12 USD", instead of producing one column with a number and another with currency.
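For what it's worth, a minimal sketch of the cleanup this forces on the consumer, assuming the only quirks are space-grouped thousands, a comma decimal separator and a trailing currency code (real exports of this kind tend to have more surprises):

```python
from decimal import Decimal

def parse_pretty_amount(cell: str) -> tuple[Decimal, str]:
    # "34 593,12 USD" -> (Decimal("34593.12"), "USD")
    amount_text, currency = cell.rsplit(" ", 1)
    digits = amount_text.replace("\u00a0", "").replace(" ", "").replace(",", ".")
    return Decimal(digits), currency

print(parse_pretty_amount("34 593,12 USD"))  # (Decimal('34593.12'), 'USD')
```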
I used to be a data analyst at a Big 4 management consultancy, so I've seen an awful lot of this kind of thing. One thing I never understood is the inverse correlation between "cost of product" and "ability to do serialisation properly".
Free database like Postgres? Perfect every time.
Big complex 6-figure e-discovery system? Apparently written by someone who has never heard of quoting, escaping or the difference between \n and \r and who thinks it's clever to use 0xFF as a delimiter, because in the Windows-1252 code page it looks like a weird rune and therefore "it won't be in the data".
"Enterprise software" has been defined as software that is purchased based on the decisions of people that will not use it. I think that explains a lot.
> Big complex 6-figure e-discovery system? Apparently written by someone who has never heard of quoting...
It's because above a certain size, system projects are captured by the large consultancy shops, who eat the majority of the price in profit and management overhead...
... and then send the coding work to a lowest-cost someone who has never heard of quoting, etc.
And it's a vicious cycle, because the developers in those shops that do learn and mature quickly leave for better pay and management.
(Yes, there's usually a shit hot tiger team somewhere in these orgs, but they spend all their time bailing out dumpster fires or landing T10 customers. The average customer isn't getting them.)
Try to live in a country where "," is the decimal point. Of course this causes numerous interoperability issues or hidden mistakes in various data sets.
There would have been many better separators... but good idea to bring formatting into it as well...
There was a long period of my life when I thought .csv meant cemicolon separated, because all I saw were cemicolon-separated files, and I had no idea of the pain.
Not sure if they still do this, but Klarna would send us ", " separated files. If there wasn't a space after the comma then it was to be read as a decimal point. Most CSV parsers don't/didn't allow you to specify multi-character separators. In the end I just accepted that we had one field for krona and one for öre, and that most fields would need to have a leading space removed.
There are better separators included in ASCII, but not used as often: 28 File Separator, 29 Group Separator, 30 Record Separator and 31 Unit Separator.
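For illustration, a minimal sketch of what a unit/record separator format could look like; it only helps as long as the data itself never contains those control characters (escaping is still your problem otherwise):

```python
US, RS = "\x1f", "\x1e"   # ASCII 31 Unit Separator, 30 Record Separator

def encode(rows):
    return RS.join(US.join(fields) for fields in rows)

def decode(blob):
    return [record.split(US) for record in blob.split(RS)]

rows = [["name", "note"], ["Alice", 'said "hi, there"']]
assert decode(encode(rows)) == rows   # commas and quotes pass through untouched
```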
> Then there will be your data, but they will format currency values as prettified strings, for example "34 593,12 USD", instead of producing one column with a number and another with currency.
To be fair, that's not a problem with CSV but with the provider's lack of data literacy.
I worked in a web shop which had to produce spreadsheets which people wanted to look at in Excel. I gave them so many options, and told each client to experiment and choose the option which worked for them. In the end, we had (a) UTF-8 CSV, (b) UTF-8 CSV with BOM, (c) UTF-16 TSV, (d) UTF-8 HTML table with a .xlsx file extension and a lying Content-Type header which claimed it was an Excel spreadsheet.
Option a worked fine so long as none of the names in the spreadsheet had any non-ASCII characters.
Option d was by some measures the worst (and was definitely the largest file size), but it did seem to consistently work in Excel and Libre Office. In fact, they all worked without any issue in Libre Office.
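For reference, option (b) is roughly one line in Python thanks to the `utf-8-sig` codec, which writes the BOM that Excel uses to detect the encoding (file name here is just an example):

```python
import csv

with open("report.csv", "w", encoding="utf-8-sig", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "city"])
    writer.writerow(["Zoë", "Kraków"])   # non-ASCII survives the Excel round trip
```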
I agree and as a result I have completely abandoned CSV.
I use the industry standard that everyone understands: ECMA-376, ISO/IEC 29500 aka .xlsx.
Nobody has any problems producing or ingesting .xlsx files. The only real problem is the confusion between numbers and numeric text that happens when people use excel manually. For machine to machine communication .xlsx has never failed me.
Now you might argue that ECMA-376 accounts for this, because it has a `date1904` flag, which has to be 0 for 1900-based dates and 1 for 1904-based dates. But what does that really accomplish if you can’t be sure that vendors understand subtleties like that if they produce or consume it? Last time I checked (maybe 8 years ago), spreadsheets created on Windows and opened on Mac still shifted dates by four years, and the bug was already over twenty years old at that time.
And the year-1904 issue is just the one example that I happen to know.
I have absolutely zero confidence in anything that has touched, or might have touched, MS Excel with anything short of a ten-foot pole.
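To make the subtlety concrete, here is a hedged sketch of the conversion a consumer has to get right; the epochs follow the commonly documented convention (1899-12-30 for the 1900 system, absorbing Excel's phantom 1900-02-29, and 1904-01-01 for the 1904 system), so treat it as an illustration rather than a reference implementation:

```python
from datetime import date, timedelta

def excel_serial_to_date(serial: int, date1904: bool) -> date:
    # Which epoch applies depends on the workbook's date1904 flag.
    epoch = date(1904, 1, 1) if date1904 else date(1899, 12, 30)
    return epoch + timedelta(days=serial)

# The same serial number means two different days, four years and a day apart:
print(excel_serial_to_date(40000, date1904=False))  # 2009-07-06
print(excel_serial_to_date(40000, date1904=True))   # 2013-07-07
```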
Parsing Excel files in simple data interchange use cases that don't involve anyone manually using spreadsheets is an instance of unnecessary complexity. There are plenty of alternatives to CSV that remain plaintext, have much broader support, and are more rigorous than Excel in ensuring data consistency. You can use JSON, XML, ProtoBuf, among many other options.
If only there were character codes specifically meant to separate fields and records.... we wouldn't have to worry so much about quoted commas or quoted quotes.
That isn't solving anything, just changing the problem. If I want to store a string containing 0x1C - 0x1F in one of the columns then we're back in the exact same situation while also losing the human readable/manually typeable aspect people seem to love about CSV. The real solution is a strict spec with mandatory escaping.
But it is terrible at that, because there is no widely adhered-to standard[1]; the sender and receiver often disagree on the details of what exactly a CSV is.
[1]: yes, I know about RFC 4180. But csvs in the wild often don't follow it.
The only times I hated CSV were when it came from another system I had no control over. For example Windows and their encodings, or some other proprietary BS.
But CSV under controlled circumstances is very simple.
And speaking of Wintendo, the bonus is often that you can go straight from CSV to Excel presentation for the middle management.
The post should at least mention in passing the major problem with CSV: it is a "no spec" family of de-facto formats, not a single thing (it is an example of "historically grown").
And the omission of that means I'm going to have to call this out for its bias (but then it is a love letter, and love is blind...).
Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files, and there are many flavours that are incompatible with each other, in the sense that a reader for one flavour would not be suitable for reading the other and vice versa. Quoting, escaping and UTF-8 support are particular problem areas, but there is also the fact that you cannot tell programmatically whether line 1 contains column header names or already contains data (you will have to make an educated guess, and there are ambiguities that cannot be resolved by machine).
Having worked extensively with SGML for linguistic corpora, with XML for Web development and recently with JSON, I would say that, programmatically, JSON is the most convenient to use in client code, but its lack of types also makes it less broadly useful than SGML, which is rightly used by e.g. airlines for technical documentation and by digital humanities researchers to encode/annotate historic documents, for which it is very suitable, but which programmatically puts more burden on developers. You can't have it all...
XML is simpler than SGML, has perhaps the broadest scope and a good software support stack (mostly FOSS), but it has been abused a lot (nod to Java coders: Eclipse, Apache UIMA); then again, a format is not responsible for how people use or abuse it. As usual, the best developers know the pros and cons and make good-taste judgments about what to use each time, but some people go ideological.
(Waiting for someone to write a love letter to the infamous Windows INI file format...)
In fairness there are also several ambiguities with JSON. How do you handle multiple copies of the same key? Does the order of keys have semantic meaning?
jq supports several pseudo-JSON formats that are quite useful like record separator separated JSON, newline separated JSON. These are obviously out of spec, but useful enough that I've used them and sometimes piped them into a .json file for storage.
Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
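Newline-separated JSON is trivial to consume without jq too; a minimal sketch, assuming a hypothetical data.jsonl with one complete JSON value per line:

```python
import json

with open("data.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records), "records")
```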
> How do you handle multiple copies of the same key? Does the order of keys have semantic meaning?
This is also an issue due to the way the order of keys works in JavaScript, too.
> record separator separated JSON, newline separated JSON.
There is also JSON with no separators, although that will not work very well if any of the top-level values are numbers.
> Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
Yes, as well as non-Unicode text (including (but not limited to) file names on some systems), and (depending on the implementation) 64-bit integers and big integers. Possibly also date/time.
I think DER avoids these problems. You can specify whether or not the order matters, you can store Unicode and non-Unicode text, NaN and Infinity, raw byte arrays, big integers, and date/time. (It avoids some other problems as well, including canonicalization (DER is already in canonical form) and other issues. Although, I have a variant of DER that avoids some of the excessive date/time types and adds a few additional types, but this does not affect the framing, which can still be parsed in the same way.)
A variant called "Multi-DER" could be made up, which is simply concatenating any number of DER files together. Converting Multi-DER to BER is easy just by adding a constant prefix and suffix. Converting Multi-DER to DER is almost as easy; you will need the length (in bytes) of the Multi-DER file and then add a prefix to specify the length. (In none of these cases does it require parsing or inspecting or modifying the data at all. However, converting the JSON variants into ordinary JSON does require inspecting the data in order to figure out where to add the commas.)
“No one owns CSV. It has no real specification (yes, I know about the controversial ex-post RFC 4180), just a set of rules everyone kinda agrees to respect implicitly. It is, and will forever remain, an open and free collective idea.”
They even seem to think it is a good thing. But I don't see how having a bunch of implementations that can't agree on the specifics of a file/interchange format is a good thing. And being free and open is completely orthogonal: there are many proprietary formats that don't have a spec, and many open formats that do have a spec (like, say, JSON).
Waiting for someone to write a love letter to the infamous Windows INI file format
I actually miss that. It was nice when settings were stored right alongside your software, instead of being left behind all over a bloated registry. And the format was elegant, if crude.
I wrote my own library for encoding/writing/reading various datatypes and structure into ini's, in a couple different languages, and it served me well for years.
> instead of being left behind all over a bloated registry
Really? I think the idea of a central, generic, key-value pair database for all the settings on a system is probably the most elegant reasonable implementation there could be.
The initial implementation of the Windows Registry wasn't good. It was overly simplistic and pretty slow. Though the "bloat" (whatever that means) of the registry hasn't been an actual issue in over 20 years. The only people invested in convincing you "it's an issue" are the makers of CCleaner-type software that promises to "speed up your computer" if you just pay $6.99.
How many rows do you need in a sqlite database for it to be "bloated"?
As someone living in a country where , is used as the decimal separator, I cannot begin to describe the number of times CSV data has caused me grief. This becomes especially common in an office environment where Excel is the de facto only data handling tool that most people can and will use. Here the behavior of loading data becomes specific to the individual machine and changes over time (e.g. when IT suddenly forces a reset of MS Office application languages to the local one).
That said, I don't really know of any alternative that won't be handled even worse by my colleagues...
To be honest, I'm wondering why you are rating JSON higher than CSV.
> Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files,
There is, actually, RFC 4180 IIRC.
> there are many flavours that are incompatible with each other in the sense that a reader for one flavour would not be suitable for reading the other and vice versa.
"There are many flavours that deviate from the spec" is a JSON problem too.
> you cannot tell programmatically whether line 1 contains column header names or already data (you will have to make an educated guess but there ambiguities in it that cannot be resolved by machine).
Also a problem in JSON
> Quoting, escaping, UTF-8 support are particular problem areas,
Sure, but they are no more nor no less a problem in JSON as well.
Have you had to work with csv files from the wild much? I'm not being snarky but what you're talking about is night and day to what I've experienced over the years.
There aren't vast numbers of different JSON formats. There's practically one and realistically maybe two.
Headers are in each line, utf8 has never been an issue for me and quoting and escaping are well defined and obeyed.
This is because for datasets, almost exclusively, the file is machine written and rarely messed with.
CSV files have all kinds of separators and quote characters; some parsers accept multi-line fields and some don't; people sort files, which mostly works until there's a multi-line field. All kinds of line endings, encodings, and mixed encodings where people have combined files.
I tried using ASCII record separators after dealing with so many issues with commas, semicolons, pipes, tabs etc and still data in the wild had these jammed into random fields.
Lots of these things don't break when you hit the issue either, the parsers happily churn on with garbage data, leading to further broken datasets.
Also they're broken for clients if the first character is a capital I.
- “Each record is located on a separate line, delimited by a line break (CRLF)” ⇒ editing .csv files using the typical Unix text editor is complicated.
- “Spaces are considered part of a field and should not be ignored”
- “Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes” ⇒ fields containing lone carriage returns or new lines need not be enclosed in double quotes.
INI was for a long time the seemingly preferred configuration format in the Python community, as I recall it.
Haven’t been a full-time Python dev in some time though; it seems TOML has supplanted that, but I remember thinking how interesting it was that Python had a built-in INI parser and serializer.
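That built-in support is `configparser`, which is still in the standard library; a small example for anyone who hasn't seen it (file name is made up):

```python
import configparser

cfg = configparser.ConfigParser()
cfg["server"] = {"host": "127.0.0.1", "port": "8080"}
with open("app.ini", "w") as f:
    cfg.write(f)

cfg2 = configparser.ConfigParser()
cfg2.read("app.ini")
print(cfg2["server"]["port"])   # '8080' -- everything comes back as a string
```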
I lived through the SOAP/WSDL horror, with its numerous standards and the lack of compatibility between stacks in different programming languages. And I have seen both XML and CSV abused. CSV is preferable over XML. Human-readability matters. Relative simplicity matters.
Even though JSON may also be interpreted differently by different tools, it is a good default choice for communicating between programs.
> lack of compatibility between stacks in different programming languages
Well, that sure beats OpenAPI lack of compatibility between stacks in the same programming language.
I think the fact that one can't randomly concatenate strings and call it "valid XML" is a huge bonus over the very common "join strings with comma and \r\n", non-RFC-4180-compliant (therefore mostly unparseable without human/LLM interaction) garbage people often pretend is CSV.
While CSV isn't exactly grammared or standardised like XML, I think of it as more schema'd than JSON. There might be data corruption or consistency issues, but there is implicitly a schema: every line is exactly n fields, and the first line might contain field names.
When a JSON API turns out to have optional fields it usually shows through trial and error, and unlike CSV it's typically not considered a bug you can expect the API owner to fix. In CSV 'missing data' is an empty string rather than nulls or their cousins because missing fields aren't allowed, which is nice.
I also like that I can write my own ad hoc CSV encoder in most programming languages that can do string concatenation, and probably also a suitable decoder. It helps a lot in some ETL tasks and debugging. Decent CSV also maps straight to RDBMS tables, if the database for some reason fails at immediate import (e.g. too strict expectations) into a newly created table it's almost trivial to write an importer that does it.
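For example, the kind of ad hoc encoder being described might look like this in Python (RFC 4180-style quoting only, no attempt to cover every dialect in the wild):

```python
def encode_row(fields):
    out = []
    for field in fields:
        text = "" if field is None else str(field)
        if any(c in text for c in ',"\r\n'):
            text = '"' + text.replace('"', '""') + '"'   # quote field, double inner quotes
        out.append(text)
    return ",".join(out)

print(encode_row(["id", 'a "quoted" name', "line1\nline2"]))
```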
JSON is not schema’d per se and intentionally so. There’s jsonschema which has better expressiveness than inference of a tabular schema, as it can reflect relationships.
There is no file format that works out of box under all extreme corner cases.
You would think that, e.g., XML-defined WSDL with an XSD schema is well battle-proven. Two years ago I encountered (and am still dealing with) a WSDL from a major banking vendor that is technically valid, but no open source library in Java (of all languages) was able to parse it successfully or generate binding classes out of the box.
Heck, flat files can end up with extreme cases, just work enough with legacy banking or regulatory systems and you will see some proper shit.
The thing is, any sort of critical integration needs to be battle tested and continuously maintained, otherwise it will eventually go bad, even a decade after implementation and regular use without issues.
XML is a pretty good markup language. Using XML to store structured data is an abuse of it. All the features that are useful for markup are not useful for structured data and only add to the confusion.
> Waiting for someone to write a love letter to the infamous Windows INI file format...
Honestly, it’s fine. TOML is better if you can use it, but otherwise for simple applications, it’s fine. PgBouncer still uses INI, though that in particular makes me twitch a bit, due to discovering that if it fails to parse its config, it logs the failed line (reasonable), which can include passwords if it’s a DSN string.
Well, once you get over the fact that information in a TOML file can be out of order in any place, written with any mix of 3 different key encodings, and broken down in any random way... then yes, the rest of TOML is good.
CSV is ever so elegant but it has one fatal flaw - quoting has "non-local" effects, i.e. an extra or missing quote at byte 1 can change the meaning of a comma at byte 1000000. This has (at least) two annoying consequences:
1. It's tricky to parallelise processing of CSV.
2. A small amount of data corruption can have a big impact on the readability of a file (one missing or extra quote can bugger the whole thing up).
So these days for serialisation of simple tabular data I prefer plain escaping, e.g. comma, newline and \ are all \-escaped. It's as easy to serialise and deserialise as CSV but without the above drawbacks.
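A minimal sketch of that escaping-only scheme (not any particular tool's format): comma, newline and backslash are the only special characters, and a decoder never needs to look more than one character back to know what a byte means.

```python
def escape(field: str) -> str:
    return (field.replace("\\", "\\\\")
                 .replace(",", "\\,")
                 .replace("\n", "\\n"))

def split_record(line: str) -> list[str]:
    fields, cur, i = [], [], 0
    while i < len(line):
        c = line[i]
        if c == "\\" and i + 1 < len(line):          # escaped character
            cur.append("\n" if line[i + 1] == "n" else line[i + 1])
            i += 2
        elif c == ",":                               # unescaped delimiter
            fields.append("".join(cur)); cur = []
            i += 1
        else:
            cur.append(c); i += 1
    fields.append("".join(cur))
    return fields

row = ["a,b", "multi\nline", "back\\slash"]
line = ",".join(escape(f) for f in row)
assert split_record(line) == row   # literal newlines only ever end records
```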
> It’s that libraries and tools to parse CSV tend to suck. Whereas JSON is the lingua franca of data.
This isn't the case. An incredible amount of effort and ingenuity has gone into CSV parsing because of its ubiquity. Despite the lack of any sort of specification, it's easily the most widely supported data format in existence in terms of tools and language support.
It's just that people need to use specialized tools for encoding and decoding it instead of something like ",".join(row) and row.split(",").
I have seen people try to build up JSON strings like that too, and then you have all the same problems.
So there is no problem with CSV except that maybe it's too deceptively simple. We also see people trying to build things like URLs and query strings without using a proper library.
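A quick illustration of the "use the proper tool" point with Python's stdlib csv module versus the naive split:

```python
import csv, io

rows = [["id", "comment"], ["1", 'said "hi", then left']]

buf = io.StringIO()
csv.writer(buf).writerows(rows)          # handles quoting and escaping for you
text = buf.getvalue()

assert list(csv.reader(io.StringIO(text))) == rows
assert text.splitlines()[1].split(",") != rows[1]   # naive split mangles the row
```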
Until you have a large amount of data and need either random access or to work on multiple full columns at once. Duplicated key names mean it's very easy for data in jsonlines format to be orders of magnitude larger than the same data as CSV, which is incredibly annoying if your processing for it isn't amenable to streaming.
Eh, it really isn't. The format does not lend itself to tabular data, instead the most natural way of representing data involves duplicating the keys N times for each record.
(...and many more.) "This comes up every single time someone mentions CSV. Without fail." - top reply from burntsushi in that last link, and it remains as true today as in 2017 :D
You're not wrong though, we just need some major text editor to get the ball rolling and start making some attempts to understand these characters, and the rest will follow suit. We're kinda stuck at a local optimum which is clearly not ideal but also not troublesome enough to easily drum up wide support for ADSV (ASCII Delimiter Separated Values).
If there were well-known visible glyphs for those characters, and keys on a keyboard for typing them, we would probably have RSV files. Because they are buried down in the non-printable section of the ASCII chart, they are a pain for people to deal with. All it would have taken is one more key on the keyboard, maybe splitting the tab key in half.
Can't type them on a keyboard I guess, or generally work with them in the usual text-oriented tools? Part of the appeal of CSV is you can just open it up in Notepad or something if you need to. Maybe that's more a critique of text tools than it is of ASCII record separator characters.
I was really excited when I learned of these characters, but ultimately if it’s in ASCII then it’s in-band and will eventually require escaping leading to the same problem.
> I don't get how this is not widely used as standard.
It requires bespoke tools for editing, and while CSV is absolute garbage it can be ingested and produced by most spreadsheet software, as well as databases.
I couldn't find the story on it, but there was an instance of a config for some major service getting truncated, but since it was yaml it was more difficult to figure out that that was what happened. I think in AWS, but I can't find the story, so can't really remember.
And fully fair that you can have similar issues in other formats. I think the complaint here was that it was a bit harder, specifically because it did not trip up any of the loading code. With a big lesson learned that configs should probably either go pascal string style, where they have an expected number of items as the first part of the data, or xml style, where they have a closing tag.
Really, it is always amusing to find how many of the annoying parts of XML turned out to be somewhat more well thought out than people want to admit.
Really depends on how the CSV is generated/transferred. If the output of the faulting software is line-buffered then it's quite likely that a failure would terminate the file at a line break.
I want to push Sqlite as a data interchange format! It has the benefit of being well defined, and can store binary data, like images for product pictures, inside the database. Not a good idea if you're trying to serve users behind a web app, but as interchange it's better than a zip file with filenames that have to be "relinked".
For context: I have a LOT of experience of interchange formats, like "full time job, every day, all day, hundreds of formats, for 20-years" experience.
Based on that experience I have come to one key, but maybe, counter-intuitive truth about interchange formats:
- Too much freedom is bad.
Why? Generating interchange data is cheaper than consuming it, because the creator only needs to consider the stuff they want to include, whereas the consumer needs to consider every single possible edge case and or scenario the format itself can support.
This is why XML is WAY more costly to ingest than CSV, because in XML someone is going to use: attributes, CDATA, namespaces, comments, different declarations, includes, et al. In CSV they're going to use rows, a field separator, and quotes (with or without escaping). That's it. That's all it supports.
Sqlite as an interchange format is a HORRIFYING suggestion, because every single feature Sqlite supports may need to be supported by consumers. Even if you curtailed Sqlite's vast feature set, you've still created something vastly more expensive to consume than XML, which itself is obnoxious.
My favorite interchange formats are, in order:
- CSV, JSON (inc. NDJSON), YAML, XML, BSON (due to type system), MessagePack, Protobuf, [Giant Gap] Sqlite, Excel (xlsx, et al)
More features mean more cost, more edge cases, more failures, more complex consumers. Keep in mind, this is ONLY about interchange formats between two parties, I have wildly different opinions about what I would use for my own application where I am only ever the creator/consumer, I actually love Sqlite for THAT.
Oh, this is interesting. Are you tying different systems together? If so, do you use some preferred intermediate format? Do you have a giant library of * -> intermediate -> * converters that you sprinkle between everything? Or maybe the intermediate format is in memory?
Not the person you were replying to, but from my experience CSV is a good place to define data types coming into a system and a safe way to dump data as long as I write everything down.
So I might do things like have every step in a pipeline begin development by reading from and writing to CSV. This helps with parallel dev work and debugging, and is easy to load into any intermediate format.
> do you use some preferred intermediate format?
This is usually dictated by speed vs money calculations, weird context issues, and familiarity. I think it's useful to look at both "why isn't this a file" and "why isn't this all in memory" perspectives.
For tabular/time-series greater than 100k rows I personally feel like parquet cannot be beat. It's self-describing, strongly-typed, relatively compact, supports a bunch of io/decode skipping, and is quite fast.
Orc also looks good, but isn't well supported. I think parquet is optimal for now for most analytical use-cases that don't require human readability.
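For a sense of what that workflow looks like, a minimal parquet round trip via pandas (with pyarrow installed as the engine; the file name is just an example):

```python
import pandas as pd

df = pd.DataFrame({"ts": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
df.to_parquet("timeseries.parquet")      # typed, compressed, columnar

subset = pd.read_parquet("timeseries.parquet", columns=["value"])  # column pruning
print(subset.dtypes)                     # dtypes survive the round trip
```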
It is an interchange format, so it is inter-system by virtue of that. If I am a self-creator/consumer the format I use can be literally anything even binary memory dumps.
Interesting! I've dealt with file interchange between closed source (and a couple open source) programs, but that was a while ago. I've also had to deal with csvs and xslts between SaaS vendors for import export of customer's data. I've done a bunch of reverse engineering of proprietary formats so we could import the vendor's files, which had more information than they were willing to export in an interchange format. Sometimes they're encrypted and you have to break it.
What you say is fair. CSV is underspecified though: there's no company called CSV that's gonna sue for trademark enforcement, and there's no official CSV standard library that everyone uses. (Some exist, but there are so many naive implementations written from first principles, because how hard could it be? Output records, and use a comma and a newline (of which there are three possible options).)
How often do you deal with multiple CSV files representing the multiple tables that are actually used by vendors internally, vs one giant flattened CSV with hundreds of columns and lots of empty cells? I don't have your level of experience with CSVs, but I've dealt with them being a mess, where the other side implements whatever they think is reasonable given the name "comma separated values".
With sqlite, we're in the Internet age, and so I presume this hypothetical developer would use the sqlite library and not implement their own library from scratch for funsies. This then leads to types, database normalization, multiple tables. I hear you that too many choices can be bad, and XML is a great example of this, but sqlite isn't XML and isn't CSV.
It's hard to have this discussion in the abstract, so I'll be forthcoming about where I'm coming from, which is CSV import/export between vendors for stores, think like Doordash to UberEATS. The biggest problem we have is images of the items, and how to deal with that. It's an ongoing issue how to get them, but the failure mode, which does happen, is that when moving vendor, they just have to redo a lot of work that they shouldn't have to.
Ultimately the North Star I want to push towards is moving beyond CSVs, because it'll let people who currently have to hand-edit the CSV so every row imports properly not have to do that. They'd still exist, but instead have to deal with, well, what you see with XML files, which has its shortcomings, as you mention, but at least once how a vendor is using it is understood, individual records are generally understandable.
I was moved so I don't deal with import export currently, but it's because sqlite is so nice to work with on personal projects where it's appropriate that I want to push the notion of moving to sqlite over csvs.
One very minor problem is that you max out storing blobs of 2GB(? I think, maybe 4GB). Granted few will hit this, but this limit did kill one previous data transfer idea of mine.
> not a good idea if you're trying to serve users behind a web app
I use Sqlite for a static site! Generating those static pages out to individual files would involve millions of individual files. So instead I serve up a sqlite database over HTTP, and use a sqlite wasm driver [0] to load (database) pages as needed. Good indexing cuts down on the number of pages it grabs, and I can even get full text search!
The only feature I'm missing is compression, which is complicated because popular extensions like sqlite-zstd are written in Rust.
I don't understand why CSV became a thing when TSV, or a format using the nowadays-weird ASCII control characters like start/end of text, start of heading, horizontal/vertical tab, and the file/group/record/unit separators, was an option.
It seems many possible designs would've avoided the quoting chaos and made parsing sort of trivial.
Any time you have a character with a special meaning you have to handle that character turning up in the data you're encoding. It's inevitable. No matter what obscure character you choose, you'll have to deal with it
I understand this argument in general. But basically everyone has some sort of spreadsheet application that can read CSV installed.
In some alternate world where this "binary" format caught on, it would be a very minor issue that it isn't human readable, because everyone would have a tool that is better at reading it than humans are. (See the above-mentioned non-local property of quotes, where you may think you are reading rows but are actually inside a single cell.)
Makes me also wonder whether, if something like CBOR had caught on early enough, we would just be used to using something like `jq` to read it.
Except we have all these low ASCII characters specifically for this purpose that don't turn up in the data at all. But there is, of course, also an escape character specifically for escaping them if necessary.
Even if you find a character that really is never in the data - your encoded data will contain it. And it's inevitable that someone encodes the encoded data again. Like putting CSV in a CSV value.
You can't type any of those on a typewriter, or see them in old or simple editors, or with no editor at all, like just catting to a tty.
If you say those are contrived examples that don't matter any more then you have missed the point and will probably never acknowledge the point and there is no purpose in continuing to try to communicate.
One can only ever remember and type out just so many examples, and one can always contrive some response to any single or finite number of examples, but they are actually infinite, open-ended.
Having a least common denominator that is extremely low that works in all the infinite situations you never even thought of, vs just pretty low and pretty easy to meet in most common situations, is all the difference in the world.
The difference is that the comma and newline characters are much more common in text than 0x1F and 0x1E, which, if you restrict your data to alphanumeric characters (which you really should), will never appear anywhere else.
Exactly. "Use a delimiter that's not in the data" is not real serialisation, it's fingers-crossed-hope-for-the-best stuff.
I have in the past done data extractions from systems which really can't serialise properly, where the only option is to concat all the fields with some "unlikely" string like @#~!$ as a separator, then pick it apart later. Ugh.
> Exactly. "Use a delimiter that's not in the data" is not real serialisation, it's fingers-crossed-hope-for-the-best stuff.
It's not doing just this, you pick something that's likely not in the data, and then escape things properly. When writing strings you can write a double quote within double quotes with \", and if you mean to type the designated escape character you just write it twice, \\.
The only reason you go for something likely not in the data is to keep things short and readable, but it's not impossible to deal with.
I agree that it's best to pick "unlikely" delimiters so that you don't have to pepper your data with escape chars.
But some people (plenty in this thread) really do think "pick a delimiter that won't be in the data" - and then forget quoting and/or escaping - is a viable solution.
The characters would likely be unique, maybe even by the spec.
Even if you wanted them, we use backslashes to escape strings in most common programming languages just fine; the problem with CSV is that commas aren't easy to recognize, because they might be within a single- or double-quoted string, or might just be a separator.
Can strings in CSV have newlines? I bet parsers disagree since there's no spec really.
In TSV as commonly implemented (for example, the default output format of Postgres and MySQL), tab and newline are escaped, not quoted. This makes processing the data much easier. For example, you can skip to a certain record or field just by skipping literal newlines or tabs.
It's a lot easier to type comma than control characters and it's a lot easier to view comma than a tab (which might look like a space).
For automated serialization, plain text formats won out, because it is easy to implement a minimal working solution (both import and export) and, more importantly, almost all systems agree on what plain text is.
We don't really have Apple-formatted text that will show up as binary on Windows. Especially if you are just transferring IDs and numbers, those will fall within ASCII, and that will work even if you are expecting Unicode.
What issues would TSV have? As commonly implemented (for example, the default output format of Postgres and MySQL), in TSV, tab and newline are escaped, not quoted.
CSV's actual problem is that there's no single CSV, and you don't know what type you have (or even if you have single consistent type through the whole file) without trying to parse the whole file and seeing what breaks. Is there quoting? Is that quoting used consistently? Do you have five-digit ZIP codes, or have the East Coast ones been truncated to four digits because they began with zero? Spin the wheel!
> So these days for serialisation of simple tabular data I prefer plain escaping, e.g. comma, newline and \ are all \-escaped. It's as easy to serialise and deserialise as CSV but without the above drawbacks.
For my own parser, I made everything `\` escaped: outside of a quote or double-quote delimited string, any character prefixed with a `\` is read verbatim. There are no special exceptions resulting in `\,` producing a comma while `\a` produces `\a`. This makes it a good rule, because it is only one rule with no exceptions.
I considered this but then went the other way - a \ before anything other than a \, newline or comma is treated as an error. This leaves room for adding features, e.g. \N to signify a SQL NULL.
Regarding quoting and escaping, there are two options that make sense to me - either use quoting, in which case quotes are self-escaped and that's that; or use escaping, in which case quotes aren't necessary at all.
A good way to parallelize CSV processing is to split datasets into multiple files, kinda like manual sharding. xan has a parallel command able to perform a wide variety of map-reduce tasks on split files.
I don't. If I ever have a dataset that requires newlines in a string, I use another method to store it.
I don't know why so many people think every solution needs to to be a perfect fit for every problem in order to be viable. CSV is good at certain things, so use it for those things! And for anything it's not good at, use something else!
True, but in that case I'm not the one choosing how to store it, until I ingest the data, and then I will store it in whatever format makes sense to me.
I don't think we do? It's more that a bunch of companies already have their data in CSV format and aren't willing to invest any effort in moving to a new format. Doesn't matter how much one extolls all the benefits, they know right? They're paying someone else to deal with it.
Of course. I’m not saying I roll my own parser for every project that uses a CSV file, I’m just describing my criteria for using CSV vs some other format when I have the option.
Eh, all you're really saying is "I'm not using CSV. Instead I'm using my CSV." Except that's all that anybody does.
CSV can just as easily support escaping as any other format, but there is no agreement for a CSV format.
After all, a missed escape can just as easily destroy a JSON or XML structure. And parallel processing of text is already a little sketchy simply because UTF-8 exists.
I'm not clear why quotes prevent parallel processing?
I mean, you don't usually parallelize reading a file in the first place, only processing what you've already read and parsed. So read each record in one process and then add it to a multiprocessing queue for multiple processes to handle.
And data corruption is data corruption. If a movie I'm watching has a corrupted bit I don't mind a visual glitch and I want it to keep playing. But with a CSV I want to fix the problem, not ignore a record.
Do you really have a use case where reading itself is the performance bottleneck and you need to parallelize reading by starting at different file offsets? I know that multiple processes can read faster from certain high-end SSD's than just one process, but that's a level of performance optimization that is pretty extraordinary. I'm kind of curious what it is!
> I'm not clear why quotes prevent parallel processing?
Because of the "non-local" effect of quotes, you can't just jump into the middle of a file and start reading it, because you can't tell whether you're inside a quoted section or not. If (big if) you know something about the structure of the data, you might be able to guess. So that's why I said "tricky" instead of "impossible".
Contrast to my escaping-only strategy, where you can jump into the middle of a file and fully understand your context by looking one char on either side.
> Do you really have a use case where reading itself is the performance bottleneck and you need to parallelize reading by starting at different file offsets? I know that multiple processes can read faster from certain high-end SSD's than just one process, but that's a level of performance optimization that is pretty extraordinary. I'm kind of curious what it is!
I used to be a data analyst at a management consultancy. A very common scenario would be that I'm handed a multi-gigabyte CSV and told to "import the data". No spec, no schema, no nothing. Data loss or corruption is totally unacceptable, because we were highly risk-sensitive. So step 1 is to go through the whole thing trying to determine field types by testing them. Does column 3 always parse as a timestamp? Great, we'll call it a timestamp. That kind of thing. In that case, it's great to be able to parallelise reading.
> And data corruption is data corruption
Agreed, but I prefer data corruption which messes up one field, not data corruption which makes my importer sit there for 5 minutes thinking the whole file is a 10GB string value and then throw "EOF in quoted field".
Doing sequential reading into a queue for workers to read is a lot more complicated than having a file format that supports parallel reading.
And the fix to allow parallel reading is pretty trivial: escape new lines so that you can just keep reading until the first unescaped new line and start at that record.
It is particularly helpful if you are distributing work across machines, but even in the single machine case, it's simpler to tell a bunch of workers their offset/limit in a file.
Tab-Separated Value, as implemented by many databases, solves these problems, because tab, newline and other control characters are escaped. For example, the default text serialization format of Postgres (`COPY <table> TO '<file>'` without any options) is this way.
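A rough sketch of that escaped-TSV style (modeled loosely on the PostgreSQL text COPY conventions, including \N for NULL; check the real format's docs before relying on the details):

```python
def tsv_escape(value):
    if value is None:
        return r"\N"
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))

row = ["a\tb", "line1\nline2", None]
print("\t".join(tsv_escape(v) for v in row))
# Literal tabs now only ever separate fields; literal newlines only end records.
```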
Anyone with a love of CSV hasn't been asked to deal with CSV-injection prevention in an enterprise setting, without breaking various customer data formats.
That mostly breaks down to "excel is intentionally stupid with csv files if you don't use the import function to open them" along with the normal "don't trust customer input without stripping or escaping it" concerns you'd have with any input.
That was my initial reaction as well – it's a vulnerability in MS software, not ours, not our problem. Unfortunately, reality quickly came to bear: our customers and employees ubiquitously use excel and other similar spreadsheet software, which exposes us and them to risk regardless where the issue lies. We're inherently vulnerable because of the environment we're operating in, by using CSV.
"don't trust customer input without stripping or escaping it" feels obvious, but I don't think it stands up to scrutiny. What exactly do you strip or escape when you're trying to prevent an unknown multitude of legacy spreadsheet clients that you don't control from mishandling data in an unknown variety of ways? How do you know you're not disrupting downstream customer data flows with your escaping? The core issue, as I understand it, stems from possible unintended formula execution – which can be prevented by prefixing certain cells with a space or some invisible character (mentioned in the linked post above). This _does_ modify customer data, but hopefully in a way that unobtrusive enough to be acceptable. All in all, it seems to be a problem without a perfect solution.
Hey, I'm the author of the linked article, cool to see this is still getting passed around.
Definitely agree there's no perfect solution. There's some escaping that seems to work ok, but that's going to break CSV-imports.
An imperfect solution is that applications should be designed with task-driven UIs so that they know the intended purpose of a CSV export and can make the decision to escape/not escape then. Libraries can help drive this by designing their interfaces in a similar manner. Something like `export_csv_for_eventual_import()`, `export_csv_for_spreadsheet_viewing()`.
Another imperfect solution would be to ... ugh...generate exports in Excel format rather than CSV. I know, I know, but it does solve the problem.
Or we could just get everyone in the world to switch to emacs csv-mode as a csv viewer. I'm down with that as well.
Appreciate your work! Your piece was pivotal in changing my mind about whether this should be considered in our purview to address.
The intention-based philosophy of all this makes a lot of sense, was eye-opening, and I agree it should be the first approach. Unfortunately, after considering our use cases, we quickly realized that we'd have no way of knowing how customers intend to use the CSV exports they've requested - we've talked to some of them and it's a mix. We could approach things case by case, but we really just want a setup which works well 99% of the time and mitigates known risk. We settled on the prefixing approach and have yet to receive any complaints about it, specifically using a space character with the idea that something unobtrusive (eg. easily strippable) but also visible would be best - to avoid quirks stemming from something completely hidden.
Thanks again for your writing and thoughts; like I said above, I haven't found much else of quality on the topic.
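For concreteness, the prefixing approach amounts to something like this (which characters count as "formula triggers" is a judgment call, and the altered value still has to be acceptable downstream):

```python
FORMULA_TRIGGERS = ("=", "+", "-", "@", "\t", "\r")

def defuse(cell: str) -> str:
    # Prefix a space so spreadsheet clients read the cell as text, not a formula.
    return " " + cell if cell.startswith(FORMULA_TRIGGERS) else cell

print(repr(defuse("=1+2")))           # ' =1+2'
print(repr(defuse("ordinary text")))  # unchanged
```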
I’ve almost always found the simple way around Excel users not knowing how to safely use CSV files is to just give the file another extension: I prefer .txt or .dat
Then, the user doesn’t have Excel has the default program for opening the file and has to jump through a couple safety hoops
If your customers and employees are using Excel then stop going against the grain with your niche software developer focused formats that need a lot of explanations.
I need to interface with a lot of non-technical people who exclusively use Excel. I give them .xlsx files. It's just as easy to export .xlsx as it is to export .CSV and my customers are happy.
How is .csv a niche dev-focused format? Our customers use our exports for a mix of purposes, some of them involving spreadsheet clients (not just excel) and some of them integrating with their own data pipelines. Csv conveniently works with these use cases across the board, without explanation, and is inconveniently saddled with these legacy security flaws in Excel (and probably other clients).
If xlsx works for all your use cases that's great, a much better solution than trying to sidestep these issues by lightly modifying the data. It's not an option for us, and (I'd imagine) a large contingent of export tools which can't make assumptions about downstream usage.
Someone filed a bug report on a project I work on, saying that it was a security vulnerability that we don't prefix cell values with a single quote (') when the cell content contains certain values like an equal sign (=). They said this can cause Excel to evaluate the content and potentially run unsafe code.
I responded that this was Excel's problem, not ours, and that nobody would assign a CVE to our product for such a "vulnerability". How naive I was! They forwarded me several such CVEs assigned to products that create CSVs that are "unsafe" for Excel.
I agree with the characterization ("security theater") of these bug reports. The problem is that the intentions of these reports don't make the potential risk less real, depending on the setting, and I worry that the "You're just looking for attention" reaction (a very fair one!) leads to a concerning downplaying of this issue across the web.
As a library author, I agree this very well may not be something that needs to be addressed. But as someone working in a company responsible for customers, employees, and their sensitive information, disregarding this issue disregards the reality of the tools these people will invariably use, downstream of software we _are_ responsible for. Aiming to make this downstream activity as safe as possible seems like a worthy goal.
I didn't know about the formula injection, I just knew that Excel and Sheets mangle my dates every time and it drives me bonkers. Why is that the default? It makes no sense.
The best part about CSV: anyone can write a parser in 30 minutes, meaning that I can take data from the early '90s and import it into a modern web service.
The worst part about CSV: anyone can write a parser in about 30 minutes, meaning that it's very easy to get incorrect implementations, incorrect data, and other strange undefined behaviors. But to be clear, JSON and YAML also have issues with everyone trying to reinvent the wheel constantly. XML is rather ugly, but it seems to be the most resilient.
Until you find someone abusing XSD schemas, or someone designing a "dynamically typed" XML... or someone sneaking extra data into comments - that happened to me way more often than it should have.
You know what grinds my gears about using XSD for message definitions? Namespaces. Namespaces are a good idea and were done well in XML, as far as I can see, but with XSD you run into this [problem][1]:
Namespaces are used to qualify tags and attributes in XML elements. But they're also used by XSD to qualify the names of types defined in the schema. A sequence element's type is indicated by the value of its "type" attribute. The attribute value is a string that is the namespace-qualified name of the type.
So, if you want to change the alias of an XML namespace in an XSD schema, you can't just use your XML library's facilities for namespace management. You also have to go find the "type" attributes (but not all of the "type" attributes), parse their values, and do the corresponding alias change in the type name.
Don't use a string for a thing that is not a string! I guess in XML attributes you have no choice. XAML improved on the situation a bit.
I also love CSV for its simplicity. A key part of that love is that it comes from the perspective of me as a programmer.
Many of the criticisms of CSV I'm reading here boil down to something like: CSV has no authoritative standard, and everyone implements it differently, which makes it bad as a data interchange format.
I agree with those criticisms when I imagine them from the perspective of a user who is not also a programmer. If this user exports a CSV from one program, and then tries to load the CSV into a different program, but it fails, then what good is CSV to them?
But from the perspective of a programmer, CSV is great. If a client gives me data to load into some app I'm building for them, then I am very happy when it is in a CSV format, because I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file.
Parsing CSV is quick and fun if you only care about parsing one specific file. And that's the key: It's so quick and fun, that it enables you to just parse anew each time you have to deal with some CSV file. It just doesn't take very long to look at the file, write a row-processing loop, and debug it against the file.
The beauty of CSV isn't that it's easy to write a General CSV Parser that parses every CSV file in the wild, but rather that it's easy to write specific CSV parsers on the spot.
Going back to our non-programmer user's problem, and revisiting it as a programmer, the situation is now different. If I, a programmer, export a CSV file from one program, and it fails to import into some other program, then as long as I have an example of the CSV format the importing program wants, I can quickly write a translator program to convert between the formats.
There's something so appealing to me about simple-to-parse-by-hand data formats. They are very empowering to a programmer.
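For instance, a throwaway parser for one specific file (comma-delimited, quotes doubled inside quoted fields) is only a screenful of code; this is a sketch of the sort of thing meant, not a general-purpose CSV library:

```python
def parse(text):
    rows, row, field, in_quotes, i = [], [], [], False, 0
    while i < len(text):
        c = text[i]
        if in_quotes:
            if c == '"' and text[i + 1:i + 2] == '"':
                field.append('"'); i += 2; continue   # doubled quote
            if c == '"':
                in_quotes = False                     # closing quote
            else:
                field.append(c)
        elif c == '"':
            in_quotes = True
        elif c == ",":
            row.append("".join(field)); field = []
        elif c == "\n":
            row.append("".join(field)); rows.append(row); row, field = [], []
        elif c != "\r":
            field.append(c)
        i += 1
    if field or row:
        row.append("".join(field)); rows.append(row)
    return rows

assert parse('a,"b ""x"", c"\n1,2\n') == [["a", 'b "x", c'], ["1", "2"]]
```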
Totally agree that its biggest strength is how approachable it is for quick, ad hoc tooling. Need to convert formats? Join two datasets? Normalize a weird export? CSV gives you just enough structure to work with and not so much that it gets in your way.
> I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file
This is fine if you can hand-check all the data, or if you are okay if two offsetting errors happen to corrupt a portion of the data without affecting all of it.
Also I find it odd that you call it "easy" to write custom code to parse CSV files and translate between CSV formats. If somebody gives you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy." When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.
Like, I get that a farmer a couple hundred years ago would describe plowing a field with a horse as "easy," but given the emergence of alternatives, you wouldn't use the word in that context anymore.
> If somebody give you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy."
But it isn't that reliably easy with JSON. Sometimes I have clients give me data that I just have to work with, as-is. Maybe it was invalid JSON spat out by some programmer or tool long ago. Maybe it's just from a different department than my contact, which might delay things for days before the bureaucracy gets me a (hopefully) valid JSON.
I consider CSV's level of "easy" more reliable.
And even valid JSON can be less easy. I've had experiences where writing the high-level parsing for some JSON file, in terms of a JSON library, was less easy and more time-consuming than writing a custom CSV parser.
Subjectively, I think programming a CSV parser from basic programming primitives is just more fun and appealing than programming in terms of a JSON library or XML library. And I find the CSV code is often simpler and quicker to write.
> When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.
But the premise of CSV is so simple, that there are only four quirks to empirically discover: cell delimiter, row delimiter, quote, escaped-quote.
I think it's "easy" to peek at the file and say, "Oh, they use semicolon cell delimiters."
And it's likewise "easy" to write the "custom logic", which is about as simple as parsing something directly from a text stream gets. I typically have to stop and think a minute about the quoting, but it's not that bad.
If a programmer is practiced at parsing from a text stream (a powerful, general skill that is worth exercising), then I think it is reasonable to think they might find parsing CSV by hand to be easier and quicker than parsing JSON (etc.) with a library.
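The stdlib even automates part of the "peek at the file" step: csv.Sniffer guesses the dialect and whether the first row is a header. It's only a heuristic, so eyeballing the file is still worth it (mystery.csv is a stand-in name):

```python
import csv

with open("mystery.csv", newline="") as f:
    sample = f.read(4096)
    dialect = csv.Sniffer().sniff(sample, delimiters=",;\t")
    has_header = csv.Sniffer().has_header(sample)
    f.seek(0)
    rows = list(csv.reader(f, dialect))

print(dialect.delimiter, has_header, len(rows))
```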
I like CSV for the same reasons I like INI files. It's simple, text based, and there's no typing encoded in the format, it's just strings. You don't need a library.
They're not without their drawbacks, like no official standards etc, but they do their job well.
Similarly, I once loved schemaless data stores. They are so much simpler!
Until I worked quite a bit with them and realized that there's always a schema in the data, otherwise it's just random noise. The question is who maintains the schema, you or a DBMS.
Re. formats -- the usefulness comes from features (like format enforcement). E.g. you may skip .ini entirely and just go with lines in text files, but somewhere you still need to convert those lines into your data; there's no way around it, the question is who's going to do that (and report sane error messages).
My experience has indicated the exact opposite. CSVs are the only "structured" format nobody can claim to parse 100% (ok probably not true thinking about html etc, just take this as hyperbole.) Just use a well-specified format and save your brain-cells.
Occasionally, we must work with people who can only export to csv. This does not imply csv is a reasonable way to represent data compared to other options.
CSV works because CSV is understood by non technical people who have to deal with some amount of technicality. CSV is the friendship bridge that prevents technical and non technical people from going to war.
I can tell an MBA guy to upload a CSV file and I'll take care of it. Imagine I tell him I need everything in a PARQUET file!!! I'm no longer a team player.
Indeed, my main use is that most financial services will output your records in CSV, although I mostly open that in Excel, which sometimes gets a bit confused.
This is incorrect. Everyone uses Excel, not CSV. There are billions of people on this planet who know what to do with an .xlsx file.
Do the same with a .CSV file and you'll have to teach those people how to use the .CSV importer in Excel and also how to set up the data types for each column etc. It's a non trivial problem that forces you down to a few million people.
.CSV is a niche format for inexperienced software developers.
Among the shit I have seen in CSV: no quoting of strings, including strings containing a return char; innovative separators, date formats and number formats; no escaping of quotes within strings; extra rows added by the reporting tool used to export to CSV; etc.
I wish this was a joke. I'm always trying to convince data scientists with a foot in the open source world that their life will be so much better if they use parquet or Stata or Excel or any other kind of file but CSV.
On top of all the problems people mention here involving the precise definition of the format and quoting, it's outright shocking how long it takes to parse ASCII numbers into floating point. One thing that stuck with me from grad school was that you could do a huge number of FLOPS on a matrix in the time it would take to serialize and deserialize it to ASCII.
That's true, in recent years it's been less of a disaster, with lots of good csv libraries for various languages. In the 90s csv was a constant footgun, perhaps that's why they went crazy and came up with XML.
I've recently been developing a raspberry pi based solution which works with telemetry logs. First implementation used an SQLite database (with WAL log) – only to find it corrupted after just couple of days of extensive power on/off cycles.
I've since started looking at parquet files – which turned out to not be friendly to append-only operations. I've ended up implementing writing events into ipc files which then periodically get "flushed" into the parquet files. It works and it's efficient – but man is it non-trivial to implement properly!
My point here is: for a regular developer – CSV (or jsonl) is still the king.
> First implementation used an SQLite database (with WAL log) – only to find it corrupted after just couple of days of extensive power on/off cycles.
Did you try setting `PRAGMA synchronous=FULL` on your connection? This forces fsync() after writes.
That should be all that's required if you're using an NVMe SSD.
But I believe most microSD cards do not even respect fsync() calls properly and so there's technically no way to handle power offs safely, regardless of what software you use.
I use SanDisk High Endurance SD cards because I believe (but have not fully tested) that they handle fsync() properly. But I think you have to buy "industrial" SD cards to get real power fail protection.
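For illustration, a minimal sketch of those settings with Python's sqlite3 module (the table name and schema here are made up):

    import sqlite3

    # Durability-oriented settings for a telemetry logger.
    # PRAGMA synchronous=FULL forces an fsync() at every transaction commit;
    # whether the storage device actually honours fsync() is another matter.
    con = sqlite3.connect("telemetry.db")
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("PRAGMA synchronous=FULL")

    with con:  # commits (and syncs) on exit
        con.execute("CREATE TABLE IF NOT EXISTS events(ts REAL, payload TEXT)")
        con.execute("INSERT INTO events VALUES (?, ?)", (1700000000.0, "boot"))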
There's definitely a place for it. I ran into the same problem with a battery powered event logger. Basically alternate between sleep-until-event and sample-until-event-over.
SQLite was fine until the realities of that environment hit.
0) I need to save the most data over time and my power budget is unpredictable due to environmentals.
1) When should I commit? SQLite commit per insert slows down, impacts battery life, impacts sample rate. Practically you could get away with batching all data for a small period.
2) SQLite is slow to repair databases. A partially written file would often take longer to repair than we had battery to run.
CSV based format filled that niche. First column was line-column count to support firmware upgrades. Last column is line-checksum. Another column indicating if this line was the last for an event. Parser skips corrupted lines/entries.
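As an illustration of the scheme described above, a hypothetical sketch (field layout invented, CRC32 used as the line checksum; assumes the fields themselves contain no commas):

    import zlib

    # First field is the per-line field count, last field is a checksum over
    # everything before it. A reader skips any line whose count or checksum
    # doesn't match instead of aborting.
    def write_line(fields):
        body = ",".join([str(len(fields) + 2)] + fields)
        return f"{body},{zlib.crc32(body.encode()):08x}\n"

    def read_line(line):
        body, _, crc = line.rstrip("\n").rpartition(",")
        fields = body.split(",")
        if f"{zlib.crc32(body.encode()):08x}" != crc or int(fields[0]) != len(fields) + 1:
            return None  # corrupted or truncated: skip
        return fields[1:]

    line = write_line(["1700000001.25", "23.4", "event_end"])
    print(read_line(line))        # ['1700000001.25', '23.4', 'event_end']
    print(read_line(line[:-10]))  # None (line truncated by a power cut)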
> I've since started looking at parquet files – which turned out to not be friendly to append-only operations. I've ended up implementing writing events into ipc files which then periodically get "flushed" into the parquet files. It works and it's efficient – but man is it non-trivial to implement properly!
I think the industry standard for supporting this is something like iceberg or delta, it's not very lightweight, but if you're doing anything non-trivial, it's the next logical move.
The JSON version is only marginally bigger (just a few brackets), but those brackets represent the ability to be either simple or complex. This matters because you wind up with terrible ad-hoc nesting in CSV ranging from entries using query string syntax to some entirely custom arrangement.
And in these cases, JSON's objects are WAY better.
Because CSV is so simple, it's common for developers to avoid using a parsing/encoding library. Over the years, I've run into this particular kind of issue a bunch.
["val1", "val2", "unexpected,comma", "valN"].join(',')
//outputs `val1,val2,unexpected,comma,valN` which has one too many items
JSON parsers will not only output the expected values every time, but your language likely uses one of the super-efficient SIMD-based parsers under the surface (probably faster than what you are doing with your custom CSV parser).
Another point is standardization. Does that .csv file use commas, spaces, semicolons, pipes, etc.? Does it use CR, LF, or CRLF? Does it allow escaping quotations? Does it allow quotations to escape commas? Is it utf-8, UCS-2, or something different? JSON doesn't have these issues because these are all laid out in the spec.
JSON is typed. Sure, it's not a LOT of types, but 6 types is better than none.
While JSON isn't perfect (I'd love to see an official updated spec with some additional features), it's generally better than CSV in my experience.
They're completely skipping over the complications of header rows and front matter.
"8. Reverse CSV is still valid CSV" is not true if there are header rows for instance.
But really, whether or not CSV is a good format or not comes down to how much control you have over the input you'll be reading. If you have to deal with random CSV from "in the wild", it's pretty rough. If you have some sort of supplier agreement with someone that's providing the data, or you're always parsing data from the same source, it's pretty fine.
I am annoyed that comma won out as the separator. Tab would have been a massively better choice. Especially for those of us who have discovered and embraced elastic tabstops. Any slightly large CSV is unreadable and uneditable because you can't easily see where the commas are, but with tabs and elastic tabstops, the whole thing is displayed as a nice table.
(That is, of course, assuming the file doesn't contain newlines or other tabs inside of fields. The format should use \t \n etc for those. What a missed opportunity.)
I wrote a web scraper for some county government data and went for tabs as well. It's nice how the columns lined up in my editor (some of these files had hundreds of thousands of lines).
And all kinds of other weirdness, right in ascii. Vertical tabs, LOL. Put those in filenames on someone else's computer if you want to fuck with them. Linux and its common file systems are terrifyingly permissive in the character set they allow for file names.
CSV has multiple different separators.
E.g. Excel defaults to different separators based on locale. In the CZ locale, for example, a comma is used as the decimal separator instead of a dot, so CSV uses a semicolon as the default field separator.
At various points in my career, I've had to oversee people creating data export features for research-focused apps. Eventually, I instituted a very simple rule:
As part of code review, the developer of the feature must be able to roundtrip export -> import a realistic test dataset using the same program and workflow that they expect a consumer of the data to use. They have up to one business day to accomplish this task, and are allowed to ask an end user for help. If they don't meet that goal, the PR is sent back to the developer.
What's fascinating about the exercise is that I've bounced as many "clever" hand-rolled CSV exporters (due to edge cases) as other more advanced file formats (due to total incompatibility with every COTS consuming program). All without having to say a word of judgment.
Data export is often a task anchored by humans at one end. Sometimes those humans can work with a better alternative, and it's always worth asking!
As someone who likes modern formats like parquet, when in doubt, I end up using CSV or JSONL (newline-delimited JSON). Mainly because they are plain-text (fast to find things with just `grep`) and can be streamed.
Most features listed in the document are also shared by JSONL, which is my favourite format. It compresses really well with gzip or zstd. Compression removes some plain-text advantages, but ripgrep can search compressed files too. Otherwise, you can:
zcat data.jsonl.gz | grep ...
Another advantage of JSONL is that it's easier to chunk into smaller files.
Too bad xz/lzma isn't supported in these formats. I often get pretty big improvements in compression ratio. It's slower, but it can be parallelized too.
CSV still quietly powers the majority of the world’s "data plumbing."
At any medium+ sized company, you’ll find huge amounts of CSVs being passed around, either stitched into ETL pipelines or sent manually between teams/departments.
It’s just so damn adaptable and easy to understand.
For example, one of the CSVs my company shovels around is our Azure billing data. There are several columns that I just have absolutely no idea what the data in them is. There are several columns we discovered are essentially nullable¹ The Hard Way, when we got a bill that included, e.g., a charge that I guess Azure doesn't know what day it occurred on? (Or almost anything else about it.)
(If this format is documented anywhere, well, I haven't found the docs.)
Values like "1/1/25" in a "date" column. I mean, I did say it was an Azure-generated CSV, so obviously the bar wasn't exactly high, but then it never is, because anyone wanting to build something with some modicum of reliability, or discoverability, is sending data in some higher-level format, like JSON or Protobuf or almost literally anything but CSV.
If I can never see the format "JSON-in-CSV-(but-we-fucked-up-the-CSV)" ever again, that would spark joy.
(¹after parsing, as CSV obviously lacks "null"; usually, "" is a serialized null.)
I've recently written a library at work to run visitors on data models bound to data sets. One of these visitors is a CSV serializer that dumps a collection as a CSV document.
I've just checked, and strings are escaped using the same mechanism as for JSON, with backslashes. I should've double-checked against RFC 4180, but thankfully that mechanism isn't currently triggered anywhere for CSV (it's used for log exportation, and none of the data there triggers that code path). I've also checked the code from other teams, and it's just handwritten C++ stream statements inside a loop that doesn't even try to escape data. It also happens to be fine for the same reason (log exportation).
I've also written serializers for JSON, BSON and YAML and they actually output spec-compliant documents, because there's only one spec to pay attention to. CSV isn't a specification, it's a bunch of loosely-related formats that look similar at a glance. There's a reason why fleshed-out CSV parsers usually have a ton of knobs to deal with all the dialects out there (and I've almost added my own by accident), that's simply not a thing for properly specified file formats.
I work as a data engineer in the financial services industry, and I am still amazed that CSV remains the preferred delivery format for many of our customers. We're talking datasets that cost hundreds of thousands of dollars to subscribe to.
"You have a REST API? Parquet format available? Delivery via S3? Databricks, you say? No thanks, please send us daily files in zipped CSV format on FTP."
Requires AWS credentials (api access token and secret key? iam user console login? sso?), AWS SDK, manual text file configuration, custom tooling, etc. I guess with Cyberduck it's easier, but still...
> Databricks
I've never used it but I'm gonna say it's just as proprietary as AWS/S3 but worse.
Anybody with Windows XP can download, extract, and view a zipped CSV file over FTP, with just what comes with Windows. It's familiar, user-friendly, simple to use, portable to any system, compatible with any program. As an almost-normal human being, this is what I want out of computers. Yes the data you have is valuable; why does that mean it should be a pain in the ass?
This is particularly funny because I just received a ticket saying that the CSV import in our product doesn't work. I asked for the CSV, and it uses a semicolon as a delimiter. That's just what their Excel produced, apparently. I'm taking their word for it because... Excel.
To me, CSV is one of the best examples of why Postel's Law is scary. Being a liberal recipient means your work never ends because senders will always find fun new ideas for interpreting the format creatively and keeping you on your toes.
Of course, because there are locales which use a comma as the decimal separator. So CSV in Excel then defaults to semicolons.
Another piece of Microsoft BS: they should default to the ENG locale for CSV, do the translation in the background, and let the user choose if they want to save with a different separator. Excel in every part of the world should produce the same CSV by default.
Bunch of idiots.
There is a lot not to like about CSV, for all the reasons given here. The only real positive is that you can easily create, read and edit CSV in an editor.
Personally I think we missed a trick by not using the ASCII US and RS characters:
Columns separated by \u001F (ASCII unit separator).
Rows separated by \u001E (ASCII record separator).
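A minimal sketch of what that looks like in practice (assuming, as noted elsewhere in the thread, that the data itself never contains these control characters):

    US, RS = "\x1f", "\x1e"   # ASCII unit (field) and record separators

    rows = [["id", "name", "note"],
            ["1", "Doe, Jane", 'Says "hi"']]   # commas and quotes need no escaping

    encoded = RS.join(US.join(fields) for fields in rows)
    decoded = [record.split(US) for record in encoded.split(RS)]
    assert decoded == rows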
Good idea, but probably a non-starter due to no keyboard keys for those characters. Even | would've been a better character to use since it almost never appears in common data.
In the abstract, CSV is great. In reality, it is a nightmare, not because of CSV, but because of all the legacy tools that produce it in slightly different ways (different character encodings mostly - Excel still produces some variant of latin1, some tools drop a BOM in your UTF8, etc).
Unless you control the producer of the data you are stuck trying to infer the character encoding and transcoding to your destination, and there's no foolproof way of doing that.
100% agree. TSV is under-rated. Tabs don't naturally occur in data nearly as often as commas so tabs are a great delimiter. Copy paste into Excel also works much better with tabs.
Code editors may convert tabs to spaces but are you really editing and saving TSV data files in your code editor?
TSV's big advantage is that, as commonly implemented, the separators are escaped, not quoted. This means that a literal newline (ASCII 0x0A) is always a record separator and a literal tab (ASCII 0x09) is always a field separator. This is the format many databases -- including Postgres -- use for text export by default.
The problem with using TSV is different user configuration. For ex. if I use vim then Tab might indeed be a '\t' character but in TextEdit on Mac it might be something different, so editing the file in different programs can yield different formatting. While ',' is a universal char present on all keyboards and formatted in a single way
Functionally the same. I'd prefer CSV if my content was likely to have whitespace in it and didn't want billion quotes. I'd prefer TSV if my content was unlikely to have whitespace, and more likely to contain commas.
The problem with TSV is what are you going to do about quotes. Some fields might contain them [1] or they might be needed to store fields with tabs inside them.
Because of this in order to read a plain simple TSV (fields separated by tabs, nothing more) with the Python csv module [2] you need to set the quote character to an improbable value, say € (using the euro sign because HN won't let me use U+1F40D), or just parse it by hand, e.g. row.split('\t').
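For what it's worth, a small sketch of both options; passing quoting=csv.QUOTE_NONE has the same effect as choosing an improbable quote character, i.e. the reader leaves quote characters alone:

    import csv, io

    data = 'He said "hi"\t3"5\nplain\tfield\n'   # quotes are just data here

    # The csv module, told not to treat any character as a quote:
    rows = list(csv.reader(io.StringIO(data), delimiter="\t", quoting=csv.QUOTE_NONE))

    # Or parse it by hand:
    rows_by_hand = [line.split("\t") for line in data.splitlines()]

    assert rows == rows_by_hand == [['He said "hi"', '3"5'], ['plain', 'field']]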
Agreed, much easier to work with, especially if you can guarantee no embedded tabs or newlines. Otherwise you end up with backslash escaping, but that's still usually easier than quotes.
How much easier would all of this be if whoever did CSV first had done the equivalent of "man ascii". There are all these wonderful codes there like FS, GS, RS, US that could have avoided all the hassle that quoting has brought generations of programmers and data users.
I think for "untyped" files with records, using the ASCII file, (group) and record separators (hex 1C, 1D and 1E) work nicely. The only constraint is that the content cannot contain these characters, but I found that that is generally no problem in practice. Also the file is less human readable with a simple text editor.
For other use cases I would use newline-separated JSON. It has most of the benefits described in the article, except the uncompressed file size.
I agree that JSONL is the spiritual successor of CSV with most of the benefits and almost none of the drawbacks.
It has a downside though: wherever JSON itself is used, it tends to be a few kilobytes at least (from an API response, for example). If you collect those in a JSONL file the lines tend to get verrrry long and difficult to edit. CSV files are more compact.
JSONL files are a lot easier to work with though. Less headaches.
Honestly yes. If text editors would have supported these codes from the start, we might not even have XML, JSON or similar today. If these codes weren't "binary" and all scary, we would live in much different world.
I wonder how much we have been hindered ourselves by reinventing plain text human-readable formats over the years. CSV -> XML -> JSON -> YAML and that's just the top-level lineage, not counting all the branches everywhere out from these. And the unix folks will be able to name plenty of formats predating all of this.
I'm not really sure why "Excel hates CSV". I import into Excel all the time. I'm sure the functionality could be expanded, but it seems to work fine. The bit of the process I would like improved is nothing to do with CSV - it's that the exporting programs sometimes rearrange the order of fields, and you have to accommodate that in Excel after the import. But since you can have named columns in Excel (make the data in to a table), it's not a big deal.
One problem is that Excel uses locale settings for parsing CSV files (and, to be fair, other text files). So if you're in e.g. Europe and you've configured Excel to use commas as decimal separators, Excel imports numbers with decimals (with points as decimal separator) as text. Or it thinks the point is a thousands separator. I forgot exactly which one of those incorrect options it chooses.
I don't know what they were thinking, using a UI setting for parsing an interchange format.
There's a way around it, IIRC, with the "From text / csv" command, but that loses a lot of the convenience of double-clicking a CSV file in Explorer or whatever to open it in Excel.
Excel is halfway decent if you do the 'import' but not if you just doubleclick on them. It seems to have been programmed to intentionally do stupid stuff with them if you just doubleclick on them.
In the past, I remember Excel not properly handling UTF-8 encoded text in a CSV. It would treat it as raw ASCII (or possibly code page 1252). So if you opened and saved a CSV, it would corrupt any Unicode text in the file. It's possible this has been fixed in newer versions, I haven't tried in a while.
I have repeatedly seen people getting the spreadsheets altered by excel, and in general a lot of troubles due to localisation reasons. Sometimes these changes can be subtle and be hard to spot until somebody tries to troubleshoot what went wrong down the line.
It works better if you click "import the data" instead of just opening the csv file with it, and if you then choose the right data types. But having to do this every time to make it work is really annoying, especially when you have a lot of columns, plus people can easily get confused by the data types. I have never seen that much confusion with, e.g., macOS's Numbers.
It's an ad hoc text format that is often abused and a last-chance format for interchange. While heuristics can frequently work at determining the structure, they can just as easily frequently fail. This is especially true when dealing with dates and times or other locale-specific formats. Then, people outright abuse it by embedding arrays or other such nonsense.
You can use CSV for interchange, but a DuckDB import script with the schema should accompany it.
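A hedged sketch of what such an accompanying import script might look like, here via DuckDB's Python API (file, table, and column names are made up):

    import duckdb

    # Hypothetical schema: explicit column types stop the CSV reader from
    # guessing (e.g. zip codes silently becoming integers).
    con = duckdb.connect("warehouse.duckdb")
    con.execute("""
        CREATE TABLE customers AS
        SELECT * FROM read_csv('customers.csv',
            header = true,
            delim  = ',',
            columns = {'id': 'INTEGER', 'name': 'VARCHAR', 'zip': 'VARCHAR'})
    """)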
One thing that has changed the game with how I work with CSVs is ClickHouse. It is trivially easy to run a local database, import CSV files into a table, and run blazing-fast queries on it. If you leave the data there, ClickHouse will gradually optimize the compression. It's pretty magical stuff if you work in data science.
CSV on the Web (CSVW) is a W3C standard designed to enable the description of CSV files in a machine-readable way.[1]
"Use the CSV on the Web (CSVW) standard to add metadata to describe the contents and structure of comma-separated values (CSV) data files." — UK Government
Digital Service[2][3]
I wrote my own CSV parser in C++. I wasn't sure what to do in some edge cases, e.g. when character 1 is space and character 2 is a quote. So I tried importing the edge case CSV into both MS Excel and Apple Numbers. They parsed it differently!
CSV is so deceptively simple that people don't care to understand it. I wasted countless hours working around services providing non-escaped data that off-the-shelf parsers could not parse.
I have written a new database system that will convert CSV, JSON, and XML files into relational tables.
One of the biggest challenges with CSV files is the lack of data types on the header line that could help determine the schema for the table.
For example a file containing customer data might have a column for a Zip Code. Do you make the column type a number or a string? The first thousand rows might have just 5 digit numbers (e.g. 90210) but suddenly get to rows with the expanded format (e.g. 12345-1234) which can't be stored in an integer column.
If CSV is indeed so horrible - and I do not deny that there can be an improvement - how about the clever data people spec out a format that
Does not require a bizarre C++ RPC struct definition library _both_ to write and to read
Does not invent a clever number encoding scheme that requires native code to decode at any normal speed
Does not use a fancy compression algorithm (or several!) that you need - again - native libraries to decompress
Does not, basically, require you be using C++, Java or Python to be able to do any meaningful work with it
It is not that hard, really - but CSV is better (even though it's terrible) exactly because it does not have all of these clever dependency requirements for clever features piled onto it. I do understand the utility of RLE, number encoding etc. I do not, and will not, understand the utility of Thrift/Avro, zstandard and brotli and whatnot over standard deflate, and custom integer encoding which requires you download half of Apache Commons and libboost to decode. Yes, those help the 5% to 10% of the use cases where massive savings can be realised. It absolutely ruins the experience for the other 90 to 95.
But they also give Parquet and its ilk a very high barrier of entry.
# Mangle your data with dplyr, regular expressions, search and replace, drop NA's, you name it.
<code to sanitize all your data>
Multiple libraries exist for R to move data around, change the names of entire columns, change values in every single row with regular expressions, drop any values that have no assigned value, it's the swiss army knife of data. There are also all sorts of things you can do with data in R, from mapping with GPS coordinates to complex scientific graphing with ggplot2 and others.
Funny how the "specification holds in a tweet" yet manages to miss at least three things: 1) character encoding, 2) BOM or not, 3) header or no header.
I've found some use cases where CSV can be a good alternative to arrays for storage, search and retrieval. Storing and searching nested arrays in document databases tends to be complicated and require special queries (sometimes you don't want to create a separate collection/table when the arrays are short and 1D). Validating arrays is actually quite complicated; you have to impose limits not only on the number of elements in the array, but also on the type and size of elements within the array. Then it adds a ton of complexity if you need to pass around data because, at the end of the day, the transport protocol is either string or binary; so you need some way to indicate that something is an array if you serialize it to a string (hence why JSON exists).
Reminds me of how I built a simple query language which does not require quotation marks around strings, this means that you don't need to escape strings in user input anymore and it prevents a whole bunch of security vulnerabilities such as query injections. The only cost was to demand that each token in the query language be separated by a single space. Because if I type 2 spaces after an operator, then the second one will be treated as part of the string; meaning that the string begins with a space. If I see a quotation mark, it's just a normal quotation mark character which is part of the string; no need to escape. If you constrain user input based on its token position within a rigid query structure, you don't need special escape characters. It's amazing how much security has been sacrificed just to have programming languages which collapse space characters between tokens...
It's kind of crazy that we decided that quotation marks are OK to use as special characters within strings, but commas are totally out of bounds... That said, I think Tab Separated Values TSV are even more broadly applicable.
the simplicity is underappreciated because people don't realise how many dumb data engineers there are. i'm pretty sure most of them can't unpack an xml or json. people see a csv and think they can probably do it themselves, any other data format they think 'gee better buy some software with the integration for this'.
Excel hates CSV only if you don't use the "From text / csv" function (under the data tab).
For whatever reason, it flawlessly manages to import most CSV data using that functionality. It is the only way I can reliably import data to excel with datestamps / formats.
Just drag/dropping a CSV file onto a spreadsheet, or "open with excel" sucks.
"CSV" should die. The linked article makes critical ommisions and is wrong about some points. Goes to show just how awful "CSV" is.
For one thing, it talks about needing only to quote commas and newlines... qotes are usually fine... until they are on either side of the value. then you NEED to quote them as well.
Then there is the question about what exactly "text" is; with all the complications around Unicode, BOM markers, and LTR/RTL text.
The Italian version of Excel uses a custom CSV style with ; as a column separator. This breaks many applications that accept CSVs. It's super annoying.
>This is so simple you might even invent it yourself without knowing it already exists while learning how to program.
This is a double-edged sword. The "you might even invent it yourself" simplicity means that in practice lots of different people do end up just inventing their own version rather than standardizing to RFC 4180 or whatever when it comes to "quote values containing commas", values containing quotes, values containing newlines, etc. And the simplicity means these types of non-standard implementations can go completely undetected until a problematic value happens to be used. Sometimes added complexity that forces paying more attention to standards and quickly surfaces a divergence from those standards is helpful.
Just last week I was bitten by a customer's CSV that failed due to Windows' invisible BOM character (U+FEFF) that sometimes occurs at the beginning of Unicode text files. The first column's title is then not „First Title" but „\uFEFFFirst Title". Imagine how long it takes before you catch that invisible character.
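For anyone else bitten by this: Python's "utf-8-sig" codec strips a leading BOM on read, which is a cheap way to defuse it (a minimal sketch, file name made up):

    import csv

    # "utf-8-sig" drops a leading BOM if present, so the first header stays
    # "First Title" instead of "\ufeffFirst Title".
    with open("customer_export.csv", encoding="utf-8-sig", newline="") as f:
        header = next(csv.reader(f))
    print(header)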
Aside from that: yes, if CSV were an intentional, defined format, most of us would do something different here and there. But it is not; it is more of a convention that came upon us. CSV „happened", so to say. No need to defend it more passionately than the fact that we walk on two legs. It could have been much worse, and it has surprising advantages over other things that were well thought out before we did it.
CSV is the bane of my existence. There is no reason to use it outside of legacy use-cases, when so many alternatives are not so brittle that they require endless defensive hacks to avoid erring as soon as exposed to the universe. CSV must die.
CSV isn't brittle, and I'm not sure what "hacks" you're referring to. If you or your parser just follow RFC 4180 (in particular: quote every field, and double any quote characters to escape them), that will get you 90%+ compatibility.
CSV is awesome for front-end webapps needing to fetch A LOT of data from a server in order to display an information-dense rendering. For that use-case, one controls both sides so the usual serialization issues aren't a problem.
There was/is CSVY [0] which attempted to put column style and separator information in a standard header. It is supported by R lang.
I also asked W3C on their GitHub if there was any spec for CSV headers and they said there isn't [1].
Kind of defeats the point of the spec in my opinion.
Look at how it handles escaping of special characters and particularly new lines (RFC 4180 doesn’t guarantee that a new line is a new record) and how it’s written in 2005 yet still doesn’t handle unicode other than via a comment about ”other character sets”.
The fact that you can parse CSV in reverse is quite cool, but you can't necessarily use it for crash recovery (as suggested) because you can't be sure that the last thing written was a complete record.
Working in clinical trial data processing I receive data in 1 of 3 formats:
csv, sas datasets, image scans of pdf pages showing spreadsheets
Of these 3 options sas datasets are my preference, but I'll immediately convert to csv or excel. csv is a close 2nd: once you confirm the quoting / separator conventions it's very easy to parse. I understand why someone may find the csv format disagreeable, but in my experience the alternatives can be so much worse that I don't worry too much about csv files.
I love CSV for a number of reasons. Not the least of which it’s super easy to write a program (code) in C to directly output all kinds of things to CSV. You can also write simple middleware to go from just about any database or just general “thing” to CSV. Very easily. Then toss CSV into excel and do literally anything you want.
It’s sort of like, the computing dream when I was growing up.
+1 to ini files. I like you can mess around with them yourself in notepad. Wish there was a general outline / structure to those though.
Items #6 to #9 sound like genuine trolling to me; item #8, reversing bytes because of course no other text encodings than ASCII exist, is particularly horrible.
the reversing bytes part is encoding agnostic. you just feed the reversed bytes to the csv parser, then re-reverse both the yielded rows and the cells' bytes, and you get back the original order of the bytes themselves.
I have just been splitting my head trying to parse data from an ERP database to csv and then from csv back into the ERP database, using the programming language used by the ERP system.
The first part, converting data to csv, works fine with the help of an AI coding assistant.
The reverse part, csv to database, is getting challenging, and even Claude Sonnet 3.7 is not able to escape newlines correctly.
I am now implementing the data format in JSON, which is much simpler.
I think I understand the point being made, but all this reliance on text-based data means we require proper agreement on text encodings, etc. I don't think it's very useful for number-based data anyway, it's a massively bloated way to store float32s for instance and usually developers truncate the data losing about half of the precision in the process.
For numerical data, nothing beats packing floats into blobs.
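A minimal sketch of that, using Python's array module to pack 64-bit floats into a blob and back without any text round-trip:

    import array

    values = [3.14159, 2.71828, 1.41421]

    # Pack as contiguous 64-bit floats: 8 bytes per value, no parsing,
    # no precision loss from round-tripping through decimal text.
    blob = array.array("d", values).tobytes()

    restored = array.array("d")
    restored.frombytes(blob)
    assert list(restored) == values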
I think binary formats have many advantages. Not only for numbers but other data as well, including data that contains text (to avoid needing escaping, etc; and to declare what character sets are being used if that is necessary), and other structures. (For some of my stuff I use a variant of DER, which adds a few new types such as key/value list type.)
I can't really understand the love for the format. Yes, it's simple, but it's also not defined in a common spec. Same story with Markdown. Yes, GitHub tried to push for a spec, but it still feels more like a flavor. I mean, there is nothing wrong with not having a spec. But certain guarantees are not given. Will the document exported by X work with Y?
I love CSV when it's only me creating/using the CSV. It's a very useful spreadsheet/table interchange format.
But god help you if you have to accept CSVs from random people/places, or there's even minor corruption. Now you need an ELT pipeline and manual fix-ups. A real standard is way better for working with disparate groups.
Quick question while we’re on the topic of CSV files: is there a command-line tool you’d recommend for handling CSV files that are malformed, corrupted, or use unexpected encodings?
My experience with CSVs is mostly limited to personal projects, and I generally find the format very convenient. That said, I occasionally (about once a year) run into issues that are tricky to resolve.
CSV has caused me a lot of problems due to the weak type system. If I save a Dataframe to CSV and reload it, there is no guarantee that I'll end up with an identical dataframe.
I can depend on parquet. The only real disadvantages with parquet are that they aren't human-readable or mutable, but I can live with that since I can easily load and resave them.
I'm in on the "shit on microsoft for hard to use formats train" but as someone who did a LOT of .docx parsing - it turned into zen when I realized that I can just convert my docs into the easily parsed .html5 using something like pandoc.
This is a good blog post and Xan is a really neat terminal tool.
Excel won't import ISO 8601 timestamps either, which is crazy these days where it's the universal standard, and there's no excuse to use anything else.
You have to replace the "T" separator with a space and also any trailing "Z" UTC suffix (and I think any other timezone/offset as well?) for Excel to be able to parse as a time/date.
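A minimal sketch of that workaround (it ignores numeric offsets like +02:00, which would need stripping too):

    # Turn "2024-05-01T13:45:30Z" into "2024-05-01 13:45:30" before handing
    # the column to Excel.
    def excel_friendly(ts: str) -> str:
        ts = ts.replace("T", " ")
        return ts[:-1] if ts.endswith("Z") else ts

    print(excel_friendly("2024-05-01T13:45:30Z"))  # 2024-05-01 13:45:30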
I don't get it - why in the world can't Excel just open the CSV, assume from the extension that it's COMMA separated values, and do the rest?
It does work slightly better when importing, just a little.
This. I got burnt by the encoding and other issues with CSV in Excel back in the day, I've only used LibreOffice Calc (on Linux) for viewing / editing CSVs for many years now, it's almost always a trouble-free experience. Fortunately I don't deal much with CSVs that Excel-wielding non-devs also need to open these days - I assume that, for most folks, that's the source of most of their CSV woes.
I just wish Excel was a little less bad about copy-pasting CSVs as well. Every single time, without fail, it dumps them into a single column. Every single time I use "text to columns" it has insane defaults where it's fixed-width instead of delimited by, you know, commas. So I change that and finally it's fixed.
Then I go do it somewhere else and have to set it up all over again. Drives me nuts. How the default behavior isn't to just put them in the way you'd expect is mind-boggling.
It makes me a bit worried to read this thread. I would've thought it's pretty common knowledge why CSV is horrible, and widely agreed upon. I also have a hard time taking anybody seriously who uses "specification" and "CSV" in the same sentence unironically.
I suspect it's 1) people who worked with legacy systems AND LIKED IT, or 2) people who never worked with legacy systems before and need to rediscover painful old lessons for themselves.
It feels like trying, unsuccessfully, to convince someone in 1999 that it's a bad idea to store the year as a CHAR(2).
So easy to get data in and out of an application, opens seamlessly in Excel or your favorite DB for further inspection. The only issue is the comma rather than a less-used separator like | that occasionally causes issues.
Any recommendations for CSV editors on OSX? I was just looking around for this today. The "Numbers" app is pretty awful and I couldn't find any superb substitutes, only ones that were just OK.
I've said it before, CSV will still be used in 200 years. It's ugly, but it occupies an optimal niche between human readability, parsing simplicity, and universality.
JSON, XML, YAML are tree describing languages while CSV defines a single table. This is why CSV still works for a lot of things (sure there is JSON lines format of course).
and of course you can do that if you wish, as many CSV libraries allow arbitrary separators and escapes (though they usually default to the "excel compatible" format)
but at least in my case, I would not like to use those characters because they are cumbersome to work with in a text editor. I like very much to be able to type out CSV columns and rows quickly, when I need to.
with quick and dirty bash stuff ive written the same csv parser so many times it lives in my head and i can write it from memory. no other format is like that. trying to parse json without jq or a library is much more difficult
Kudos for writing this, it's always worth flagging up the utility of a format that just is what it is, for the benefit of all. Commas can also create fun ambiguity, as that last sentence demonstrates. :P
CSV is lovely. It isn't trying to be cool or legendary. It works for the reasons the author proposes, but isn't trying to go further.
I work in a world of VERY low power devices and CSV sometimes is all you need for a good time.
If it doesn't need to be complicated, it shouldn't be. There are always times when I think to myself CSV fits and that is what makes it a legend. Are those times when I want to parallelise or deal with gigs of data in one sitting. Nope. There are more complex formats for that. CSV has a place in my heart too.
Thanks for reminding me of the beauty of this legendary format... :)
I'll repeat what I say every time I talk about CSV: I have never encountered a customer who insisted on integrating via CSV who was capable of producing valid CSV. Anybody who can reliably produce valid CSV will send you something else if you ask for it.
> CSV is not a binary format, can be opened with any text editor and does not require any specialized program to be read. This means, by extension, that it can both be read and edited by humans directly, somehow.
This is why you should run screaming when someone says they have to integrate via CSV. It's because they want to do this.
Nobody is "pretending CSV is dead." It'll never die, because some people insist on sending hand-edited, unvalidated data files to your system and not checking for the outcome until mid-morning the next day when they notice that the text selling their product is garbled. Then they will frantically demand that you fix it in the middle of the day, and they will demand that your system be "smarter" about processing their syntactically invalid files.
Seriously. I've worked on systems that took CSV files. I inherited a system in which close to twenty "enhancement requests" had been accepted, implemented, and deployed to production that were requests to ignore and fix up different syntactical errors, because the engineer who owned it was naive enough to take the customer complaints at face value. For one customer, he wrote code that guessed at where to insert a quote to make an invalid line valid. (This turned out to be a popular request, so it was enabled for multiple customers.) For another customer, he added code that ignored quoting on newlines. Seriously, if we encountered a properly quoted newline, we were supposed to ignore the quoting, interpret it as the end of the line, and implicitly append however many commas were required to make the number of fields correct. Since he actually was using a CSV parsing library, he did all of this in code that would pre-process each line, parse the line using the library, look at the error message, attempt to fix up the line, GOTO 10. All of these steps were heavily branched based on the customer id.
The first thing I did when I inherited that work was make it clear to my boss how much time we were spending on CSV parsing bullshit because customers were sending us invalid files and acting like we were responsible, and he started looking at how much revenue we were making from different companies and sending them ultimatums. No surprise, the customers who insisted on sending CSVs were mostly small-time, and the ones who decided to end their contracts rather than get their shit together were the least lucrative of all.
> column-oriented data formats ... are not able to stream files row by row
I so hate CSV.
TSV should do it for you. Been there done that.
> Then there will be your data, but they will format currency values as prettified strings, for example "34 593,12 USD", instead of producing one column with a number and another with currency.
To be fair, that's not a problem with CSV but with the provider's lack of data literacy.
Yeah, you can also use Parquet/JSON/protobuf/XLSX and store numbers as strings in this format. CSV is just a container.
I worked in a web shop which had to produce spreadsheets which people wanted to look at in Excel. I gave them so many options, and told each client to experiment and choose the option which worked for them. In the end, we had (a) UTF-8 CSV, (b) UTF-8 CSV with BOM, (c) UTF-16 TSV, (d) UTF-8 HTML table with a .xlsx file extension and a lying Content-Type header which claimed it was an Excel spreadsheet.
Option a worked fine so long as none of the names in the spreadsheet had any non-ASCII characters.
Option d was by some measures the worst (and was definitely the largest file size), but it did seem to consistently work in Excel and Libre Office. In fact, they all worked without any issue in Libre Office.
> I can't point to a strict spec and say "you are doing this wrong", because there is no strict spec.
Have you tried RFC 4180?
https://www.ietf.org/rfc/rfc4180.txt
I've written a commercial, point and click, data wrangling tool (Easy Data Transform) that can deal with a lot of these issues:
-different delimiters (comma, semi-colon, tab, pipe etc)
-different encodings (UTF8, UTF16 etc)
-different line endings (CR, LF, CR+LF)
-ragged rows
-splitting and merging columns
And much more besides.
However, if you have either:
-line feeds and/or carriage returns in data values, but no quoting
or
-quoting, but quotes in data values aren't properly handled
Then you are totally screwed and you have my sympathies!
I agree and as a result I have completely abandoned CSV.
I use the industry standard that everyone understands: ECMA-376, ISO/IEC 29500 aka .xlsx.
Nobody has any problems producing or ingesting .xlsx files. The only real problem is the confusion between numbers and numeric text that happens when people use excel manually. For machine to machine communication .xlsx has never failed me.
Off the top of my head:
https://learn.microsoft.com/en-us/office/troubleshoot/excel/...
Now you might argue that ECMA-376 accounts for this, because it has a `date1904` flag, which has to be 0 for 1900-based dates and 1 for 1904-based dates. But what does that really accomplish if you can’t be sure that vendors understand subtleties like that if they produce or consume it? Last time I checked (maybe 8 years ago), spreadsheets created on Windows and opened on Mac still shifted dates by four years, and the bug was already over twenty years old at that time.
And the year-1904 issue is just the one example that I happen to know.
I have absolutely zero confidence in anything that has touched, or might have touched, MS Excel with anything short of a ten-foot pole.
Parsing Excel files in simple data interchange use cases that don't involve anyone manually using spreadsheets is an instance of unnecessary complexity. There are plenty of alternatives to CSV that remain plaintext, have much broader support, and are more rigorous than Excel in ensuring data consistency. You can use JSON, XML, ProtoBuf, among many other options.
I was recently writing a parser for a weird CSV. It had multiple header column rows in it as well as other header rows indicating a folder.
This RFC maybe?
https://www.ietf.org/rfc/rfc4180.txt
The RFC explicitly does not define a standard
I hate CSV too. If I have to use it, I'll live with TSV or some other special-character delimited format.
I'd much rather it be something that is neither used in normal* text/numbers, nor whitespace, thus non-printable delimiters wins for me.
* Don't mind me extending 'normal' here to include human-written numbers with thousands separators.
If only there were character codes specifically meant to separate fields and records.... we wouldn't have to worry so much about quoted commas or quoted quotes.
That isn't solving anything, just changing the problem. If I want to store a string containing 0x1C - 0x1F in one of the columns then we're back in the exact same situation while also losing the human readable/manually typeable aspect people seem to love about CSV. The real solution is a strict spec with mandatory escaping.
There's just no such thing as a delimiter which won't find its way into the data. Quoting and escaping really are the only robust way.
CSV is a data-exchange format.
But it is terrible at that because there is no widely adhered to standard[1], the sender and receiver often disagree on the details of what exactly a CSV is.
[1]: yes, I know about RFC 4180. But csvs in the wild often don't follow it.
The only times I hated CSV was when it came from another system I had no control over. For example Windows and their encodings, or some other proprietary BS.
But CSV under controlled circumstances is very simple.
And speaking of Wintendo, the bonus is often that you can go straight from CSV to Excel presentation for the middle management.
The post should at least mention in passing the major problem with CSV: it is a "no spec" family of de-facto formats, not a single thing (it is an example of "historically grown"). And the omission of that means I'm going to have to call this out for its bias (but then it is a love letter, and love makes blind...).
Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files, and there are many flavours that are incompatible with each other, in the sense that a reader for one flavour would not be suitable for reading the other and vice versa. Quoting, escaping, and UTF-8 support are particular problem areas, but so is the fact that you cannot tell programmatically whether line 1 contains column header names or already data (you will have to make an educated guess, but there are ambiguities in it that cannot be resolved by machine).
Having worked extensively with SGML for linguistic corpora, with XML for Web development, and recently with JSON, I would say that programmatically JSON is the most convenient to use regarding client code, but its lack of types also makes it useful less broadly than SGML, which is rightly used e.g. by airlines for technical documentation and by digital humanities researchers to encode/annotate historic documents, for which it is very suitable, but which programmatically puts more burden on developers. You can't have it all...
XML is simpler than SGML, has perhaps the broadest scope and good software support stack (mostly FOSS), but it has been abused a lot (nod to Java coders: Eclipse, Apache UIMA), but I guess a format is not responsible for how people use or abuse it. As usual, the best developers know the pros and cons and make good-taste judgments what to use each time, but some people go ideological.
(Waiting for someone to write a love letter to the infamous Windows INI file format...)
In fairness there are also several ambiguities with JSON. How do you handle multiple copies of the same key? Does the order of keys have semantic meaning?
jq supports several pseudo-JSON formats that are quite useful like record separator separated JSON, newline separated JSON. These are obviously out of spec, but useful enough that I've used them and sometimes piped them into a .json file for storage.
Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
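A tiny illustration of the NaN/Infinity point with Python's json module, whose default output for these values is out of spec:

    import json

    # The stdlib default happily emits NaN/Infinity, which is not valid JSON;
    # allow_nan=False makes it refuse instead.
    print(json.dumps({"x": float("nan"), "y": float("inf")}))
    # {"x": NaN, "y": Infinity}  <- other parsers may reject this

    try:
        json.dumps(float("nan"), allow_nan=False)
    except ValueError as err:
        print(err)  # explains that such float values are not JSON compliant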
JSON Lines is not JSON; it is built on top of it. The .jsonl extension can be used to make that clear: https://jsonlines.org/
> How do you handle multiple copies of the same key
That’s unambiguously allowed by the JSON spec, because it’s just a grammar. The semantics are up to the implementation.
Internet JSON (RFC 7493) forbids objects to have members with duplicate names.
> How do you handle multiple copies of the same key? Does the order of keys have semantic meaning?
This is also an issue, due to the way that key ordering works in JavaScript.
> record separator separated JSON, newline separated JSON.
There is also JSON with no separators, although that will not work very well if any of the top-level values are numbers.
> Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
Yes, as well as non-Unicode text (including (but not limited to) file names on some systems), and (depending on the implementation) 64-bit integers and big integers. Possibly also date/time.
I think DER avoids these problems. You can specify whether or not the order matters, you can store Unicode and non-Unicode text, NaN and Infinity, raw byte arrays, big integers, and date/time. (It avoids some other problems as well, including canonization (DER is already in canonical form) and other issues. Although, I have a variant of DER that avoids some of the excessive date/time types and adds a few additional types, but this does not affect the framing, which can still be parsed in the same way.)
A variant called "Multi-DER" could be made up, which is simply concatenating any number of DER files together. Converting Multi-DER to BER is easy just by adding a constant prefix and suffix. Converting Multi-DER to DER is almost as easy; you will need the length (in bytes) of the Multi-DER file and then add a prefix to specify the length. (In none of these cases does it require parsing or inspecting or modifying the data at all. However, converting the JSON variants into ordinary JSON does require inspecting the data in order to figure out where to add the commas.)
Plus the 64-bit integer problem, really 53-bit integers (the exact-integer range of a double), due to JS not having integers.
They do specifically mention this:
“No one owns CSV. It has no real specification (yes, I know about the controversial ex-post RFC 4180), just a set of rules everyone kinda agrees to respect implicitly. It is, and will forever remain, an open and free collective idea.”
They even seem to think it is a good thing. But I don't see how having a bunch of implementations that can't agree on the specifics of a file/interchange format is a good thing. And being free and open is completely orthogonal. There are many proprietary formats that don't have a spec, and many open formats that do have a spec (like, say, JSON).
That's true of the vast majority of protocols that people use in real life to exchange data. We're using one of them right now, in fact.
Waiting for someone to write a love letter to the infamous Windows INI file format
I actually miss that. It was nice when settings were stored right alongside your software, instead of being left behind all over a bloated registry. And the format was elegant, if crude.
I wrote my own library for encoding/writing/reading various datatypes and structure into ini's, in a couple different languages, and it served me well for years.
TOML is nice like that... elegant like INI, only with lists.
> instead of being left behind all over a bloated registry
Really? I think the idea of a central, generic, key-value pair database for all the setting on a system is probably the most elegant reasonable implementation there could be.
The initial implementation of the Windows Registry wasn't good. It was overly simplistic and pretty slow. Though the "bloat" (whatever that means) of the registry hasn't been an actual issue in over 20 years. The only people invested in convincing you "it's an issue" are the CCleaner type of software vendors that promise to "speed up your computer" if you just pay $6.99.
How many rows do you need in a sqlite database for it to be "bloated"?
I feel like YAML is a spiritual successor to the .ini, since it shares a notable ideal of simple human readability/writability.
The post does mention it, as a positive:
https://github.com/medialab/xan/blob/master/docs/LOVE_LETTER...
People who say that CSV is "simpler" are talking about whatever format Excel exports.
Also these people have only ever had to deal with the American Excel localization.
So yeah, with the caveat of "only ever use Excel and only ever the American edition" CSV is pretty nice.
As someone living in a country where , is used as the decimal separator, I cannot begin to describe the number of times CSV data has caused me grief. This becomes especially common in an office environment where Excel is the de facto only data handling tool that most people can and will use. Here the behavior of loading data becomes specific to the individual machine and changes over time (e.g. when IT suddenly forces a reset of MS Office application languages to the local one).
That said, I don't really know of any alternative that won't be handled even worse by my colleagues...
Also keeping in mind all the locales where comma is the decimal point…tsv for the world.
To be honest, I'm wondering why you are rating JSON higher than CSV.
> Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files,
There is, actually, RFC 4180 IIRC.
> there are many flavours that are incompatible with each other in the sense that a reader for one flavour would not be suitable for reading the other and vice versa.
"There are many flavours that deviate from the spec" is a JSON problem too.
> you cannot tell programmatically whether line 1 contains column header names or already data (you will have to make an educated guess, but there are ambiguities in it that cannot be resolved by machine).
Also a problem in JSON
> Quoting, escaping, UTF-8 support are particular problem areas,
Sure, but they are no more and no less of a problem in JSON.
Have you had to work with csv files from the wild much? I'm not being snarky but what you're talking about is night and day to what I've experienced over the years.
There aren't vast numbers of different JSON formats. There's practically one and realistically maybe two.
Headers are in each line, utf8 has never been an issue for me and quoting and escaping are well defined and obeyed.
This is because for datasets, almost exclusively, the file is machine written and rarely messed with.
Csv files have all kinds of separators, quote characters, some parsers don't accept multi lines and some do, people sort files which mostly works until there's a multi line. All kinds of line endings, encodings and mixed encodings where people have combined files.
I tried using ASCII record separators after dealing with so many issues with commas, semicolons, pipes, tabs etc and still data in the wild had these jammed into random fields.
Lots of these things don't break when you hit the issue either, the parsers happily churn on with garbage data, leading to further broken datasets.
Also they're broken for some clients if the file starts with the capital letters "ID" (Excel misdetects it as a SYLK file).
> There is, actually, RFC 4180 IIRC.
Does any software fully follow that spec (https://www.rfc-editor.org/rfc/rfc4180)? Some requirements that I doubt are commonly followed:
- “Each record is located on a separate line, delimited by a line break (CRLF)” ⇒ editing .csv files using the typical Unix text editor is complicated.
- “Spaces are considered part of a field and should not be ignored”
- “Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes” ⇒ fields containing lone carriage returns or new lines need not be enclosed in double quotes.
> Unlike XML or JSON, there isn't a document defining the grammar of well-formed or valid CSV files
There is such a document: RFC 4180. It may not be a good document, but it does exist.
INI was for a long time the seemingly preferred configuration format in the Python community, as I recall it.
I haven't been a full-time Python dev in some time, and it seems TOML has supplanted it, but I remember thinking how interesting it was that Python had a built-in INI parser and serializer.
I lived through the SOAP/WSDL horror, with their numerous standards and the lack of compatibility between stacks in different programming languages. Having seen both XML and CSV abused, CSV is preferable over XML. Human-readability matters. Relative simplicity matters.
Although JSON may also be interpreted differently by different tools, it is a good default choice for communicating between programs.
> lack of compatibility between stacks in different programming languages
Well, that sure beats OpenAPI lack of compatibility between stacks in the same programming language.
I think the fact that one can't randomly concatenate strings and call it "valid XML" is a huge bonus over the very common "join strings with comma and \r\n", non-RFC-4180-compliant (therefore mostly unparseable without human/LLM intervention) garbage people often pretend is CSV.
While CSV isn't exactly grammared or standardised like XML, I think of it as more schema'd than JSON. There might be data corruption or consistency issues, but there is implicitly a schema: every line is exactly n fields, and the first line might contain field names.
When a JSON API turns out to have optional fields it usually shows through trial and error, and unlike CSV it's typically not considered a bug you can expect the API owner to fix. In CSV 'missing data' is an empty string rather than nulls or their cousins because missing fields aren't allowed, which is nice.
I also like that I can write my own ad hoc CSV encoder in most programming languages that can do string concatenation, and probably also a suitable decoder. It helps a lot in some ETL tasks and debugging. Decent CSV also maps straight to RDBMS tables, if the database for some reason fails at immediate import (e.g. too strict expectations) into a newly created table it's almost trivial to write an importer that does it.
JSON is not schema’d per se and intentionally so. There’s jsonschema which has better expressiveness than inference of a tabular schema, as it can reflect relationships.
There is no file format that works out of box under all extreme corner cases.
You would think that, e.g., an XML-defined WSDL with an XSD schema is well battle-proven. Two years ago I encountered (and am still dealing with) a WSDL from a major banking vendor that is technically valid, but no open source library in Java (of all languages) was able to parse it successfully or generate binding classes out of the box.
Heck, flat files can end up with extreme cases, just work enough with legacy banking or regulatory systems and you will see some proper shit.
The thing is, any sort of critical integration needs to be battle tested and continuously maintained, otherwise it will eventually go bad, even a decade after implementation and regular use without issues.
What about S-expressions? Where do they break?
XML is a pretty good markup language. Using XML to store structured data is an abuse of it. All the features that are useful for markup are not useful for structured data and only add to the confusion.
> Waiting for someone to write a love letter to the infamous Windows INI file format...
Honestly, it’s fine. TOML is better if you can use it, but otherwise for simple applications, it’s fine. PgBouncer still uses INI, though that in particular makes me twitch a bit, due to discovering that if it fails to parse its config, it logs the failed line (reasonable), which can include passwords if it’s a DSN string.
Well, once you get over the fact that information in a TOML file can be out of order in any place, written with any mix of three different key syntaxes, and broken down in any random way... then yes, the rest of TOML is good.
I should write a love letter to JSON.
It does mention this. Point 2.
Why do you hate CSV, and not the program that is not able to properly create CSV?
CSV is ever so elegant but it has one fatal flaw - quoting has "non-local" effects, i.e. an extra or missing quote at byte 1 can change the meaning of a comma at byte 1000000. This has (at least) two annoying consequences:
1. It's tricky to parallelise processing of CSV.
2. A small amount of data corruption can have a big impact on the readability of a file (one missing or extra quote can bugger the whole thing up).
So these days for serialisation of simple tabular data I prefer plain escaping, e.g. comma, newline and \ are all \-escaped. It's as easy to serialise and deserialise as CSV but without the above drawbacks.
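A minimal sketch of that kind of format (whether a newline in the data becomes the two characters "\n" or a backslash followed by a real newline is a detail the comment leaves open; this sketch uses "\n" so one line is always one record):

```python
# Backslash-escape the only three special characters: backslash, comma, newline.
def escape_field(s: str) -> str:
    return (s.replace("\\", "\\\\")
             .replace(",", "\\,")
             .replace("\n", "\\n"))

def write_row(fields):
    return ",".join(escape_field(f) for f in fields) + "\n"

def parse_row(line: str):
    fields, cur, chars = [], [], iter(line.rstrip("\n"))
    for ch in chars:
        if ch == "\\":
            nxt = next(chars, "\\")          # tolerate a lone trailing backslash
            cur.append("\n" if nxt == "n" else nxt)
        elif ch == ",":
            fields.append("".join(cur))
            cur = []
        else:
            cur.append(ch)
    fields.append("".join(cur))
    return fields

row = ["a,b", "line1\nline2", "back\\slash"]
assert parse_row(write_row(row)) == row
```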
JSON serialized without extra white space with one line per record is superior to CSV.
If you want CSV-ish, enforce an array of strings for each record. Or go further with actual objects and non-string types.
You can even jump to an arbitrary point and then seek till you see an actual new line as it’s always a record boundary.
It’s not that CSV is an invalid format. It’s that libraries and tools to parse CSV tend to suck. Whereas JSON is the lingua franca of data.
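A sketch of the parent's jump-to-an-arbitrary-offset trick with newline-delimited JSON (the file path and offset are made up for illustration):

```python
import json

def records_from_offset(path: str, offset: int):
    """Yield records starting at an arbitrary byte offset.

    This only works because each record is one line and JSON string
    escaping guarantees no raw newlines inside values, so an actual
    newline byte is always a record boundary.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        if offset:
            f.readline()  # throw away the record we landed in the middle of
        for line in f:
            if line.strip():
                yield json.loads(line)

# e.g. start roughly half a megabyte into a hypothetical dump:
# for rec in records_from_offset("events.ndjson", 512 * 1024):
#     ...
```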
> It’s that libraries and tools to parse CSV tend to suck. Whereas JSON is the lingua franca of data.
This isn't the case. An incredible amount of effort and ingenuity has gone into CSV parsing because of its ubiquity. Despite the lack of any sort of specification, it's easily the most widely supported data format in existence in terms of tools and language support.
JSON is a textual encoding no different than CSV.
It's just that people tend to use specialized tools for encoding and decoding it instead of like ",".join(row) and row.split(",")
I have seen people try to build up JSON strings like that too, and then you have all the same problems.
So there is no problem with CSV except that maybe it's too deceptively simple. We also see people trying to build things like URLs and query strings without using a proper library.
> JSON [...] with one line per record
Couple of standards that I know of that does this, primarily intended for logging:
https://jsonlines.org/
https://clef-json.org/
Really easy to work with in my experience.
Sure some space is usually wasted on keys but compression takes care of that.
Until you have a large amount of data and need either random access or to work on multiple full columns at once. Duplicated key names mean it's very easy for data in jsonlines format to be orders of magnitude larger than the same data as CSV, which is incredibly annoying if your processing for it isn't amenable to streaming.
You serialize the keys on every row which is a bit inefficient but it’s a text format anyway
This is simply not true; parsing JSON vs CSV is a difference of thousands of lines of code.
Eh, it really isn't. The format does not lend itself to tabular data, instead the most natural way of representing data involves duplicating the keys N times for each record.
What happens when you need to encode the newline character in your data? That makes splitting _either_ CSV or LDJSON files difficult.
That would be solved by using the ASCII control chars Record Separator / Unit Separator! I don't get how this is not widely used as standard.
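For illustration, roughly what that looks like (the data here is made up; the caveat raised in the replies below still applies if the data itself ever contains these bytes):

```python
US, RS = "\x1f", "\x1e"  # ASCII Unit Separator and Record Separator

rows = [["id", "name", "note"], ["1", "Ada", "likes, commas\nand newlines"]]

encoded = RS.join(US.join(fields) for fields in rows)
decoded = [record.split(US) for record in encoded.split(RS)]
assert decoded == rows
```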
I remembered seeing a comment like this before, and...
comment: https://news.ycombinator.com/item?id=26305052
comment: https://news.ycombinator.com/item?id=39679662
"ASCII Delimited Text – Not CSV or Tab Delimited Text" post [2014]: https://news.ycombinator.com/item?id=7474600
same post [2024]: https://news.ycombinator.com/item?id=42100499
comment: https://news.ycombinator.com/item?id=15440801
(...and many more.) "This comes up every single time someone mentions CSV. Without fail." - top reply from burntsushi in that last link, and it remains as true today as in 2017 :D
You're not wrong though, we just need some major text editor to get the ball rolling and start making some attempts to understand these characters, and the rest will follow suit. We're kinda stuck at a local optimum which is clearly not ideal but also not troublesome enough to easily drum up wide support for ADSV (ASCII Delimiter Separated Values).
The _entire_ point of a CSV file is that it's fully human readable and write-able.
The characters you mention could be used in a custom delimiter variant of the format, but at that point it's back to a binary machine format.
If there were visible, well-known characters that could be printed for those, and keys on a keyboard for inputting them, we would probably have RSV files. Because they are buried down in the non-printable section of the ASCII chart, they are a pain for people to deal with. All it would have taken is one more key on the keyboard, maybe splitting the tab key in half.
Can't type them on a keyboard I guess, or generally work with them in the usual text-oriented tools? Part of the appeal of CSV is you can just open it up in Notepad or something if you need to. Maybe that's more a critique of text tools than it is of ASCII record separator characters.
I was really excited when I learned of these characters, but ultimately if it’s in ASCII then it’s in-band and will eventually require escaping leading to the same problem.
But what if one of your columns contains arbitrary binary data?
Perhaps the popularity, or lack thereof? More often than not, the bad standard wins the long-term market.
> I don't get how this is not widely used as standard.
It requires bespoke tools for editing, and while CSV is absolute garbage it can be ingested and produced by most spreadsheet software, as well as databases.
Reminds me of a fatal flaw of yaml. Turns out truncating a yaml file doesn't make it invalid. Which can lead to some rather non-obvious failures.
What is the failure mode where a yaml file gets truncated? They are normally config files in Git. Or uploaded to S3 or Kubernetes etc.
CSV has the same failure mode. As does HTML. (But not XML)
I couldn't find the story on it, but there was an instance of a config for some major service getting truncated, and since it was YAML it was more difficult to figure out that that was what happened. I think it was in AWS, but I can't find the story, so can't really remember.
And it's fully fair that you can have similar issues in other formats. I think the complaint here was that it was a bit harder to diagnose, specifically because it did not trip up any of the loading code. The big lesson learned was that configs should probably either go Pascal-string style, where they have an expected number of items as the first part of the data, or XML style, where they have a closing tag.
Really, it is always amusing to find how many of the annoying parts of XML turned out to be somewhat more well thought out than people want to admit.
Bad merges.
Same is true of CSV/TSV.
I think you are a bit more likely to notice in a CSV/TSV, as it is unlikely to truncate at a newline?
Still, fair point. And is part of why I said it is a flaw, not the flaw. Plenty of other reasons to not like YAML, to me. :D
Not if it is split at a line e.g. if the source or target can only deal with a fixed number of lines.
Right, that is what I meant about that being unlikely? Most instances of truncated files that I have seen were because of size, not lines.
Still, a fair point.
Really depends on how the CSV is generated/transferred. If the output of the faulting software is line-buffered then it's quite likely that a failure would terminate the file at a line break.
I want to push SQLite as a data interchange format! It has the benefit of being well defined, and it can store binary data, like images for product pictures, inside the database. Not a good idea if you're trying to serve users behind a web app, but as interchange, better than a zip file with filenames that have to be "relinked".
For context: I have a LOT of experience of interchange formats, like "full time job, every day, all day, hundreds of formats, for 20-years" experience.
Based on that experience I have come to one key, but maybe, counter-intuitive truth about interchange formats:
- Too much freedom is bad.
Why? Generating interchange data is cheaper than consuming it, because the creator only needs to consider the stuff they want to include, whereas the consumer needs to consider every single possible edge case and or scenario the format itself can support.
This is why XML is WAY more costly to ingest than CSV, because in XML someone is going to use: attributes, CDATA, namespaces, comments, different declarations, includes, et al. In CSV they're going to use rows, a field separator, and quotes (with or without escaping). That's it. That's all it supports.
Sqlite as an interchange format is a HORRIFYING suggestion, because every single feature Sqlite supports may need to be supported by consumers. Even if you curtailed Sqlite's vast feature set, you've still created something vastly more expensive to consume than XML, which itself is obnoxious.
My favorite interchange formats are, in order:
- CSV, JSON (inc. NDJSON), YAML, XML, BSON (due to type system), MessagePack, Protobuf, [Giant Gap] Sqlite, Excel (xlsx, et al)
More features mean more cost, more edge cases, more failures, more complex consumers. Keep in mind, this is ONLY about interchange formats between two parties, I have wildly different opinions about what I would use for my own application where I am only ever the creator/consumer, I actually love Sqlite for THAT.
Oh, this is interesting. Are you tying different systems together? If so, do you use some preferred intermediate format? Do you have a giant library of * -> intermediate -> * converters that you sprinkle between everything? Or maybe the intermediate format is in memory?
What about Parquet and the like?
Not the person you were replying to, but from my experience CSV is a good place to define data types coming into a system and a safe way to dump data as long as I write everything down.
So I might do things like have every step in a pipeline begin development by reading from and writing to CSV. This helps with parallel dev work and debugging, and is easy to load into any intermediate format.
> do you use some preferred intermediate format?
This is usually dictated by speed vs money calculations, weird context issues, and familiarity. I think it's useful to look at both "why isn't this a file" and "why isn't this all in memory" perspectives.
For tabular/time-series greater than 100k rows I personally feel like parquet cannot be beat. It's self-describing, strongly-typed, relatively compact, supports a bunch of io/decode skipping, and is quite fast.
Orc also looks good, but isn't well supported. I think parquet is optimal for now for most analytical use-cases that don't require human readability.
It is an interchange format, so it is inter-system by virtue of that. If I am a self-creator/consumer the format I use can be literally anything even binary memory dumps.
I love the wisdom in this comment!
I’m not sure you need to support every SQLite feature. I’m unconvinced of binary formats, but the .dump output is text and simple SQL.
Interesting! I've dealt with file interchange between closed source (and a couple open source) programs, but that was a while ago. I've also had to deal with csvs and xslts between SaaS vendors for import export of customer's data. I've done a bunch of reverse engineering of proprietary formats so we could import the vendor's files, which had more information than they were willing to export in an interchange format. Sometimes they're encrypted and you have to break it.
What you say is fair. CSV is underspecified though; there's no company called CSV that's going to sue for trademark enforcement, and there's no official CSV standard library that everyone uses. (Some exist, but there are so many naive implementations written from first principles, because how hard could it be? Output records and use a comma and a newline (of which there are three possible options).)
How often do you deal with multiple CSV files representing the multiple tables that are actually used by vendors internally, versus one giant flattened CSV with hundreds of columns and lots of empty cells? I don't have your level of experience with CSVs, but I've dealt with them being a mess, where the other side implements whatever they think is reasonable given the name "comma separated values".
With SQLite, we're in the Internet age, so I presume this hypothetical developer would use the SQLite library and not implement their own from scratch for funsies. This then leads to types, database normalization, multiple tables. I hear you that too many choices can be bad, and XML is a great example of this, but SQLite isn't XML and isn't CSV.
It's hard to have this discussion in the abstract, so I'll be forthcoming about where I'm coming from, which is CSV import/export between vendors for stores, think Doordash to UberEATS. The biggest problem we have is images of the items, and how to deal with that. It's an ongoing issue how to get them, but the failure mode, which does happen, is that when moving vendor, they just have to redo a lot of work that they shouldn't have to.
Ultimately the North Star I want to push towards is moving beyond CSVs, because it'll let the people who currently have to hand-edit the CSV so every row imports properly not have to do that. Problems would still exist, but instead you'd have to deal with, well, what you see with XML files, which has its shortcomings, as you mention, but at least once you understand how a vendor is using it, individual records are generally understandable.
I was moved so I don't deal with import export currently, but it's because sqlite is so nice to work with on personal projects where it's appropriate that I want to push the notion of moving to sqlite over csvs.
One very minor problem is that you max out storing blobs of 2GB(? I think, maybe 4GB). Granted few will hit this, but this limit did kill one previous data transfer idea of mine.
> not a good idea if you're trying to serve users behind a web app
I use Sqlite for a static site! Generating those static pages out to individual pages would involve millions of individual files. So instead I serve up a sqlite database over http, and use a sqlite wasm driver [0] to load (database) pages as needed. Good indexing cuts down on the number of pages it grabs, and I can even get full text search!
The only feature I'm missing is compression, which is complicated because popular extensions like sqlite-zstd are written in Rust.
[0] https://github.com/mmomtchev/sqlite-wasm-http
I don't understand why CSV became a thing when we could have had TSV, or a format using the nowadays-weird ASCII control characters like start/end of text, start of heading, horizontal/vertical tab, or the file/group/record/unit separators.
It seems many possible designs would've avoided the quoting chaos and made parsing sort of trivial.
Any time you have a character with a special meaning you have to handle that character turning up in the data you're encoding. It's inevitable. No matter what obscure character you choose, you'll have to deal with it
It's avoidable by stating the number of bytes in a field and then the field. No escaping needed, and faster parsing.
But not human editable/readable
I understand this argument in general. But basically everyone has some sort of spreadsheet application that can read CSV installed.
In some alternate world where this "binary" format caught on, it would be a very minor issue that it isn't human readable, because everyone would have a tool that is better at reading it than humans are. (See the above-mentioned non-local property of quotes, where you may think you are reading rows but are actually inside a single cell.)
Makes me also wonder if something like CBOR caught on early enough we would just be used to using something like `jq` to read it.
https://github.com/wader/fq is "jq for binary formats."
Except we have all these low ASCII characters specifically for this purpose that don't turn up in the data at all. But there is, of course, also an escape character specifically for escaping them if necessary.
Even if you find a character that really is never in the data - your encoded data will contain it. And it's inevitable that someone encodes the encoded data again. Like putting CSV in a CSV value.
You can't type any of those on a typewriter, or see them in old or simple editors, or with no editor at all, like just catting to a tty.
If you say those are contrived examples that don't matter any more then you have missed the point and will probably never acknowledge the point and there is no purpose in continuing to try to communicate.
One can only ever remember and type out just so many examples, and one can always contrive some response to any single or finite number of examples, but they are actually infinite, open-ended.
Having a least common denominator that is extremely low that works in all the infinite situations you never even thought of, vs just pretty low and pretty easy to meet in most common situations, is all the difference in the world.
The difference is that the comma and newline characters are much more common in text than 0x1F and 0x1E, which, if you restrict your data to alphanumeric characters (which you really should), will never appear anywhere else.
Exactly. "Use a delimiter that's not in the data" is not real serialisation, it's fingers-crossed-hope-for-the-best stuff.
I have in the past done data extractions from systems which really can't serialise properly, where the only option is to concat all the fields with some "unlikely" string like @#~!$ as a separator, then pick it apart later. Ugh.
> Exactly. "Use a delimiter that's not in the data" is not real serialisation, it's fingers-crossed-hope-for-the-best stuff.
It's not doing just this, you pick something that's likely not in the data, and then escape things properly. When writing strings you can write a double quote within double quotes with \", and if you mean to type the designated escape character you just write it twice, \\.
The only reason you go for something likely not in the data is to keep things short and readable, but it's not impossible to deal with.
I agree that it's best to pick "unlikely" delimiters so that you don't have to pepper your data with escape chars.
But some people (plenty in this thread) really do think "pick a delimiter that won't be in the data" - and then forget quoting and/or escaping - is a viable solution.
The characters would likely be unique, maybe even by the spec.
Even if you wanted them, we use backslashes to escape strings in most common programming languages just fine. The problem with CSV is that commas aren't easy to recognize, because they might be within a single- or double-quoted string, or might just be a separator.
Can strings in CSV have newlines? I bet parsers disagree since there's no spec really.
In TSV as commonly implemented (for example, the default output format of Postgres and MySQL), tab and newline are escaped, not quoted. This makes processing the data much easier. For example, you can skip to a certain record or field just by skipping literal newlines or tabs.
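A sketch of that escaping convention (loosely modeled on PostgreSQL's COPY text format; the exact escape set varies by tool):

```python
ENCODE = {"\\": "\\\\", "\t": "\\t", "\n": "\\n", "\r": "\\r"}
DECODE = {"t": "\t", "n": "\n", "r": "\r", "\\": "\\"}

def escape(field: str) -> str:
    return "".join(ENCODE.get(ch, ch) for ch in field)

def unescape(field: str) -> str:
    out, i = [], 0
    while i < len(field):
        if field[i] == "\\" and i + 1 < len(field):
            out.append(DECODE.get(field[i + 1], field[i + 1]))
            i += 2
        else:
            out.append(field[i])
            i += 1
    return "".join(out)

# Literal tabs/newlines only ever mean "next field" / "next record",
# so you can skip around the file without tracking any quote state.
line = "\t".join(escape(f) for f in ["a\tb", "multi\nline"])
assert [unescape(f) for f in line.split("\t")] == ["a\tb", "multi\nline"]
```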
It's a lot easier to type comma than control characters and it's a lot easier to view comma than a tab (which might look like a space).
For automated serialization, plain text formats won out, because they are easy to implement a minimal working solution (both import and export) and more importantly, almost all systems agree on what plain text is.
We don't really have Apple formatted text, that will show up as binary on windows. Especially if you are just transferring id's and numbers, those will fall within ascii and that will work even if you are expecting unicode.
The ASCII control characters do not display well, and are not easily editable, in a plain text editor.
I did always use TSV and I think the original use of CSV could have used that.
But TSV would still have many issues.
What issues would TSV have? As commonly implemented (for example, the default output format of Postgres and MySQL), in TSV, tab and newline are escaped, not quoted.
I don't understand why CSV became a thing in the 70s when S-expressions existed since at least the 50s and are better in practically every way.
CSV's actual problem is that there's no single CSV, and you don't know what type you have (or even if you have single consistent type through the whole file) without trying to parse the whole file and seeing what breaks. Is there quoting? Is that quoting used consistently? Do you have five-digit ZIP codes, or have the East Coast ones been truncated to four digits because they began with zero? Spin the wheel!
https://www.ietf.org/rfc/rfc4180.txt
> So these days for serialisation of simple tabular data I prefer plain escaping, e.g. comma, newline and \ are all \-escaped. It's as easy to serialise and deserialise as CSV but without the above drawbacks.
For my own parser, I made everything `\` escaped: outside of a quote or double-quote delimited string, any character prefixed with a `\` is read verbatim. There are no special exceptions resulting in `\,` producing a comma while `\a` produces `\a`. This makes it a good rule, because it is only one rule with no exceptions.
I considered this but then went the other way - a \ before anything other than a \, newline or comma is treated as an error. This leaves room for adding features, e.g. \N to signify a SQL NULL.
Regarding quoting and escaping, there are two options that make sense to me - either use quoting, in which case quotes are self-escaped and that's that; or use escaping, in which case quotes aren't necessary at all.
A good way to parallelize CSV processing is to split datasets into multiple files, kind of like manual sharding. xan has a parallel command able to perform a wide variety of map-reduce tasks on split files.
https://github.com/medialab/xan
nice .. xsv is also very handy for wrangling csv files generally
xan is a maintained fork of xsv
I always treat CSVs as comma separated values with new line delimiters. If it’s a new line, it’s a new row.
Do you ever have CSV data that has newlines within a string?
I don't. If I ever have a dataset that requires newlines in a string, I use another method to store it.
I don't know why so many people think every solution needs to be a perfect fit for every problem in order to be viable. CSV is good at certain things, so use it for those things! And for anything it's not good at, use something else!
> use something else
You don't always get to pick the format in which data is provided to you.
True, but in that case I'm not the one choosing how to store it, until I ingest the data, and then I will store it in whatever format makes sense to me.
I don't think we do? It's more that a bunch of companies already have their data in CSV format and aren't willing to invest any effort in moving to a new format. Doesn't matter how much one extolls all the benefits, they know right? They're paying someone else to deal with it.
No - that’s what I’m trying to say. If I have newlines I use something else.
Wouldn't work if CSV is used as an exchange format with external companies.
Of course. I’m not saying I roll my own parser for every project that uses a CSV file, I’m just describing my criteria for using CSV vs some other format when I have the option.
Eh, all you're really saying is "I'm not using CSV. Instead I'm using my CSV." Except that's all that anybody does.
CSV can just as easily support escaping as any other format, but there is no agreement for a CSV format.
After all, a missed escape can just as easily destroy a JSON or XML structure. And parallel processing of text is already a little sketchy simply because UTF-8 exists.
How is this not true for every format that includes quote marks?
It is true for everything that uses quoting, I didn't mean to imply otherwise.
I'm not clear why quotes prevent parallel processing?
I mean, you don't usually parallelize reading a file in the first place, only processing what you've already read and parsed. So read each record in one process and then add it to a multiprocessing queue for multiple processes to handle.
And data corruption is data corruption. If a movie I'm watching has a corrupted bit I don't mind a visual glitch and I want it to keep playing. But with a CSV I want to fix the problem, not ignore a record.
Do you really have a use case where reading itself is the performance bottleneck and you need to parallelize reading by starting at different file offsets? I know that multiple processes can read faster from certain high-end SSD's than just one process, but that's a level of performance optimization that is pretty extraordinary. I'm kind of curious what it is!
> I'm not clear why quotes prevent parallel processing?
Because of the "non-local" effect of quotes, you can't just jump into the middle of a file and start reading it, because you can't tell whether you're inside a quoted section or not. If (big if) you know something about the structure of the data, you might be able to guess. So that's why I said "tricky" instead of "impossible".
Contrast to my escaping-only strategy, where you can jump into the middle of a file and fully understand your context by looking one char on either side.
> Do you really have a use case where reading itself is the performance bottleneck and you need to parallelize reading by starting at different file offsets? I know that multiple processes can read faster from certain high-end SSD's than just one process, but that's a level of performance optimization that is pretty extraordinary. I'm kind of curious what it is!
I used to be a data analyst at a management consultancy. A very common scenario would be that I'm handed a multi-gigabyte CSV and told to "import the data". No spec, no schema, no nothing. Data loss or corruption is totally unacceptable, because we were highly risk-sensitive. So step 1 is to go through the whole thing trying to determine field types by testing them. Does column 3 always parse as a timestamp? Great, we'll call it a timestamp. That kind of thing. In that case, it's great to be able to parallelise reading.
> And data corruption is data corruption
Agreed, but I prefer data corruption which messes up one field, not data corruption which makes my importer sit there for 5 minutes thinking the whole file is a 10GB string value and then throw "EOF in quoted field".
Doing sequential reading into a queue for workers to read is a lot more complicated than having a file format that supports parallel reading.
And the fix to allow parallel reading is pretty trivial: escape new lines so that you can just keep reading until the first unescaped new line and start at that record.
It is particularly helpful if you are distributing work across machines, but even in the single machine case, it's simpler to tell a bunch of workers their offset/limit in a file.
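A sketch of the offset/limit idea, assuming newlines inside fields are escaped so a literal newline is always a record boundary (the file name and worker count are illustrative):

```python
import os

def split_offsets(path: str, workers: int):
    """Carve the file into rough byte ranges, one per worker."""
    size = os.path.getsize(path)
    step = max(1, size // workers)
    bounds = [min(i * step, size) for i in range(workers)] + [size]
    return list(zip(bounds[:-1], bounds[1:]))

def read_records(path: str, start: int, end: int):
    """Yield the records whose first byte falls in this worker's range."""
    with open(path, "rb") as f:
        f.seek(start)
        if start:
            f.readline()                 # skip the record straddling `start`
        while f.tell() <= end:
            line = f.readline()
            if not line:
                break
            yield line.rstrip(b"\n")

# for lo, hi in split_offsets("big_table.txt", 8):   # hand each range to a worker
#     process(read_records("big_table.txt", lo, hi))
```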
The practical solution is to generate several CSV files and distribute work at the granularity of files
Sure, now you need to do this statically ahead of time.
It's not unsolvable, but now you have a more complicated system.
A better file format would not have this problem.
The fix is also trivial (escape newlines into \n or similar) and would also make the files easier to view with a text editor.
But in practice, you’ll receive a bag of similar-format CSVs.
Tab-Separated Value, as implemented by many databases, solves these problems, because tab, newline and other control characters are escaped. For example, the default text serialization format of Postgres (`COPY <table> TO '<file>'` without any options) is this way.
Anyone with a love of CSV hasn't been asked to deal with CSV-injection prevention in an enterprise setting, without breaking various customer data formats.
There's a dearth of good resources about this around the web, this is the best I've come across: https://georgemauer.net/2017/10/07/csv-injection.html
That mostly breaks down to "excel is intentionally stupid with csv files if you don't use the import function to open them" along with the normal "don't trust customer input without stripping or escaping it" concerns you'd have with any input.
That was my initial reaction as well – it's a vulnerability in MS software, not ours, not our problem. Unfortunately, reality quickly came to bear: our customers and employees ubiquitously use excel and other similar spreadsheet software, which exposes us and them to risk regardless where the issue lies. We're inherently vulnerable because of the environment we're operating in, by using CSV.
"don't trust customer input without stripping or escaping it" feels obvious, but I don't think it stands up to scrutiny. What exactly do you strip or escape when you're trying to prevent an unknown multitude of legacy spreadsheet clients that you don't control from mishandling data in an unknown variety of ways? How do you know you're not disrupting downstream customer data flows with your escaping? The core issue, as I understand it, stems from possible unintended formula execution – which can be prevented by prefixing certain cells with a space or some invisible character (mentioned in the linked post above). This _does_ modify customer data, but hopefully in a way that unobtrusive enough to be acceptable. All in all, it seems to be a problem without a perfect solution.
Hey, I'm the author of the linked article, cool to see this is still getting passed around.
Definitely agree there's no perfect solution. There's some escaping that seems to work ok, but that's going to break CSV-imports.
An imperfect solution is that applications should be designed with task-driven UIs, so that they know the intended purpose of a CSV export and can make the decision to escape or not escape then. Libraries can help drive this by designing their interfaces in a similar manner. Something like `export_csv_for_eventual_import()`, `export_csv_for_spreadsheet_viewing()`.
Another imperfect solution would be to ... ugh...generate exports in Excel format rather than CSV. I know, I know, but it does solve the problem.
Or we could just get everyone in the world to switch to emacs csv-mode as a csv viewer. I'm down with that as well.
Appreciate your work! Your piece was pivotal in changing my mind about whether this should be considered in our purview to address.
The intention-based philosophy of all this makes a lot of sense, was eye-opening, and I agree it should be the first approach. Unfortunately, after considering our use cases, we quickly realized that we'd have no way of knowing how customers intend to use the CSV exports they've requested; we've talked to some of them and it's a mix. We could approach things case by case, but we really just want a setup which works well 99% of the time and mitigates known risk. We settled on the prefixing approach and have yet to receive any complaints about it, specifically using a space character with the thinking that something unobtrusive (e.g. easily strippable) but also visible would be best, to avoid quirks stemming from something completely hidden.
Thanks again for your writing and thoughts; like I said above, I haven't found much else of quality on the topic.
>Another imperfect solution would be to ... ugh...generate exports in Excel format rather than CSV. I know, I know, but it does solve the problem.
Or you could just use the ISO standard .xlsx, which is a widely supported format that is not Excel specific but has first class support in Excel.
>Another imperfect solution would be to ... ugh...generate exports in Excel format rather than CSV. I know, I know, but it does solve the problem.
That's honestly a good solution and the end users prefer it anyway.
I’ve almost always found the simple way around Excel users not knowing how to safely use CSV files is to just give the file another extension: I prefer .txt or .dat
Then, the user doesn’t have Excel as the default program for opening the file and has to jump through a couple safety hoops
>Then, the user doesn’t have Excel as the default program for opening the file and has to jump through a couple safety hoops
Nah, they just quickly learn to rename it .csv or .xls and excel will open it
If your customers and employees are using Excel then stop going against the grain with your niche software developer focused formats that need a lot of explanations.
I need to interface with a lot of non-technical people who exclusively use Excel. I give them .xlsx files. It's just as easy to export .xlsx as it is to export .CSV and my customers are happy.
How is .csv a niche dev-focused format? Our customers use our exports for a mix of purposes, some of them involving spreadsheet clients (not just excel) and some of them integrating with their own data pipelines. Csv conveniently works with these use cases across the board, without explanation, and is inconveniently saddled with these legacy security flaws in Excel (and probably other clients).
If xlsx works for all your use cases that's great, a much better solution that trying to sidestep these issues by lightly modifying the data. It's not an option for us, and (I'd imagine) a large contingent of export tools which can't make assumptions about downstream usage.
Someone filed a bug report on a project I work on, saying that it was a security vulnerability that we don't prefix cell values with a single quote (') when the cell content contains certain values like an equal sign (=). They said this can cause Excel to evaluate the content and potentially run unsafe code.
I responded that this was Excel's problem, not ours, and that nobody would assign a CVE to our product for such a "vulnerability". How naive I was! They forwarded me several such CVEs assigned to products that create CSVs that are "unsafe" for Excel.
Terrible precedent. Ridiculous security theater.
There are a lot of these sorts of bug reports running around, to the point that Google's bug bounty program has classified them as invalid: https://bughunters.google.com/learn/invalid-reports/google-p...
I agree with the characterization ("security theater") of these bug reports. The problem is that the intentions of these reports don't make the potential risk less real, depending on the setting, and I worry that the "You're just looking for attention" reaction (a very fair one!) leads to a concerning downplaying of this issue across the web.
As a library author, I agree this very well may not be something that needs to be addressed. But as someone working in a company responsible for customers, employees, and their sensitive information, disregarding this issue disregards the reality of the tools these people will invariably use, downstream of software we _are_ responsible for. Aiming to make this downstream activity as safe as possible seems like a worthy goal.
The next version of CVSS needs to add a metric for these kind of bullshit non-vulnerabilities so that we can ignore them at source.
I didn't know about the formula injection, I just knew that Excel and Sheets mangle my dates every time and it drives me bonkers. Why is that the default? It makes no sense.
The best part about csv, anyone can write a parser in 30 minutes meaning that I can take data from the early '90s and import it into a modern web service.
The worst part about CSV: anyone can write a parser in about 30 minutes, meaning that it's very easy to get incorrect implementations, incorrect data, and other strange undefined behaviors. But to be clear, JSON and YAML also have issues with everyone trying to reinvent the wheel constantly. XML is rather ugly, but it seems to be the most resilient.
Until you find someone abusing XSD schemas, or someone designing a "dynamically typed" XML... or sneaking extra data into comments - which happened to me way more often than it should.
You know what grinds my gears about using XSD for message definitions? Namespaces. Namespaces are a good idea and were done well in XML, as far as I can see, but with XSD you run into this [problem][1]:
Namespaces are used to qualify tags and attributes in XML elements. But they're also used by XSD to qualify the names of types defined in the schema. A sequence element's type is indicated by the value of its "type" attribute. The attribute value is a string that is the namespace-qualified name of the type.
So, if you want to change the alias of an XML namespace in an XSD schema, you can't just use your XML library's facilities for namespace management. You also have to go find the "type" attributes (but not all of the "type" attributes), parse their values, and do the corresponding alias change in the type name.
Don't use a string for a thing that is not a string! I guess in XML attributes you have no choice. XAML improved on the situation a bit.
[1]: https://github.com/dgoffredo/stag/tree/master/src/stag/xsd-u...
My condolences. Any open standard runs the risk of this happening. It's not a problem I think we'll ever solve.
In principle, if you make your standard extensible enough, people should stop sneaking data into comments or strings.
... Which makes the GP's problem so much more amusing. XML was the last place I'd expect to see it.
That's assuming they know how to use it properly.
Rest has this same issue.
I've seen this when trying to integrate with 3rd party apis.
Status Code 200 Body: Sorry bro, no data.
Even then, this is subject to debate. Should a 404 only be used when the endpoint doesn't exist? When we have no data to return? Etc.
I think that default REST is a bit problematic, as it conflates transport-protocol-level errors with application-logic ones.
At least unless you use application/problem+json or application/problem+xml MIME types but those are still just in draft stage
https://datatracker.ietf.org/doc/html/rfc9457#name-the-probl...
For some side projects, I would like to use XML, but cannot read the spec, as that costs money.
So I cannot trust XML in depth, and depend on using a library that bought the spec and hopefully adheres to it.
I also love CSV for its simplicity. A key part of that love is that it comes from the perspective of me as a programmer.
Many of the criticisms of CSV I'm reading here boil down to something like: CSV has no authoritative standard, and everyone implements it differently, which makes it bad as a data interchange format.
I agree with those criticisms when I imagine them from the perspective of a user who is not also a programmer. If this user exports a CSV from one program, and then tries to load the CSV into a different program, but it fails, then what good is CSV to them?
But from the perspective of a programmer, CSV is great. If a client gives me data to load into some app I'm building for them, then I am very happy when it is in a CSV format, because I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file.
Parsing CSV is quick and fun if you only care about parsing one specific file. And that's the key: It's so quick and fun, that it enables you to just parse anew each time you have to deal with some CSV file. It just doesn't take very long to look at the file, write a row-processing loop, and debug it against the file.
The beauty of CSV isn't that it's easy to write a General CSV Parser that parses every CSV file in the wild, but rather that it's easy to write specific CSV parsers on the spot.
Going back to our non-programmer user's problem, and revisiting it as a programmer, the situation is now different. If I, a programmer, export a CSV file from one program, and it fails to import into some other program, then as long as I have an example of the CSV format the importing program wants, I can quickly write a translator program to convert between the formats.
There's something so appealing to me about simple-to-parse-by-hand data formats. They are very empowering to a programmer.
Totally agree that its biggest strength is how approachable it is for quick, ad hoc tooling. Need to convert formats? Join two datasets? Normalize a weird export? CSV gives you just enough structure to work with and not so much that it gets in your way.
> I know I can quickly write a parser, not by reading some spec, but by looking at the actual CSV file
This is fine if you can hand-check all the data, or if you are okay if two offsetting errors happen to corrupt a portion of the data without affecting all of it.
Also, I find it odd that you call it "easy" to write custom code to parse CSV files and translate between CSV formats. If somebody gives you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy." When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.
Like, I get that a farmer a couple hundred years ago would describe plowing a field with a horse as "easy," but given the emergence of alternatives, you wouldn't use the word in that context anymore.
> If somebody give you a JSON file that isn't valid JSON, you tell them it isn't valid, and they say "oh, sorry" and give you a new one. That's the standard for "easy."
But it isn't that reliably easy with JSON. Sometimes I have clients give me data that I just have to work with, as-is. Maybe it was invalid JSON spat out by some programmer or tool long ago. Maybe it's just from a different department than my contact, which might delay things for days before the bureaucracy gets me a (hopefully) valid JSON.
I consider CSV's level of "easy" more reliable.
And even valid JSON can be less easy. I've had experiences where writing the high-level parsing for some JSON file, in terms of a JSON library, was less easy and more time-consuming than writing a custom CSV parser.
Subjectively, I think programming a CSV parser from basic programming primitives is just more fun and appealing than programming in terms of a JSON library or XML library. And I find the CSV code is often simpler and quicker to write.
> When there are many and diverse data formats that meet that standard, it seems perverse to use the word "easy" to talk about empirically discovering the quirks in various undocumented dialects and writing custom logic to accommodate them.
But the premise of CSV is so simple, that there are only four quirks to empirically discover: cell delimiter, row delimiter, quote, escaped-quote.
I think it's "easy" to peek at the file and say, "Oh, they use semicolon cell delimiters."
And it's likewise "easy" to write the "custom logic", which is about as simple as parsing something directly from a text stream gets. I typically have to stop and think a minute about the quoting, but it's not that bad.
If a programmer is practiced at parsing from a text stream (a powerful, general skill that is worth exercising), then I think it is reasonable to think they might find parsing CSV by hand to be easier and quicker than parsing JSON (etc.) with a library.
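As a concrete illustration, here is the sort of on-the-spot parser being described, with the four quirks (cell delimiter, row delimiter, quote, doubled-quote escape) hard-coded for one hypothetical file rather than written as a general library:

```python
def parse(text: str, delim=",", quote='"'):
    rows, row, field, in_quotes, i = [], [], [], False, 0
    while i < len(text):
        ch = text[i]
        if in_quotes:
            if ch == quote and i + 1 < len(text) and text[i + 1] == quote:
                field.append(quote)          # doubled quote inside a quoted field
                i += 1
            elif ch == quote:
                in_quotes = False
            else:
                field.append(ch)             # delimiters/newlines are literal here
        elif ch == quote:
            in_quotes = True
        elif ch == delim:
            row.append("".join(field)); field = []
        elif ch == "\n":
            row.append("".join(field)); rows.append(row); row, field = [], []
        elif ch != "\r":                     # tolerate CRLF row endings
            field.append(ch)
        i += 1
    if field or row:
        row.append("".join(field)); rows.append(row)
    return rows

assert parse('a,"b,1"\nc,"say ""hi"""\n') == [["a", "b,1"], ["c", 'say "hi"']]
```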
I like CSV for the same reasons I like INI files. It's simple, text based, and there's no typing encoded in the format, it's just strings. You don't need a library.
They're not without their drawbacks, like no official standards etc, but they do their job well.
I will be bookmarking this like I have the ini critique of toml: https://github.com/madmurphy/libconfini/wiki/An-INI-critique...
I think the first line of the toml critique applies to CSV: it's a federation of dialects.
Similarly, I once loved schemaless data stores. They are so much simpler!
Until I worked quite a bit with them and realized that there's always a schema in the data, otherwise it's just random noise. The question is who maintains the schema, you or a DBMS.
Re. formats -- the usefulness comes from features (like format enforcement). E.g. you may skip .ini altogether and just go with lines in a text file, but somewhere you still need to convert those lines to your data; there's no way around it. The question is who's going to do that (and report sane error messages).
> It's simple
My experience has indicated the exact opposite. CSVs are the only "structured" format nobody can claim to parse 100% (ok probably not true thinking about html etc, just take this as hyperbole.) Just use a well-specified format and save your brain-cells.
Occasionally, we must work with people who can only export to csv. This does not imply csv is a reasonable way to represent data compared to other options.
CSV works because CSV is understood by non technical people who have to deal with some amount of technicality. CSV is the friendship bridge that prevents technical and non technical people from going to war.
I can tell an MBA guy to upload a CSV file and I'll take care of it. Imagine I tell him I need everything in a PARQUET file!!! I'm no longer a team player.
Indeed, my main use is that most financial services will output your records in CSV, although I mostly open that in Excel, which sometimes gets a bit confused.
This is incorrect. Everyone uses Excel, not CSV. There are billions of people on this planet who know what to do with an .xlsx file.
Do the same with a .CSV file and you'll have to teach those people how to use the .CSV importer in Excel and also how to set up the data types for each column etc. It's a non trivial problem that forces you down to a few million people.
.CSV is a niche format for inexperienced software developers.
"Friendship bridge" is the perfect phrase
This is so relatable to all data eng people from SWE background!
Thanks
Among the shit I have seen in CSV: no " around strings, including those with a return char in them; innovative separators, date and number formats; no escaping of " within strings; extra rows added by the reporting tool used to export to CSV; etc.
I wish this was a joke. I'm always trying to convince data scientists with a foot in the open source world that their life will be so much better if they use parquet or Stata or Excel or any other kind of file but CSV.
On top of all the problems people mention here involving the precise definition of the format and quoting, it's outright shocking how long it takes to parse ASCII numbers into floating point. One thing that stuck with me from grad school was that you could do a huge number of FLOPS on a matrix in the time it would take to serialize and deserialize it to ASCII.
What advantages does excel give you over CSV?
What isn't fun about CSV is quickly written parsers and serializers repeatedly making the common mistake of not handling, or badly handling, quoting.
For a long time I was very wary of CSV, until I learnt Python and started using its excellent csv standard library module.
That's true; in recent years it's been less of a disaster, with lots of good CSV libraries for various languages. In the 90s CSV was a constant footgun, perhaps that's why they went crazy and came up with XML.
Even widely used libraries that you might expect get it right, don't. (Like Spark, which uses Java style backslash escaping)
Why not Pandas, since you're working with tabular data anyway?
I've recently been developing a raspberry pi based solution which works with telemetry logs. First implementation used an SQLite database (with WAL log) – only to find it corrupted after just couple of days of extensive power on/off cycles.
I've since started looking at parquet files – which turned out to not be friendly to append-only operations. I've ended up implementing writing events into ipc files which then periodically get "flushed" into the parquet files. It works and it's efficient – but man is it non-trivial to implement properly!
My point here is: for a regular developer – CSV (or jsonl) is still the king.
> First implementation used an SQLite database (with WAL log) – only to find it corrupted after just couple of days of extensive power on/off cycles.
Did you try setting `PRAGMA synchronous=FULL` on your connection? This forces fsync() after writes.
That should be all that's required if you're using an NVMe SSD.
But I believe most microSD cards do not even respect fsync() calls properly and so there's technically no way to handle power offs safely, regardless of what software you use.
I use SanDisk High Endurance SD cards because I believe (but have not fully tested) that they handle fsync() properly. But I think you have to buy "industrial" SD cards to get real power fail protection.
There's definitely a place for it. I ran into the same problem with a battery powered event logger. Basically alternate between sleep-until-event and sample-until-event-over.
SQLite was fine until the realities of that environment hit.
0) I need to save the most data over time, and my power budget is unpredictable due to environmentals.
1) When should I commit? SQLite commit-per-insert slows down, impacts battery life, impacts sample rate. Practically you could get away with batching all data for a small period.
2) SQLite is slow to repair databases. A partially written file would often take longer to repair than we had battery to run.
CSV based format filled that niche. First column was line-column count to support firmware upgrades. Last column is line-checksum. Another column indicating if this line was the last for an event. Parser skips corrupted lines/entries.
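A sketch of a record format along those lines (the column layout and CRC choice are mine for illustration, and fields are assumed to contain no commas):

```python
import zlib

def encode_line(fields, last_of_event: bool) -> str:
    # field count first, an end-of-event flag, the data, then a per-line CRC
    body = ",".join([str(len(fields)), "1" if last_of_event else "0", *fields])
    return f"{body},{zlib.crc32(body.encode()):08x}\n"

def decode_line(line: str):
    body, _, checksum = line.rstrip("\n").rpartition(",")
    if f"{zlib.crc32(body.encode()):08x}" != checksum:
        return None                          # corrupted line: skip it
    parts = body.split(",")
    if len(parts) < 2 or not parts[0].isdigit() or int(parts[0]) != len(parts) - 2:
        return None                          # truncated or merged line
    return parts[2:], parts[1] == "1"

good = encode_line(["2024-01-01T00:00:00", "3.3V"], last_of_event=True)
assert decode_line(good) == (["2024-01-01T00:00:00", "3.3V"], True)
assert decode_line(good[:-10] + "\n") is None     # a torn write gets skipped
```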
If sqlite ends up corrupted, why wouldn't a CSV? What happens if the system dies partway through a write?
> I've since started looking at parquet files – which turned out to not be friendly to append-only operations. I've ended up implementing writing events into ipc files which then periodically get "flushed" into the parquet files. It works and it's efficient – but man is it non-trivial to implement properly!
I think the industry standard for supporting this is something like iceberg or delta, it's not very lightweight, but if you're doing anything non-trivial, it's the next logical move.
The argument against JSON isn't very compelling. Adding a name to every field as they do in their strawman example isn't necessary.
Compare a plain CSV table to the directly-equivalent JSON: an array of arrays, one inner array per row. The JSON version is only marginally bigger (just a few brackets), but those brackets represent the ability to be either simple or complex. This matters because you wind up with terrible ad-hoc nesting in CSV, ranging from entries using query-string syntax to some entirely custom arrangement. And in these cases, JSON's objects are WAY better.
Because CSV is so simple, it's common for people to avoid using a parsing/encoding library. Over the years, I've run into this particular kind of issue a bunch.
JSON parsers will not only output the expected values every time, but your language likely uses one of the super-efficient SIMD-based parsers under the surface (probably faster than whatever you are doing with your custom CSV parser).
Another point is standardization. Does that .csv file use commas, spaces, semicolons, pipes, etc.? Does it use CR, LF, or CRLF? Does it allow escaped quotes? Does it allow quotes to escape commas? Is it UTF-8, UCS-2, or something different? JSON doesn't have these issues because they are all laid out in the spec.
JSON is typed. Sure, it's not a LOT of types, but 6 types is better than none.
While JSON isn't perfect (I'd love to see an official updated spec with some additional features), it's generally better than CSV in my experience.
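For illustration, the two shapes being compared are roughly these (my own toy rows, not the example from the comment):

import json

rows = [["name", "age", "city"],
        ["Ada", 36, "London"],
        ["Grace", 45, "Arlington"]]

print("\n".join(",".join(str(c) for c in r) for r in rows))   # the CSV shape
print(json.dumps(rows))   # the directly-equivalent JSON: an array of arrays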
> the directly-equivalent JSON
I think it's a big stretch to use that JSON for comparison. In practice, one is much more likely to see every object spelling out every field name.
> Because CSV is so simple, it's common for them to avoid using a parsing/encoding library.
A bit unfair to compare CSV without a parser library to JSON with one.
The flexibility of JSON is a downside when you just want to stream large volumes of row-oriented tabular data
> JSON parsers will not only output the expected values every time
Unless you need appendability, but then you should probably just use NDJSON/JSONL for a lot of cases.
They're completely skipping over the complications of header rows and front matter.
"8. Reverse CSV is still valid CSV" is not true if there are header rows for instance.
But really, whether or not CSV is a good format or not comes down to how much control you have over the input you'll be reading. If you have to deal with random CSV from "in the wild", it's pretty rough. If you have some sort of supplier agreement with someone that's providing the data, or you're always parsing data from the same source, it's pretty fine.
I am annoyed that comma won out as the separator. Tab would have been a massively better choice. Especially for those of us who have discovered and embraced elastic tabstops. Any slightly large CSV is unreadable and uneditable because you can't easily see where the commas are, but with tabs and elastic tabstops, the whole thing is displayed as a nice table.
(That is, of course, assuming the file doesn't contain newlines or other tabs inside of fields. The format should use \t \n etc for those. What a missed opportunity.)
I wrote a web scraper for some county government data and went for tabs as well. It's nice how the columns lined up in my editor (some of these files had hundreds of thousands of lines).
We have dedicated field separator characters :-/
And all kinds of other weirdness, right in ascii. Vertical tabs, LOL. Put those in filenames on someone else's computer if you want to fuck with them. Linux and its common file systems are terrifyingly permissive in the character set they allow for file names.
Nobody uses any of that stuff, though.
CSV has multiple different separators. E.g. Excel defaults to different separators based on locale: the Czech locale uses a comma in numbers instead of a dot, so CSV uses a semicolon as the default separator.
At various points in my career, I've had to oversee people creating data export features for research-focused apps. Eventually, I instituted a very simple rule:
As part of code review, the developer of the feature must be able to roundtrip export -> import a realistic test dataset using the same program and workflow that they expect a consumer of the data to use. They have up to one business day to accomplish this task, and are allowed to ask an end user for help. If they don't meet that goal, the PR is sent back to the developer.
What's fascinating about the exercise is that I've bounced as many "clever" hand-rolled CSV exporters (due to edge cases) as other more advanced file formats (due to total incompatibility with every COTS consuming program). All without having to say a word of judgment.
Data export is often a task anchored by humans at one end. Sometimes those humans can work with a better alternative, and it's always worth asking!
As someone who likes modern formats like parquet, when in doubt, I end up using CSV or JSONL (newline-delimited JSON). Mainly because they are plain-text (fast to find things with just `grep`) and can be streamed.
Most features listed in the document are also shared by JSONL, which is my favourite format. It compresses really well with gzip or zstd. Compression removes some plain-text advantages, but ripgrep can search compressed files too, and otherwise you can always decompress on the fly and pipe into the usual tools.
Another advantage of JSONL is that it's easier to chunk into smaller files. Too bad xz/lzma isn't supported in these formats; I often get pretty big improvements in compression ratio. It's slower, but it can be parallelized too.
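One reason the combination works so well: compressed JSONL can still be consumed one record at a time. A minimal Python sketch (the file name and the "status" field are made up):

import gzip
import json

with gzip.open("events.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:                       # decompressed lazily, line by line
        record = json.loads(line)
        if record.get("status") == "error":
            print(record)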
You can cat Parquet and other formats into grep just as easily.
I switched to JSONL over a decade ago and I would recommend everyone else to also have switched then.
This whole thread is an uninformed rehash of bad ideas.
Essential CSV shell tools:
csvtk: https://bioinf.shenwei.me/csvtk/
gawk: https://www.gnu.org/software/gawk/manual/html_node/Comma-Sep...
awk: https://github.com/onetrueawk/awk?tab=readme-ov-file#csv
Also VisiData is an excellent TUI spreadsheet.
Forgive me for promoting this tool that I wrote:
csvquote: https://github.com/dbro/csvquote
Especially for use with existing shell text processing tools, eg. cut, sort, wc, etc.
I would add xan to this list: https://github.com/medialab/xan
But of course, I am partial ;)
aaand xsv : https://github.com/BurntSushi/xsv
CSV still quietly powers the majority of the world’s "data plumbing."
At any medium+ sized company, you’ll find huge amounts of CSVs being passed around, either stitched into ETL pipelines or sent manually between teams/departments.
It’s just so damn adaptable and easy to understand.
> It's just so damn adaptable
Like a rapidly mutating virus, yes.
> and easy to understand.
Gotta disagree there.
For example, one of the CSVs my company shovels around is our Azure billing data. There are several columns where I have absolutely no idea what the data in them is. There are several columns we discovered are essentially nullable¹ The Hard Way, when we got a bill that included, e.g., a charge that Azure apparently doesn't know what day it occurred on. (Or almost anything else about it.)
(If this format is documented anywhere, well, I haven't found the docs.)
Values like "1/1/25" in a "date" column. I mean, I did say it was an Azure-generated CSV, so obviously the bar wasn't exactly high, but then it never is, because anyone wanting to build something with some modicum of reliability, or discoverability, is sending data in some higher-level format, like JSON or Protobuf or almost literally anything but CSV.
If I can never see the format "JSON-in-CSV-(but-we-fucked-up-the-CSV)" ever again, that would spark joy.
(¹after parsing, as CSV obviously lacks "null"; usually, "" is a serialized null.)
Insurance. One of the core pillars of insurance tech is the CSV format. You'll never escape it.
I've recently written a library at work to run visitors on data models bound to data sets. One of these visitors is a CSV serializer that dumps a collection as a CSV document.
I've just checked, and strings are escaped using the same mechanism as for JSON, with backslashes. I should've double-checked against RFC 4180, but thankfully that mechanism isn't currently triggered anywhere for CSV (it's used for log export, and none of that data hits the code path). I've also checked the code from other teams, and it's just handwritten C++ stream statements inside a loop that doesn't even try to escape data. It also happens to be fine for the same reason (log export).
I've also written serializers for JSON, BSON and YAML and they actually output spec-compliant documents, because there's only one spec to pay attention to. CSV isn't a specification, it's a bunch of loosely-related formats that look similar at a glance. There's a reason why fleshed-out CSV parsers usually have a ton of knobs to deal with all the dialects out there (and I've almost added my own by accident), that's simply not a thing for properly specified file formats.
The joy of CSV is that everyone knows roughly what you mean, and the details can be communicated succinctly.
The python3 csv module basically does the job.
I work as a data engineer in the financial services industry, and I am still amazed that CSV remains the preferred delivery format for many of our customers. We're talking datasets that cost hundreds of thousands of dollars to subscribe to.
"You have a REST API? Parquet format available? Delivery via S3? Databricks, you say? No thanks, please send us daily files in zipped CSV format on FTP."
> REST API
Requires a programmer
> Parquet format
Requires a data engineer
> S3
Requires AWS credentials (api access token and secret key? iam user console login? sso?), AWS SDK, manual text file configuration, custom tooling, etc. I guess with Cyberduck it's easier, but still...
> Databricks
I've never used it but I'm gonna say it's just as proprietary as AWS/S3 but worse.
Anybody with Windows XP can download, extract, and view a zipped CSV file over FTP, with just what comes with Windows. It's familiar, user-friendly, simple to use, portable to any system, compatible with any program. As an almost-normal human being, this is what I want out of computers. Yes the data you have is valuable; why does that mean it should be a pain in the ass?
Yes because users can read the data themselves and don't need a programmer.
Financial users live in Excel. If you stick to one locale (unfortunately it will have to be US) then you are OKish.
This is particularly funny because I just received a ticket saying that the CSV import in our product doesn't work. I asked for the CSV, and it uses a semicolon as a delimiter. That's just what their Excel produced, apparently. I'm taking their word for it because... Excel.
To me, CSV is one of the best examples of why Postel's Law is scary. Being a liberal recipient means your work never ends because senders will always find fun new ideas for interpreting the format creatively and keeping you on your toes.
Of course, because there are locales which use a comma as the decimal separator, so CSV in Excel then defaults to a semicolon.
Another bit of Microsoft BS: they should default to the English locale for CSV and do the translation in the background, and let the user choose if they want to save with a different separator. Excel in every part of the world should produce the same CSV by default. Bunch of idiots.
There is a lot not to like about CSV, for all the reasons given here. The only real positive is that you can easily create, read and edit CSV in an editor.
Personally I think we missed a trick by not using the ASCII US and RS characters:
Columns separated by \u001F (ASCII unit separator).
Rows separated by \u001E (ASCII record separator).
No escaping needed.
More about this at:
https://successfulsoftware.net/2022/04/30/why-isnt-there-a-d...
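A minimal sketch of what reading and writing such a format could look like in Python, assuming the data itself never contains the two control characters:

US, RS = "\x1f", "\x1e"   # ASCII unit separator, record separator

def dumps(rows):
    return RS.join(US.join(fields) for fields in rows)

def loads(blob):
    return [record.split(US) for record in blob.split(RS) if record]

data = [["name", "notes"], ["Ada", 'said "hi", twice\non one line']]
assert loads(dumps(data)) == data   # commas, quotes and newlines need no escaping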
Good idea, but probably a non-starter due to no keyboard keys for those characters. Even | would've been a better character to use since it almost never appears in common data.
Welp, now I know my weekend project.
In the abstract, CSV is great. In reality, it is a nightmare not because of CSV, but because of all the legacy tools that produce it in slightly different ways (different character encodings mostly - Excel still produces some variant of latin1, some tools drop a BOM in your UTF-8, etc.).
Unless you control the producer of the data you are stuck trying to infer the character encoding and transcoding to your destination, and there's no foolproof way of doing that.
I greatly prefer TSV over CSV. https://en.wikipedia.org/wiki/Tab-separated_values
100% agree. TSV is under-rated. Tabs don't naturally occur in data nearly as often as commas so tabs are a great delimiter. Copy paste into Excel also works much better with tabs.
Code editors may convert tabs to spaces but are you really editing and saving TSV data files in your code editor?
TSV's big advantage is that, as commonly implemented, the separators are escaped, not quoted. This means that a literal newline (ASCII 0x0A) is always a record separator and a literal tab (ASCII 0x09) is always a field separator. This is the format many databases -- including Postgres -- use for text export by default.
There are some notes I put together about TSV a few years ago that expand on these points: https://github.com/solidsnack/tsv?tab=readme-ov-file#motivat...
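A small sketch of the escaped-not-quoted convention (loosely modelled on the Postgres text COPY format, not a drop-in implementation of it):

def escape(field):
    return (field.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n"))

def unescape(field):
    mapping = {"t": "\t", "n": "\n", "\\": "\\"}
    out, i = [], 0
    while i < len(field):
        if field[i] == "\\" and i + 1 < len(field):
            out.append(mapping.get(field[i + 1], field[i + 1]))
            i += 2
        else:
            out.append(field[i])
            i += 1
    return "".join(out)

row = ["id", "a note\twith a tab\nand a newline"]
line = "\t".join(escape(f) for f in row)              # any literal tab is now a separator
assert [unescape(f) for f in line.split("\t")] == row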
The problem with using TSV is different user configuration. For example, if I use vim then Tab might indeed be a '\t' character, but in TextEdit on Mac it might be something different, so editing the file in different programs can yield different formatting. A ',' on the other hand is a universal character, present on all keyboards and formatted in a single way.
CSV is a pseudo-standard anyway, IMO the delimiter should be a configurable option (like it is in Unix cut and those kinds of tools).
Functionally the same. I'd prefer CSV if my content was likely to have whitespace in it and I didn't want a billion quotes. I'd prefer TSV if my content was unlikely to have whitespace and more likely to contain commas.
The problem with TSV is what are you going to do about quotes. Some fields might contain them [1] or they might be needed to store fields with tabs inside them.
Because of this in order to read a plain simple TSV (fields separated by tabs, nothing more) with the Python csv module [2] you need to set the quote character to an improbable value, say € (using the euro sign because HN won't let me use U+1F40D), or just parse it by hand, e.g. row.split('\t').
[1]: https://github.com/wireservice/csvkit/issues/1194
[2]: https://docs.python.org/3/library/csv.html
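For what it's worth, the stdlib module can be told to do exactly that - split on tabs and nothing more - by switching quoting off (a sketch; the file name is made up):

import csv

with open("data.tsv", newline="") as f:
    reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        print(row)   # quote characters pass through as ordinary data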
Agreed, much easier to work with, especially if you can guarantee no embedded tabs or newlines. Otherwise you end up with backslash escaping, but that's still usually easier than quotes.
TSV looks incredibly ugly when opened in a text editor unless all values are 7 characters or less.
Thanks for your input.
How much easier would all of this be if whoever did CSV first had done the equivalent of "man ascii". There are all these wonderful codes there like FS, GS, RS, US that could have avoided all the hassle that quoting has brought generations of programmers and data users.
I think for "untyped" files with records, the ASCII file, group and record separators (hex 1C, 1D and 1E) work nicely. The only constraint is that the content cannot contain these characters, but I found that that is generally no problem in practice. Also, the file is less human-readable with a simple text editor.
For other use cases I would use newline-separated JSON. It has most of the benefits listed in the article, except the uncompressed file size.
I agree that JSONL is the spiritual successor of CSV with most of the benefits and almost none of the drawbacks.
It has a downside though: wherever JSON itself is used, it tends to be a few kilobytes at least (from an API response, for example). If you collect those in a JSONL file the lines tend to get verrrry long and difficult to edit. CSV files are more compact.
JSONL files are a lot easier to work with though. Less headaches.
Honestly yes. If text editors would have supported these codes from the start, we might not even have XML, JSON or similar today. If these codes weren't "binary" and all scary, we would live in much different world.
I wonder how much we have been hindered ourselves by reinventing plain text human-readable formats over the years. CSV -> XML -> JSON -> YAML and that's just the top-level lineage, not counting all the branches everywhere out from these. And the unix folks will be able to name plenty of formats predating all of this.
I'm not really sure why "Excel hates CSV". I import into Excel all the time. I'm sure the functionality could be expanded, but it seems to work fine. The bit of the process I would like improved is nothing to do with CSV - it's that the exporting programs sometimes rearrange the order of fields, and you have to accommodate that in Excel after the import. But since you can have named columns in Excel (make the data in to a table), it's not a big deal.
One problem is that Excel uses locale settings for parsing CSV files (and, to be fair, other text files). So if you're in e.g. Europe and you've configured Excel to use commas as decimal separators, Excel imports numbers with decimals (with points as decimal separator) as text. Or it thinks the point is a thousands separator. I forgot exactly which one of those incorrect options it chooses.
I don't know what they were thinking, using a UI setting for parsing an interchange format.
There's a way around, IIRC, with the "From text / csv" command, but that loses a lot of the convenience of double-clicking a CSV file in Explorer or wherever to open it in Excel.
It used to silently transform data on import. It used to silently drop columns.
That's it, but it's really bad.
Excel is halfway decent if you do the 'import' but not if you just doubleclick on them. It seems to have been programmed to intentionally do stupid stuff with them if you just doubleclick on them.
I remember Excel in the past not properly handling UTF-8 encoded text in a CSV. It would treat it as raw ASCII (or possibly code page 1252). So if you opened and saved a CSV, it would corrupt any Unicode text in the file. It's possible this has been fixed in newer versions; I haven't tried in a while.
I have repeatedly seen people get their spreadsheets altered by Excel, and in general a lot of trouble due to localisation. Sometimes these changes can be subtle and hard to spot until somebody tries to troubleshoot what went wrong down the line.
It works better if you click to "import the data" instead of just opening the CSV file with it, and if you then choose the right data types. But having to do this every time to make it work is really annoying, especially when you have a lot of columns, plus people can easily get confused by the data types. I have never seen that much confusion with, e.g., macOS's Numbers.
https://stackoverflow.com/questions/165042/stop-excel-from-a...
It's an ad hoc text format that is often abused and a last-chance format for interchange. While heuristics can frequently work at determining the structure, they can just as easily frequently fail. This is especially true when dealing with dates and times or other locale-specific formats. Then, people outright abuse it by embedding arrays or other such nonsense.
You can use CSV for interchange, but a DuckDB import script with the schema should accompany it.
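Something like this, say, using DuckDB's Python package (a sketch; the file, column names and types are all hypothetical):

import duckdb

con = duckdb.connect("customers.db")
con.execute("""
    CREATE TABLE customers AS
    SELECT * FROM read_csv('customers.csv',
        header = true,
        columns = {'customer_id': 'BIGINT',
                   'zip_code':    'VARCHAR',   -- keep zip codes as text
                   'signup_date': 'DATE'})
""")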
One thing that has changed the game with how I work with CSVs is ClickHouse. It is trivially easy to run a local database, import CSV files into a table, and run blazing-fast queries on it. If you leave the data there, ClickHouse will gradually optimize the compression. It's pretty magical stuff if you work in data science.
I feel the same way about elastic.
That being said, I noticed .parquet as an export format option on Shopify recently, and I'm hopeful more providers offer the choice.
Simon W's https://datasette.io/ is also excellent.
CSV on the Web (CSVW) is a W3C standard designed to enable the description of CSV files in a machine-readable way.[1]
"Use the CSV on the Web (CSVW) standard to add metadata to describe the contents and structure of comma-separated values (CSV) data files." — UK Government Digital Service[2][3]
[1] https://www.w3.org/TR/tabular-data-primer/
[2] https://www.gov.uk/government/publications/recommended-open-...
[3] https://csvw.org/
CSV is everywhere. I use manifold-csv[1] it’s amazing.
1. https://github.com/manifold-systems/manifold/tree/master/man...
I wrote my own CSV parser in C++. I wasn't sure what to do in some edge cases, e.g. when character 1 is space and character 2 is a quote. So I tried importing the edge case CSV into both MS Excel and Apple Numbers. They parsed it differently!
All hail TSV. Like CSV, but you're probably less likely to want tabs, than commas.
Also CSV can be queried : https://til.simonwillison.net/sqlite/one-line-csv-operations
CSV is so deceptively simple that people don't bother understanding it. I wasted countless hours working around services providing non-escaped data that off-the-shelf parsers could not parse.
I have written a new database system that will convert CSV, JSON, and XML files into relational tables.
One of the biggest challenges with CSV files is the lack of data types on the header line that could help determine the schema for the table.
For example a file containing customer data might have a column for a Zip Code. Do you make the column type a number or a string? The first thousand rows might have just 5 digit numbers (e.g. 90210) but suddenly get to rows with the expanded format (e.g. 12345-1234) which can't be stored in an integer column.
csv does not stop you from making the first line be column headers, with implied data types, you just have to comma separate them!
Something I support completely - previously https://news.ycombinator.com/item?id=35418933#35438029
If CSV is indeed so horrible - and I do not deny that there can be an improvement - how about the clever data people spec out a format that
Does not require a bizarre C++ RPC struct definition library _both_ to write and to read
Does not invent a clever number encoding scheme that requires native code to decode at any normal speed
Does not use a fancy compression algorithm (or several!) that you need - again - native libraries to decompress
Does not, basically, require you be using C++, Java or Python to be able to do any meaningful work with it
It is not that hard, really - but CSV is better (even though it's terrible) exactly because it does not have all of these clever dependency requirements for clever features piled onto it. I do understand the utility of RLE, number encoding etc. I do not, and will not, understand the utility of Thrift/Avro, zstandard and brotli and whatnot over standard deflate, and custom integer encoding which requires you download half of Apache Commons and libboost to decode. Yes, those help the 5% to 10% of the use cases where massive savings can be realised. It absolutely ruins the experience for the other 90 to 95.
But they also give Parquet and its ilk a very high barrier of entry.
The worst thing about CSV is Excel using localization to choose the delimiter: https://answers.microsoft.com/en-us/msoffice/forum/all/csv-f...
The second worst thing is that the escape character cannot be determined safely from the document itself.
For everyone complaining about CSV/TSV, there's a scripting language called R.
It makes working with CSV/TSV files super simple.
It's as easy as this:
# Import tidyverse after installing it with install.packages("tidyverse")
library(tidyverse)
# Import TSV
dataframe_tsv <- read_tsv("data/FileFullOfDataToBeRead.tsv")
# Import CSV
dataframe_csv <- read_csv("data/FileFullOfDataToBeRead.csv")
# Mangle your data with dplyr, regular expressions, search and replace, drop NA's, you name it.
<code to sanitize all your data>
Multiple libraries exist for R to move data around, change the names of entire columns, change values in every single row with regular expressions, drop any values that have no assigned value, it's the swiss army knife of data. There are also all sorts of things you can do with data in R, from mapping with GPS coordinates to complex scientific graphing with ggplot2 and others.
Here's an example for reading iButton temperature sensor data: https://github.com/hominidae/ibutton_tempsensors/
Notice that in the code you can do the following to skip leading lines by passing it as an argument: skip = 18
cf1h <- read_csv("data/Coldframe_01_High.csv", skip = 18)
Funny how the "specification holds in a tweet" yet manages to miss at least three things: 1) character encoding, 2) BOM or not, 3) header or no header.
Always UTF-8. Never a BOM. Always a header
I've found some use cases where CSV can be a good alternative to arrays for storage, search and retrieval. Storing and searching nested arrays in document databases tends to be complicated and require special queries (sometimes you don't want to create a separate collection/table when the arrays are short and 1D). Validating arrays is actually quite complicated; you have to impose limits not only on the number of elements in the array, but also on the type and size of elements within the array. Then it adds a ton of complexity if you need to pass around data because, at the end of the day, the transport protocol is either string or binary; so you need some way to indicate that something is an array if you serialize it to a string (hence why JSON exists).
Reminds me of how I built a simple query language which does not require quotation marks around strings, this means that you don't need to escape strings in user input anymore and it prevents a whole bunch of security vulnerabilities such as query injections. The only cost was to demand that each token in the query language be separated by a single space. Because if I type 2 spaces after an operator, then the second one will be treated as part of the string; meaning that the string begins with a space. If I see a quotation mark, it's just a normal quotation mark character which is part of the string; no need to escape. If you constrain user input based on its token position within a rigid query structure, you don't need special escape characters. It's amazing how much security has been sacrificed just to have programming languages which collapse space characters between tokens...
It's kind of crazy that we decided that quotation marks are OK to use as special characters within strings, but commas are totally out of bounds... That said, I think Tab Separated Values TSV are even more broadly applicable.
The simplicity is underappreciated because people don't realise how many dumb data engineers there are. I'm pretty sure most of them can't unpack an XML or JSON. People see a CSV and think they can probably do it themselves; with any other data format they think 'gee, better buy some software with the integration for this'.
Excel hates CSV only if you don't use the "From text / csv" function (under the data tab).
For whatever reason, it flawlessly manages to import most CSV data using that functionality. It is the only way I can reliably import data to excel with datestamps / formats.
Just drag/dropping a CSV file onto a spreadsheet, or "open with excel" sucks.
Even "From Text / CSV" sucks:
It inserts an extra row at the top for its pivot table, with entries "Column1, Column2, ...".
So if you export to CSV again, you now have 2 header rows.
So Excel can't roundtrip CSVs, and the more often you roundtrip, the more header rows you get.
You need to remember to manually delete the added header row each time, otherwise software you export back to can't read it.
Yea they seem to have added this about a year ago and it works pretty well, to be fair.
Now if they would just also allow pasting CSV data as “source” it would be great.
If this was really a love letter, it would have been in CSV format.
"CSV" should die. The linked article makes critical ommisions and is wrong about some points. Goes to show just how awful "CSV" is.
For one thing, it talks about needing only to quote commas and newlines... quotes are usually fine... until they are on either side of the value. Then you NEED to quote them as well.
Then there is the question about what exactly "text" is; with all the complications around Unicode, BOM markers, and LTR/RTL text.
CSV is bad. Furthermore it’s unnecessary. ASCII has field and record separator characters that were for this purpose.
That would be great if keyboards had keys for those characters and there was a common way to display them on a screen, but they don't and there isn't.
> 4. CSV is streamable
This is what keeps me coming back.
…ndjson is streamable, too…
Just don't write that love letter in French ... or any language that uses comma for decimals
The Italian version of Excel uses a custom CSV style with ; as a column separator. This breaks many applications that accept CSVs. It's super annoying.
You should look at its author's nationality ;)
The library wraps values like that in quotes.
>This is so simple you might even invent it yourself without knowing it already exists while learning how to program.
This is a double-edged sword. The "you might even invent it yourself" simplicity means that in practice lots of different people do end up just inventing their own version rather than standardizing on RFC 4180 or whatever when it comes to "quote values containing commas", values containing quotes, values containing newlines, etc. And the simplicity means these kinds of non-standard implementations can go completely undetected until a problematic value happens to be used. Sometimes added complexity that forces paying more attention to standards, and quickly surfaces a divergence from those standards, is helpful.
Just last week I was bitten by a customer’s CSV that failed due to Windows‘ invisible BOM character that sometimes occurs at the beginning of unicode text files. The first column‘s title is not „First Title“ then but „&zwnbsp;First Title“. Imagine how long it takes before you catch that invisible character.
Aside from that: yes, if CSV were an intentional, defined format, most of us would do something different here and there. But it is not; it is more of a convention that came upon us. CSV „happened“, so to say. No need to defend it more passionately than the fact that we walk on two legs. It could have been much worse, and it has surprising advantages over other things that were well thought out before we adopted them.
I wish the UTF8BOM was standardized. Encoding guessing usually works until it doesn't.
CSV is the bane of my existence. There is no reason to use it outside of legacy use-cases, when so many alternatives are not so brittle that they require endless defensive hacks to avoid erring as soon as exposed to the universe. CSV must die.
CSV isn't brittle, and I'm not sure what "hacks" you're referring to. If you or your parser just follow RFC 4180 (in particular: quote every field, and double quotes to escape them), that will get you 90%+ compatibility.
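In Python terms, that conservative approach is roughly this (a sketch: QUOTE_ALL quotes every field, and the writer doubles any embedded quotes by default):

import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator="\r\n")
writer.writerow(["id", 'a "quoted" note, with a comma', "line1\nline2"])
print(buf.getvalue())
# "id","a ""quoted"" note, with a comma","line1
# line2"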
CSV is awesome for front-end webapps needing to fetch A LOT of data from a server in order to display an information-dense rendering. For that use-case, one controls both sides so the usual serialization issues aren't a problem.
TSV > CSV
Way easier to parse
pipe (|) separated or gtfo!
sqlite3 gets it right.
There was/is CSVY [0] which attempted to put column style and separator information in a standard header. It is supported by R lang.
I also asked the W3C on their GitHub if there was any spec for CSV headers and they said there isn't [1]. Kind of defeats the point of the spec in my opinion.
0: https://github.com/leeper/csvy
1: https://github.com/w3c/csvw/issues/873
"the controversial ex-post RFC 4180"
I looked at the RFC. What is controversial about it?
Look at how it handles escaping of special characters and particularly new lines (RFC 4180 doesn’t guarantee that a new line is a new record) and how it’s written in 2005 yet still doesn’t handle unicode other than via a comment about ”other character sets”.
You mean aside from the fact it's ex-post ...
The fact that you can parse CSV in reverse is quite cool, but you can't necessarily use it for crash recovery (as suggested) because you can't be sure that the last thing written was a complete record.
Last field rather than last record. The first row will give you column count.
Working in clinical trial data processing I receive data in 1 of 3 formats: csv, sas datasets, image scans of pdf pages showing spreadsheets
Of these 3 options, SAS datasets are my preference, but I'll immediately convert to CSV or Excel. CSV is a close 2nd: once you confirm the quoting/separator conventions, it's very easy to parse. I understand why someone may find the CSV format disagreeable, but in my experience the alternatives can be so much worse that I don't worry too much about CSV files.
I love CSV for a number of reasons. Not the least of which is that it's super easy to write a program (code) in C to directly output all kinds of things to CSV. You can also write simple middleware to go from just about any database or just general "thing" to CSV, very easily. Then toss the CSV into Excel and do literally anything you want.
It’s sort of like, the computing dream when I was growing up.
+1 to ini files. I like that you can mess around with them yourself in Notepad. Wish there was a general outline/structure to those, though.
Items #6 to #9 sound like genuine trolling to me; item #8, reversing bytes because of course no other text encodings than ASCII exist, is particularly horrible.
The reversing-bytes part is encoding agnostic: you just feed the reversed bytes to the CSV parser, then re-reverse both the yielded rows and the cell bytes to get back the original byte order.
I have just been racking my brain parsing data from an ERP database to CSV and then from CSV back into the ERP database, using the programming language of the ERP system.
The first part, converting the data to CSV, works fine with the help of an AI coding assistant.
The reverse part, CSV back into the database, is getting challenging; even Claude Sonnet 3.7 is not able to escape newlines correctly.
I am now implementing the data format in JSON, which is much simpler.
Relevant discussion from a few years back
https://news.ycombinator.com/item?id=28221654
I think I understand the point being made, but all this reliance on text-based data means we require proper agreement on text encodings, etc. I don't think it's very useful for number-based data anyway, it's a massively bloated way to store float32s for instance and usually developers truncate the data losing about half of the precision in the process.
For numerical data, nothing beats packing floats into blobs.
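For example (a sketch; the file names are made up), the binary form is fixed-width, needs no parsing, and round-trips float64 exactly:

import numpy as np

xs = np.random.rand(100_000)

np.savetxt("xs.csv", xs)           # ASCII: much bigger on disk, slow to parse back
xs.tofile("xs.f64")                # raw float64 blob, native byte order

back = np.fromfile("xs.f64", dtype=np.float64)
assert np.array_equal(back, xs)    # bit-exact round trip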
I think binary formats have many advantages. Not only for numbers but other data as well, including data that contains text (to avoid needing escaping, etc; and to declare what character sets are being used if that is necessary), and other structures. (For some of my stuff I use a variant of DER, which adds a few new types such as key/value list type.)
I can't really understand the love for the format. Yes, it's simple, but it's also not defined in a common spec. Same story with Markdown: yes, GitHub tried to push for a spec, but it still feels more like a flavor. I mean, there is nothing wrong with not having a spec, but certain guarantees are not given. Will a document exported by X work with Y?
I love CSV when it's only me creating/using the CSV. It's a very useful spreadsheet/table interchange format.
But god help you if you have to accept CSVs from random people/places, or there's even minor corruption. Now you need an ELT pipeline and manual fix-ups. A real standard is way better for working with disparate groups.
Quick question while we’re on the topic of CSV files: is there a command-line tool you’d recommend for handling CSV files that are malformed, corrupted, or use unexpected encodings?
My experience with CSVs is mostly limited to personal projects, and I generally find the format very convenient. That said, I occasionally (about once a year) run into issues that are tricky to resolve.
CSV has caused me a lot of problems due to the weak type system. If I save a Dataframe to CSV and reload it, there is no guarantee that I'll end up with an identical dataframe.
I can depend on parquet. The only real disadvantages with parquet are that they aren't human-readable or mutable, but I can live with that since I can easily load and resave them.
I'm in on the "shit on microsoft for hard to use formats train" but as someone who did a LOT of .docx parsing - it turned into zen when I realized that I can just convert my docs into the easily parsed .html5 using something like pandoc.
This is a good blog post and Xan is a really neat terminal tool.
Xan looks great. Miller is another great cli tool for transforming data among csv, tsv, json and other formats.
https://miller.readthedocs.io/
Excel won't import ISO 8601 timestamps either, which is crazy these days where it's the universal standard, and there's no excuse to use anything else.
You have to replace the "T" separator with a space and also any trailing "Z" UTC suffix (and I think any other timezone/offset as well?) for Excel to be able to parse as a time/date.
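i.e. something like this when preparing timestamps for an Excel-bound CSV (a sketch; it assumes UTC and simply drops the zone marker):

from datetime import datetime, timezone

def excel_friendly(ts_iso):
    # "2024-05-01T13:45:30Z" -> "2024-05-01 13:45:30"
    dt = datetime.fromisoformat(ts_iso.replace("Z", "+00:00"))
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(excel_friendly("2024-05-01T13:45:30Z"))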
I don't get it: why in the world can't Excel just open the CSV, assume from the extension that it's COMMA-separated values, and do the rest? It does work slightly better when importing, but just a little.
This. I got burnt by the encoding and other issues with CSV in Excel back in the day, I've only used LibreOffice Calc (on Linux) for viewing / editing CSVs for many years now, it's almost always a trouble-free experience. Fortunately I don't deal much with CSVs that Excel-wielding non-devs also need to open these days - I assume that, for most folks, that's the source of most of their CSV woes.
I just wish Excel was a little less bad about copy-pasting CSVs as well. Every single time, without fail, it dumps them into a single column. Every single time I use "text to columns" it has insane defaults where it's fixed-width instead of delimited by, you know, commas. So I change that and finally it's fixed.
Then I go do it somewhere else and have to set it up all over again. Drives me nuts. How the default behavior isn't to just put them in the way you'd expect is mind-boggling.
Search and replace + text to columns after the fact works fine.
> CSV is dynamically typed
No, CSV is dependently typed. Way cooler ;)
I wrote something about this https://github.com/Ericson2314/baccumulation/blob/main/datab...
10. CSV doesn't need commas!
Use a different separator if you need to.
CSV is the Vim of formats. If you get a CSV from 1970 you can still load it.
I have to agree.
It was pretty straightforward (although tedious) to write custom CSV data exports in embedded C, with ZERO dependencies.
I know, I know, only old boomers care about removing pip from their code dev process, but, I'm an old boomer, so it was a great feature for me.
Straight out of libc I was able to dump data in real-time, that everyone on the latest malware OSes was able to import and analyze.
CSV is awesome!
Yeah, CSV is easy to export, because it's not really a file format, but more an idea. I'm not even sure there is such a thing as "invalid" CSV.
The following are all valid CSV, and they should all mean the same thing, depending on your point of view:
1) foo, bar, foobar
2) "foo", "bar", "foobar"
3) "foo", bar, foobar
4) foo; bar; "foobar"
5) foo<TAB>bar<TAB>"foobar"
6) foo<EOL>bar<EOL>foobar<EOL>
Have fun writing that parser!
I hate CSV (but not as much as XML).
Most reasonably large CSV files will have issues parsing on another system.
It makes me a bit worried to read this thread; I would've thought it's pretty common knowledge why CSV is horrible, and widely agreed upon. I also have a hard time taking anybody seriously who uses "specification" and "CSV" in the same sentence unironically.
I suspect it's 1) people who worked with legacy systems AND LIKED IT, or 2) people who never worked with legacy systems before and need to rediscover painful old lessons for themselves.
It feels like trying to convince someone why it's a bad idea to store the year as a CHAR(2) in 1999, unsuccessfully.
This might be the most passionate and well-argued defense of CSV I've read
So easy to get data in and out of an application, opens seamlessly in Excel or your favorite DB for further inspection. The only issue is the comma rather than a less-used separator like | that occasionally causes problems.
Any recommendations for CSV editors on OSX? I was just looking around for this today. The "Numbers" app is pretty awful and I couldn't find any superb substitutes, only ones that were just OK.
I’ve been using Easy CSV Editor. I especially like getting the min/max, unique, etc values in a given column.
CSV?
it's surely simple but..
- where is the metadata?
- how do you encode binary?
- how do you index?
- where are the relationships? different files?
I've said it before, CSV will still be used in 200 years. It's ugly, but it occupies an optimal niche between human readability, parsing simplicity, and universality.
With a nice ASIC parser to replace databases.
JSON, XML, YAML are tree describing languages while CSV defines a single table. This is why CSV still works for a lot of things (sure there is JSON lines format of course).
I just wish line breaks weren't allowed to be quoted. I would have preferred \n. Now I can't read line-by-line or stream line-by-line.
Using ascii 'US' Unit Separator and 'RS' Record Separator characters would be a far better implementation of a CSV file.
and of course you can do that if you wish, as many CSV libraries allow arbitrary separators and escapes (though they usually default to the "excel compatible" format)
but at least in my case, I would not like to use those characters because they are cumbersome to work with in a text editor. I like very much to be able to type out CSV columns and rows quickly, when I need to.
I always feel like CSV gets a bad rap. It definitely has problems if you get into corner cases but for many situations it's just fine.
People who talk about readability: if you store using JSONL (one JSON object per line), you can get your CSV by using the terminal command jq.
I wish CSV could have headers for meta data. And schemas would be awesome. And pipes as well to avoid the commas in strings problem.
CSV is too new and lacks a good standard quoting strategy. Better to stay with the boring old fixed-length column format :))
> Excel hates CSV
Does it though? Seems to be importing from and exporting to CSV just fine? Elaborate maybe.
9. Excel hates CSV: It clearly means CSV must be doing something right. <3<3<3
With quick-and-dirty bash stuff I've written the same CSV parser so many times that it lives in my head and I can write it from memory. No other format is like that. Trying to parse JSON without jq or a library is much more difficult.
I prefer Tab-Separated. Its problem though: No tabs allowed in your data.
I used to prefer csv. Then I started using parquet.
Never want to use sas7bdat again.
CSV isn't dynamically typed. Everything is just a string.
Shout out to BurntSushi's excellent xsv util
https://github.com/BurntSushi/xsv
Kudos for writing this, it's always worth flagging up the utility of a format that just is what it is, for the benefit of all. Commas can also create fun ambiguity, as that last sentence demonstrates. :P
CSV is lovely. It isn't trying to be cool or legendary. It works for the reasons the author proposes, but isn't trying to go further.
I work in a world of VERY low power devices, and CSV sometimes is all you need for a good time.
If it doesn't need to be complicated, it shouldn't be. There are always times when I think to myself CSV fits and that is what makes it a legend. Are those times when I want to parallelise or deal with gigs of data in one sitting. Nope. There are more complex formats for that. CSV has a place in my heart too.
Thanks for reminding me of the beauty of this legendary format... :)
Because if there is anything we love in data exchange formats its ambiguity.
CSV is the PHP of fileformats
I'll repeat what I say every time I talk about CSV: I have never encountered a customer who insisted on integrating via CSV who was capable of producing valid CSV. Anybody who can reliably produce valid CSV will send you something else if you ask for it.
> CSV is not a binary format, can be opened with any text editor and does not require any specialized program to be read. This means, by extension, that it can both be read and edited by humans directly, somehow.
This is why you should run screaming when someone says they have to integrate via CSV. It's because they want to do this.
Nobody is "pretending CSV is dead." It'll never die, because some people insist on sending hand-edited, unvalidated data files to your system and not checking for the outcome until mid-morning the next day when they notice that the text selling their product is garbled. Then they will frantically demand that you fix it in the middle of the day, and they will demand that your system be "smarter" about processing their syntactically invalid files.
Seriously. I've worked on systems that took CSV files. I inherited a system in which close to twenty "enhancement requests" had been accepted, implemented, and deployed to production that were requests to ignore and fix up different syntactical errors, because the engineer who owned it was naive enough to take the customer complaints at face value. For one customer, he wrote code that guessed at where to insert a quote to make an invalid line valid. (This turned out to be a popular request, so it was enabled for multiple customers.) For another customer, he added code that ignored quoting on newlines. Seriously, if we encountered a properly quoted newline, we were supposed to ignore the quoting, interpret it as the end of the line, and implicitly append however many commas were required to make the number of fields correct. Since he actually was using a CSV parsing library, he did all of this in code that would pre-process each line, parse the line using the library, look at the error message, attempt to fix up the line, GOTO 10. All of these steps were heavily branched based on the customer id.
The first thing I did when I inherited that work was make it clear to my boss how much time we were spending on CSV parsing bullshit because customers were sending us invalid files and acting like we were responsible, and he started looking at how much revenue we were making from different companies and sending them ultimatums. No surprise, the customers who insisted on sending CSVs were mostly small-time, and the ones who decided to end their contracts rather than get their shit together were the least lucrative of all.
> column-oriented data formats ... are not able to stream files row by row
I'll let this one speak for itself.
I love CSV