The reason that CSV and other typeable delimiters took root where ASCII’s separator characters fell by the wayside is very simple: for data that needed to be human-readable or editable, you used CSV (or tab, or caret, or anything else on a standard keyboard). If the data didn’t need to be readily interpretable by a human, then other, more efficient or pre-existing formats were used.
Still good to remind folks that CSV is not the only answer.
Seems like a text editor could have easily solved this problem by rendering the unit and record separators as visible, printable characters.
And then everybody involved needs a compatible text editor: one more piece of software to coordinate.
I work with these types of files all day long, and I would really welcome something like this. Most people think CSV files are supposed to be imported into Excel. I’m wondering if you could feed Excel the ASCII separator character and have it delimit on that. Interesting.
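Outside Excel, you don’t need anything special to consume these files: because FS/GS/RS/US can’t legitimately appear in text fields, parsing is just two splits, with no quoting or escaping rules. A minimal sketch in Python (the sample data is made up for illustration):

```python
US = "\x1f"  # ASCII unit separator (between fields)
RS = "\x1e"  # ASCII record separator (between records)

# Build a two-record, two-field payload using the separators.
raw = RS.join([US.join(["alice", "42"]), US.join(["bob", "7"])])

# Parsing needs no quoting logic: split on RS, then on US.
rows = [record.split(US) for record in raw.split(RS)]
print(rows)  # [['alice', '42'], ['bob', '7']]
```

Compare that with CSV, where a comma inside a field forces quoting, and a quote inside a quoted field forces escaping.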
So does Unicode inherit these?
Yes: http://www.unicode.org/charts/PDF/U0000.pdf
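Concretely, the four information separators keep their ASCII code points in Unicode (U+001C through U+001F), so they encode as the same single bytes in UTF-8. A quick check:

```python
# The ASCII information separators are carried over into Unicode's
# C0 control block unchanged, so each is one byte in UTF-8.
seps = {"FS": "\x1c", "GS": "\x1d", "RS": "\x1e", "US": "\x1f"}
for name, ch in seps.items():
    assert ch.encode("utf-8") == bytes([ord(ch)])
    print(name, hex(ord(ch)))
```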
I have created a (work-in-progress) Vim plugin that uses Vim’s conceal feature to visually map the relevant ASCII characters to printable characters.
It sort of works, but there are known issues which I have listed in the README.
Great point! I’m using 0x1F as a field separator all the time when building cache keys and similar things; it’s much better than using `,`, `-`, `/`, or other printable characters, since those are bound to break sooner or later.
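The failure mode with printable separators is that a user-supplied value can contain the separator and make two different inputs collide on the same key. A sketch of the idea (the `cache_key` helper is hypothetical, not from any particular library):

```python
US = "\x1f"  # ASCII unit separator: won't appear in normal text values

def cache_key(*parts):
    # Hypothetical helper: join key components with the unit separator,
    # so components containing ':' or '/' can't produce collisions.
    return US.join(str(p) for p in parts)

# With ':' as the separator, ("user:1", 2) and ("user", "1:2") would both
# yield "user:1:2". With US they stay distinct:
print(cache_key("user:1", 2) != cache_key("user", "1:2"))  # True
```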
There are a few crazypants wire protocols still around which use ASCII ENQ, EOT, ACK, NAK, etc.
ASTM-E-1381 I’m looking at you here.
It’s funny to think that if someone had built FS/GS/RS/US support into one of the earlier text editors, those characters might have taken off as part of an inline table format instead of the tab-separated formats that became common.
But yeah, an interesting lesson that no matter how much cool stuff you bake into your standards spec, no one’s going to read the damned thing properly anyway.