Why do people make the byte order mistake so often? I think it’s because they’ve seen a lot of bad code that has convinced them byte order matters.
I think it’s also because it’s often just convenient to write byte-order-dependent code. You need to serialize something and only develop for x86 anyway, so just write out a packed struct!
At some point, you add support for a big endian architecture. You’re busy adding #ifdefs for that target anyways, so it appears easier to keep the original code as-is and byte-swap everything.
Related: Is big-endian dead?
https://news.ycombinator.com/item?id=16187939