Unfortunately history doesn’t point to great outcomes with respect to standards and interoperability. Ten or 20 years ago the big battle was interoperable word processing and spreadsheets, not web browsers.
That issue seems less important now, but it’s still a problem, and it’s still unresolved.
Programmers use plain text and Markdown, so they probably don’t feel it, but a lot of the world still runs on Word and Excel. Newer companies likely use cloud solutions, which ironically don’t have STABLE formats, let alone OPEN ones (and occasionally break).
We still have a bunch of silos and “fake” standards – that is, stuff that’s too complicated for anyone besides a single company to implement. I haven’t really followed this mess, but it appears to still be going on:
I’ll also point out that POSIX shell is way behind the state of the art… There is a ton of stuff that is implemented in bash, ksh, zsh, busybox ash, and Oil that’s not in the standard. I’d estimate you could double the size of the POSIX shell standard with features that are implemented by multiple shells.
It’s a lot of work and I think basically nobody has the time or motivation to argue it out.
I’m still publishing spec tests that show this, and that other shell implementers can use. Example:
All that stuff could be in POSIX. Actually I noticed busybox ash is also copying bash – so bash is a new de facto standard with multiple implementations.
Likewise I’ve documented enhancements for other shells to implement:
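To make that gap concrete, here is a short sketch (the feature choices are mine, not an exhaustive list) of constructs that bash, ksh, and zsh all support, but that the POSIX shell standard omits:

```shell
#!/usr/bin/env bash
# Everything below runs in bash, ksh, and zsh (and much of it in
# busybox ash), yet none of it is in the POSIX shell standard.
# A strict POSIX sh such as dash rejects each construct.

# Indexed arrays
shells=(bash ksh zsh 'busybox ash')
echo "${#shells[@]} shells"    # prints: 4 shells

# [[ ]] conditionals with glob-style pattern matching
name='busybox ash'
if [[ $name == busybox* ]]; then
  echo 'pattern match works'
fi

# local variables in functions: ubiquitous, still unstandardized
greet() {
  local who=$1
  echo "hello, $who"
}
greet world                    # prints: hello, world
```

Running the same script under dash (a close-to-POSIX sh) fails on the very first array assignment, which is exactly the multiple-implementations-but-no-standard gap being described.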
Related article about how not just the POSIX shell but the POSIX operating system APIs have become outdated:
Basically OS X, Android, and Ubuntu are all built on top of POSIX, but it’s not really good enough for modern applications, so they have diverging solutions to the same problems. This can be “working as intended” for a while, but if it goes on for too long, the lack of interoperability impedes progress.
Impediments to progress are sort of relative to some genuinely viable alternative. If there isn’t one, or people aren’t aware of one, we just keep slogging through the tar pits. Those who grow up in the tar pits don’t even notice.
iOS as well, obviously, but that’s a different beast to test, so.
iOS disallows stuff that’s part of POSIX, like fork, so even if all of the facilities are present, they aren’t part of the public API. So it’s not a POSIX OS.
That’s an even better observation.
macOS apparently warns you if you do anything after fork() except immediately exec()ing.
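A loose shell-level sketch of that rule (purely illustrative, not Apple’s documented API): every external command a shell runs already follows the one sanctioned pattern, fork first and exec immediately. A subshell that does nothing but `exec` makes the two steps explicit:

```shell
#!/usr/bin/env bash
# The pattern macOS tolerates: nothing happens in the child between
# the fork and the exec. The ( ) below forks a subshell, and exec
# immediately replaces that child process with /bin/echo.
out=$( (exec /bin/echo 'forked, then immediately execed') )
echo "$out"    # prints: forked, then immediately execed
```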
There is a good question whether we can even have simplicity among these things. I am here typing this in a web browser; my code editor is built on top of a self-hosted web browser in an app façade. The operating system takes gigabytes of storage space.
I think simplicity may be fairly difficult to achieve, and it is not that obvious in early stages.
I think of simplicity as a result of ‘compression/normalization’ of various structures that are created over the lifecycle of a system.
In my experience, simplicity is rarely something I come up with initially. Instead it is, basically, a result of refactoring, normalization and ‘lossless’ (hopefully) compression of goals.
So, if a project needs to be ‘simple’ to work, that may come far down the line in its lifecycle (assuming that simplicity is one of the top-level goals).
We are collectively addicted to complexity. We live it. Breathe it. Curse it. And then create more, all while telling ourselves that this added complexity is absolutely necessary. Or worse, making vague references to ‘pragmatism.’
It’s hard to break free of this. And it’s hard to tell what complexity is truly necessary, and what is a defense mechanism for our own laziness.
To follow on your thought, maybe complexity is an outcome of gradual professional specialization.
When I talk to, or read the work of, professionals in a field I have only a cursory understanding of, it is often an eye-opening, mind-boggling kind of experience.
I remember, years ago, when I started to understand IAM (Identity and Access Management) and access control.
Complexity was introduced there gradually (recognized by both academics and professionals), and the subsequent partitioning of the field, and the tooling that came with it, are now very complex. If you take a look at XACML-based access-control models, SIEM (security information and event management) systems, identity federation, provisioning, and so on – there is a lot. It takes years to understand, and even more years to become fluent in this.
Could this be simplified down to whatever we had in the ’70s? I am not sure, because I am not sure that this type of complexity, brought on by real needs, is avoidable.
I can see that, as humans, the way we dealt with this complexity was specialization. We recognized the complexity, sort of, and created a practice around it – people, tools, even companies.
From there on, I am sure we also make mistakes, where we try to apply the tools to problems that do not belong to that specific field. That creates what I would call ‘fitment dissonance.’
One example in the field of access control is using access control for workflow management.
This is where folks view access control as a repository of various ‘enterprise user flags and attributes,’ and then use that ‘bag of tricks’ to model enterprise workflows (for example, manual error-correction workflows): the access-control system stores the attributes and delegation models for manual data cleansing, which is executed with the support of enterprise workflow tools.
Another example was using Hadoop everywhere as the default solution to ‘I do not really understand my data’ problems.
To summarize, we deal with unavoidable complexity by professional specialization.
Professional (and subsequent tooling) specializations have been abused/misused to solve problems outside of the perimeter of the specific specialty. Those misuses add to what I would call ‘avoidable complexity’.
Finding, retaining, and training people who can make those kinds of judgement calls is expensive (as it requires years of experience, constant self-education, and focused diversification of experience).