My major grievance with many modern interfaces, especially in mobile apps, is the unbearable UI response latency. Everything feels like it takes just a little tick too long.
I started computing on an 8 MHz 286, and even on that machine it was possible to display a menu and navigate through a (albeit text-based) forms application without noticeable delay (example: https://ilyabirman.net/meanwhile/all/ui-museum-turbo-pascal-7-1/).
Now I would expect that in this brave new world of GHz-powered computing devices everything would pop up instantly. But no: even the simplest thing, like looking up the schedule of a train or bus, now takes several round trips to some cloud server, and with spotty mobile internet access, the result is what you would expect.
And what was once an application that could do most of what was expected offline is now an “App”, which is often not much more than a glorified web page. What is the use of installing multiple megabytes of binaries when the basic mode of operation is to connect to the internet for every single user interaction anyway?
I completely agree. Unfortunately these observations often get dismissed as faulty memory, or by treating the person as stuck in the past or anti-progress, clinging to old stuff and unwilling to learn or get used to new things.
I feel similarly about DevOps and front-end development. Outside of C, we used to have fast-compiling or interpreted languages. Now we compile JS, reinstall dependencies, and spin up new containers and VMs, often for nothing more than a change to a comment.
That said, I am all for keeping things simple. But we had make, and while it isn’t perfect, we keep reinventing it in far inferior versions, mostly because we don’t like the syntax or some little quirk.
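For context, the core idea that keeps being reinvented is tiny: make rebuilds a target only when it is missing or older than one of its sources. A minimal sketch of that timestamp check in Python (file names are illustrative, not from any real build):

```python
import os
import tempfile
import time


def needs_rebuild(target, sources):
    """The heart of make: rebuild if the target is missing
    or any source has a newer modification time."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)


# Demo in a throwaway directory.
d = tempfile.mkdtemp()
src = os.path.join(d, "main.c")
obj = os.path.join(d, "main.o")

open(src, "w").close()
print(needs_rebuild(obj, [src]))   # target missing -> rebuild needed

open(obj, "w").close()             # "build" the target now
os.utime(src, (time.time() - 10,) * 2)  # backdate the source
print(needs_rebuild(obj, [src]))   # target newer than source -> up to date
```

Everything else in make (dependency graphs, pattern rules, parallelism) is layered on top of this one comparison, which is part of why replacements so often end up re-implementing it.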
Yes, there is network latency now, but it’s usually not the factor causing this. We have had first-person shooters for decades, and multiplayer games with thousands of concurrent players on servers smaller than the smallest instances any major cloud provider will offer.
At the same time we developed tools and improved protocols and software to be more efficient. Everything became zero-copy and zero-roundtrip, compilers optimize every bit, we have compression that makes very different use cases viable where they weren’t before, and hardware now accelerates cryptography. But at the application layer we far too often throw all of that out the window, for what are often only claimed benefits.
To come back to the topic: I think what the design side and the more technical side have in common is that we get lost in dogmas. We learn that X is good, but skip the question of whether it is good in a specific case. When you follow “this is the right approach in 90% of cases” rules, but make such decisions on a daily or even weekly basis, you will ultimately end up making many wrong decisions.
However, since these dogmas are common knowledge that everyone follows, one has a hard time arguing for the 10% case when it comes up. Especially because, if it turns out it wasn’t the right road or something else goes wrong, the deviation from the dogma is what people will point to.
I think it is necessary to dare to take roads nobody else takes. Just because something works for Apple doesn’t mean it works for you. It is not as if all successful companies and projects work the same way, and even when they do, that doesn’t mean it’s the only way.
And taking different roads is what creates innovation. I think this is true for design as well as for other fields.