1. 36

  2. 15

    I don’t buy most of his points.

    planes nosedive and kill 200 people

    It is still safer to be in a plane than to drive to the airport.

    most gain is from hardware and not software

    From what I have seen, people are still capable of optimising software for performance when they need to, they just rarely do.

    things are unreliable

    Software is complex. It is the most complex thing civilization has built, so you can expect it to break most often.

    people don’t write assembly anymore

    Some do. We only need so many of them the same way we only need so many pediatricians in a hospital.

    productivity hasn’t increased because features are not added

    If they had added features instead, the software would be failing more often, which is what you were complaining about 5 minutes ago…

    I’m only halfway through, but I’ll summarise the situation: the things he considers good are just not so important to optimise at the current moment. Social issues, climate change, the movement towards authoritarianism, AI, etc. are more likely to be a collapse event than webdevs not caring that their webshit takes up 5% CPU rather than 1%.

    1. 10

      One way to account for climate change is to stop building powerful computers. Make chips cheaper and less polluting to make, concentrate on low energy consumption, perhaps even at the cost of making desktop computers 10 times slower than they are right now. In parallel, make software 100 times faster, like it used to be a couple of decades ago (we used to think that wasn’t that fast, because computers were much slower). Cut unneeded features such as fancy GUIs if you have to. It’s only a minor change, but by simplifying everything that way, we’ll make our civilisation a bit more resilient than it currently is.

      As for what’s important to optimise… we each have our own skills. I’m not sure a web dev can do much about those more pressing issues right now. But they can make the web site faster and less power hungry.

      1. 2

        One way to account for climate change is to stop building powerful computers.

        I’ll believe that when I see most people on the Internet talking big about green computing suffering through the use of small computers like Pis instead of their nice desktops, laptops, tablets, etc. Plus exclusively recycling older hardware whenever it’s available. Most refuse to do those things, citing some real or perceived benefits they want to meet which demand harming the environment. Just like the people and businesses they complain about. They just ignore the environment to optimize for different goals and metrics.

        Personally, I’m on a recycled Core i7. I needed the i7 for current and future fixes for CPU vulnerabilities, and maybe also for verification tools that run through GBs of state. I kept my last laptop, a lightly-used Core Duo (or 2) that Dell made for Linux, for 7-8 years. I also build various appliances out of scrap PCs. I don’t do much on power usage since I think that problem should be solved at the supply side, folks in my area won’t do it (“full steam ahead on fake warming!”), and so it wouldn’t make a difference by the numbers. That’s how I’m making my tradeoffs. It’s consistent with the pragmatic environmentalism I preach.

        While I’m at it, I encourage more people to make their next computer a recycled or like-new one. Hardware got so fast that today’s software ran decently even on my 8-year-old laptop. I’m sure whatever is 7 years to 1 day old will be anywhere from OK to great. ;)

        1. 3

          Well, if software weren’t so damn slow, people wouldn’t suffer on the Pi. And I don’t think buying a Pi will make the problem solve itself. It has to happen en masse.

          That said, my laptop is 3 years old, my desktop is over 10 years old, and my palmtop (Nexus 5) is about 6 years old. I think I’m not doing too badly.

          1. 1

            Sounds like you’re doing pretty good. :)

            As far as en masse goes, that’s the reason I call these discussions virtue signaling, or at least of no actual value. Stopping the problem would instead require campaigns to create mass change that adapt continually to their audiences, lots of product development to deliver better stuff to apathetic folks, and political campaigns pushing people and/or regulations that force the matter. This would have to happen across the world.

            I think human nature will defeat itself on this one. So, I plan for climate change instead of trying to stop it. I try to reduce energy use and waste for other reasons. China’s new policy on waste just reinforced the importance of decreasing it.

        2. 1

          One way to account for climate change is to stop building powerful computers. Make chips cheaper and less polluting to make, concentrate on low energy consumption, perhaps even at the cost of making desktop computers 10 times slower than they are right now.

          So your argument is that to stop a collapse event, we need to start what looks like the beginning of a collapse event (computers are slower, energy consumption is reduced, etc.)? In that case we might as well put the pedal to the metal and wait for a ‘natural’ collapse instead.

          1. 7

            When my car approaches an obstacle, I prefer to apply the brakes rather than wait for the ‘natural’ outcome.

            1. 0

              In both cases, you are stopped. So the obstacle has served its purpose.

              Extending this analogy any further breaks it down.

              1. 1

                Rate of change, in both cases, is the difference. One is comfortable; the other, lethal.

            2. 3

              growth != progress. Growth for the sake of growth is a tumor, and there’s nothing wrong with stopping growth that exists only for its own sake.

              1. 1

                If you think humans have intrinsic value, how can more humans be bad? Unless you think there is an inflection point whereby every single human born is actually better dead than alive?

                1. 1

                  no, I don’t believe humans have intrinsic value. I don’t believe anything has intrinsic value. Humans give value. That said, I was talking about economic growth, not the quantity of humans.

              2. 1
                1. Our economic output currently comes from our energy consumption, and that trend doesn’t show any sign of changing. When a country has less energy, it also has a recession.
                2. We are nearing, at, or even past peak fossil fuels. Energy from those will only decline going forward (or perhaps the decline will only start in 2030 at the latest). And I’m not even talking about other reasons to slow down right away, like climate change or pollution.
                3. I don’t see a new energy source emerging any time soon. Unless perhaps nuclear fusion works, but to be honest I’m not holding my breath.

                So it looks like a collapse is inevitable. I was proposing we accompany it. Another way to accompany the collapse would be to stop planned obsolescence. That alone should cause a noticeable recession, though if done correctly it should not worsen our lives in practice (well, except for the likes of Apple). We could also slow down the collapse by building more (fission) nuclear plants. They’re damn expensive, but they will last longer than oil. 100% renewable energy is obviously the future, but that’s likely also a future with less, probably much less, energy than what we currently have.

              3. 1

                More powerful (and power hungry) chips may on balance save energy because we can solve optimization problems in logistics, manufacturing, etc.

                1. 1

                  Possibly. But this would only mean a small fraction of all chips: those used in factories, transport companies, or anywhere that could save energy with more computation.

              4. 2

                Social issues, climate change, the movement towards authoritarianism, AI, etc. are more likely to be a collapse event than webdevs not caring that their webshit takes up 5% CPU rather than 1%.

                i see your point but just wanted to point out that those things aren’t separate. bloated, opaque software has implications for climate change and authoritarianism.

              5. 10

                This video is about the crisis in overly complex and unreliable software, and what to do about it. Highly recommended.

                1. 6

                  the OS layer, which is this immensely complex thing, that we mostly don’t want

                  I hope this attitude doesn’t spread beyond game developers. Yes, the OS adds complexity, but it also adds things that users _need_. One example is accessibility. If you draw your UI to the screen yourself, it can’t be made accessible to blind users through a screen reader. If you take input directly from a keyboard or mouse, without going through some kind of OS input stack, it can’t be made accessible to people with mobility impairments. Accessibility is only as good as it is (which is still not very good in a lot of places) because that immensely complex OS layer includes an accessibility API and at least one GUI toolkit that implements the API. In this area, software has definitely gotten better over the past couple of decades (at least native software; maybe the Web has gotten worse on average). That’s why I don’t have much nostalgia for retro computers.

                  1. 5

                    you’re supposing that accessibility has to be built on top of the GUI. i think it would be much better if accessible UIs could be installed instead of standard GUIs. why should a blind person have to download anything graphical at all?

                    if anything the accessible UI could be the basis for the GUI: a semantic description which can be understood by a blind person could also be translated into visual elements. yet modern practice does the opposite, at the cost of everyone (except the institutions we rely on to develop the shitty software).
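
                    to make that concrete, here’s a rough sketch in Rust (every name is invented for illustration, not from any real toolkit): the canonical UI is a semantic tree, narration for a blind user needs nothing graphical at all, and a visual back end would be just another translation of the same tree.

                      // hypothetical sketch of a "semantics first" UI: the canonical
                      // description is non-visual, and a GUI is just one back end.
                      enum Element {
                          Heading(String),
                          Paragraph(String),
                          Button { label: String },
                      }

                      // a back end for blind users: nothing graphical involved at all.
                      fn narrate(ui: &[Element]) -> String {
                          ui.iter()
                              .map(|e| match e {
                                  Element::Heading(t) => format!("heading: {t}"),
                                  Element::Paragraph(t) => t.clone(),
                                  Element::Button { label } => format!("button: {label}"),
                              })
                              .collect::<Vec<_>>()
                              .join("\n")
                      }

                      fn main() {
                          let ui = vec![
                              Element::Heading("Settings".into()),
                              Element::Paragraph("Choose how updates are installed.".into()),
                              Element::Button { label: "Save".into() },
                          ];
                          // a visual back end would translate the same tree into widgets
                          // and pixels; here we only exercise the narration back end.
                          println!("{}", narrate(&ui));
                      }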

                    1. 1

                      Accessibility is the one thing that I see raised on basically every thread that tries to propose simpler models than the ginormous GUI frameworks we currently have. This salient, yet fairly niche, benefit may be dwarfed by the cost, for everyone, of using worse tools than they could have.

                      We don’t necessarily have to exclude impaired people. We could perhaps address their needs separately, without complicating the whole stack. It could cost less, and perhaps even benefit them more, than the status quo.

                      We can do better, and accessibility is not a reason not to try.

                      1. 3

                        We don’t necessarily have to exclude impaired people. We could perhaps address their needs separately, without complicating the whole stack.

                        How do you propose we do this? By developing separate applications for people with specific disabilities, rather than letting them use the same applications as their peers? Or do you have something else in mind?

                        To be clear, I don’t believe for a moment that accessibility is the only reason that GUIs are as complicated as they are. And I don’t even defend all of that complexity. (For instance, I don’t think that two-way data binding belongs in the operating system’s GUI toolkit.) But most of the time, when someone implements their own simple, light GUI toolkit, they throw away accessibility in the process. And that makes me want to scream every time I come across another one of those here or on HN. I want more people to recognize that some of the complexity they bemoan is serving a purpose, just not one that they can see on the screen.

                        1. 2

                          What would be cool is a UI toolkit that directly exposes a11y tools as first-class UI devices, alongside the usual keyboard/mouse/screen. For example, if you request a canvas widget, you may be given a “regular” true-color drawing surface, a screen reading device that only accepts text, or a high-contrast surface with only foreground, background, and highlight colors.

                          Developers would be encouraged to deal with as many devices as they can, they will have something they can easily test even if they’re not already using OS a11y features, and they can tune formatting themselves instead of the toolkit attempting to guess how best to convert between UI concepts. I think toolkits that attempt to abstract a11y away, or defer to the OS, are pushing the concerns in the wrong direction: away from application developers.
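
                          A minimal sketch of the shape such an API could take (all names hypothetical, not any existing toolkit): the app asks for a canvas, gets one of several device kinds, and tunes its output for whichever one it got.

                            // hypothetical sketch of "a11y devices as first-class UI devices":
                            // the toolkit hands back one of several device kinds, and the app
                            // tunes its formatting for whichever one it got. Invented names only.
                            enum Canvas {
                                TrueColor { rgba: Vec<u8>, width: u32, height: u32 },
                                ScreenReader { text: Vec<String> },  // accepts text only
                                HighContrast { marks: Vec<Mark> },   // fg/bg/highlight only
                            }

                            enum Mark { Foreground, Background, Highlight }

                            fn draw_status(canvas: &mut Canvas, battery_pct: u8) {
                                match canvas {
                                    Canvas::TrueColor { .. } => { /* draw a battery icon */ }
                                    Canvas::ScreenReader { text } => {
                                        text.push(format!("Battery at {battery_pct} percent"));
                                    }
                                    Canvas::HighContrast { marks } => {
                                        let low = battery_pct < 20;
                                        marks.push(if low { Mark::Highlight } else { Mark::Foreground });
                                    }
                                }
                            }

                            fn main() {
                                // easy to test the non-visual path even without OS a11y features:
                                let mut reader = Canvas::ScreenReader { text: Vec::new() };
                                draw_status(&mut reader, 17);
                                if let Canvas::ScreenReader { text } = &reader {
                                    assert_eq!(text[0], "Battery at 17 percent");
                                }
                            }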

                          (I am interested in (but have not properly worked with or required) a11y tools and concerns, so I may be completely ignorant of something of key importance here.)

                          1. 2

                            Ah, Xlib programming circa 1992. You had to deal with everything from 2-color to true color, and for everything in between you had to choose between shared colormaps (everything looks like a clown designed it) and private colormaps (per window; the one window looks fine, but the rest of the screen is taking an acid trip). That makes for a fun time testing.

                            1. 1

                              Don’t forget about the overlay plane!

                          2. 2

                            We could possibly segregate applications by disability, yes. Though that may be impossible in practice. Vendors are likely to address the bigger market and ignore any disability (except perhaps colour blindness). Then there’s the problem of interoperability. Any data that comes in or out of those applications should be thoroughly and publicly documented. And to make those formats truly open, we need reference implementations of the core transformations done to them. So that route is closed to proprietary software.

                            Another route could be similar to the current one: just have the GUI toolkit cater to screen readers without application writers even needing to be aware of that secondary backend. As @jmk has suggested, GUI toolkits may benefit from such a secondary backend for other reasons, such as testing & automation. I think this is the more practical option.

                            It is also likely to hit a local maximum, for a number of reasons:

                            • Many application writers still ignore accessibility¹, and will be able to get away with their ignorance thanks to those helpful GUI toolkits.
                            • Application-specific widgets are likely to be much less readable to a screen reader. The GUI toolkits will have to provide a wide array of widgets to improve on that front.
                            • Visualisations like graphs, histograms etc. are wonderful for sighted people, but don’t translate well to screen readers. Standardised widgets may turn them into columns of numbers, but if one is doing anything fancy, blind people are likely to stay blind.

                            Overall, I’d say the whole situation sucks. But if we manage to simplify everything to a point where personal computing fits in a few dozen KLOC (let’s say something below 200 thousand lines, and the 20K-line STEPS project seems to show that’s possible), then transforming it all into a version that helps disabled people will be a piece of cake.

                            Now, on to the issue of newer GUI toolkits that ignore disabled people. I don’t know the first thing about screen readers. I have no idea how they are supposed to hook into an application. I speculate, though, that it’s not as simple as it could be. That there are several ways to do it, and a thorough GUI toolkit may have to conform to all of them if it is to maximise accessibility (and I bet most don’t, forcing screen readers to implement several standards, half of which are probably badly designed or badly implemented). Simple GUI toolkits cannot stay simple and conform to that kind of crap. We’d have to unify and simplify screen reader stuff first. Possibly even unify screen reading and other kinds of interactions (again, see @jmk’s comment about that).

                            Is it as bleak as my wild ass guess says it might be? Or are we in a fairly good state already?

                            1. Me included. I plan to write a password manager and a file encryption utility, and will ignore accessibility altogether, unless perhaps someone reaches out to me explicitly. I may use Qt, but that’s about it. On the other hand, I will also write a command line version of those (no curses, only stdin, stdout, and stderr).
                            1. 1

                              Thanks for the thoughtful reply. This has gotten off on a tangent from the original submission, and I know it’s been a few days, but I want to answer your points.

                              We could possibly segregate applications by disability, yes. Though that may be impossible in practice. Vendors are likely to address the bigger market and ignore any disability (except perhaps colour blindness). Then there’s the problem of interoperability. Any data that comes in or out of those applications should be thoroughly and publicly documented. And to make those formats truly open, we need reference implementations of the core transformations done to them. So that route is closed to proprietary software.

                              Exactly. And there’s also the risk that these separate apps would lack features compared to their mainstream counterparts. That’s why I and other accessibility advocates tend to discourage this approach.

                              Another route could be similar to the current one: just have the GUI toolkit cater to screen readers without application writers even needing to be aware of that secondary backend. As @jmk has suggested, GUI toolkits may benefit from such a secondary backend for other reasons, such as testing & automation. I think this is the more practical option.

                              To be clear, it’s not just about screen readers. There are also people with mobility impairments, e.g. people who need to use speech recognition or other methods of input. An accessibility API, like the ones that all mainstream platforms have now, helps these cases too.

                              Many application writers still ignore accessibility¹, and will be able to get away with their ignorance thanks to those helpful GUI toolkits.

                              To the extent that this actually works, it’s a good thing. We know we’re fighting an uphill battle when advocating accessibility. We’ll take whatever easy wins we can get.

                              Application-specific widgets are likely to be much less readable to a screen reader. The GUI toolkits will have to provide a wide array of widgets to improve on that front.

                              Correct.

                              Visualisations like graphs, histograms etc. are wonderful for sighted people, but don’t translate well to screen readers. Standardised widgets may turn them into columns of numbers, but if one is doing anything fancy, blind people are likely to stay blind.

                              Yes, this is a challenge. I’m not aware of any widely adopted solution for this, at least not yet.

                              Overall, I’d say the whole situation sucks. But if we manage to simplify everything to a point where personal computing fits in a few dozen KLOC (let’s say something below 200 thousand lines, and the 20K-line STEPS project seems to show that’s possible), then transforming it all into a version that helps disabled people will be a piece of cake.

                              If you’re talking about having a separate simple platform for people with disabilities (or for people with a specific disability, e.g. blindness), I don’t think that would be well received these days. I can’t speak for all of us, of course, but a lot of us want to use the same apps and platforms as everyone else, to make sure that we don’t get left behind when it comes to accessing the latest features.

                              Now, if a usable platform could fit in a couple hundred KLOC or less and have both mainstream and accessible UIs in the same platform, that would be interesting. Given how much incidental complexity there is in mainstream platforms, including their accessibility layers, I wouldn’t say it’s impossible.

                              Now, on to the issue of newer GUI toolkits that ignore disabled people. I don’t know the first thing about screen readers. I have no idea how they are supposed to hook into an application. I speculate, though, that it’s not as simple as it could be. That there are several ways to do it, and a thorough GUI toolkit may have to conform to all of them if it is to maximise accessibility (and I bet most don’t, forcing screen readers to implement several standards, half of which are probably badly designed or badly implemented). Simple GUI toolkits cannot stay simple and conform to that kind of crap. We’d have to unify and simplify screen reader stuff first. Possibly even unify screen reading and other kinds of interactions (again, see @jmk’s comment about that).

                              On platforms other than Windows, the situation has been pretty sane for a while: The platform provides a single accessibility API, the toolkit implements that, and all assistive technologies (screen readers, speech recognition, other alternative input methods) as well as automated testing tools consume that API.

                              On Windows (the platform I know best), the situation was historically a mess. Initially, starting around 1997, the official accessibility API was Microsoft Active Accessibility (MSAA). But it didn’t provide nearly enough information, most infamously in editable text controls. (I ran into that limitation around day 3 of prototyping a Windows screen reader in 2004.) So screen readers used a variety of other techniques. Internet Explorer and the MS Office applications had their own object models that weren’t even intended for accessibility, but provided the information that screen readers needed. For standard Win32 controls, like the edit and list view controls, screen readers could send window messages to the control. And for other applications, screen readers had to do nasty things like install their own fake display driver in order to build an “off-screen model” from GDI calls (e.g. TextOut). Other assistive technologies had their own hacks.

                              With Windows Vista, Microsoft introduced UI Automation. If I’m not mistaken, this API was also available for Windows XP as part of a package that you could optionally install; I don’t recall if it was ever shipped to all Windows XP installations through Windows Update. Third-party screen reader developers (including myself at the time) were slow to adopt it. I can’t speak for the others, but in my case, the reason was simply that we didn’t need it for any real application until at least Windows 7. Meanwhile, IBM and Mozilla went their own way with an extension of MSAA called IAccessible2 (IA2), and we third-party screen reader developers were quick to adopt it. The beauty of IA2 was that it was just a set of extra COM interfaces that could be adopted by applications and assistive technologies, independent of the OS version. So it ran perfectly well on Windows XP, regardless of which service pack and optional packages you had installed.

                              That’s all history though. These days, the remaining widely-used third-party screen readers have all adopted UI Automation. More importantly, Microsoft’s Narrator screen reader is now a serious option. So, at least when it comes to supporting screen readers, the solution is clear: implement UI Automation. I’m not so sure about other assistive technologies, e.g. speech recognition.

                              There’s one other complication though. All of the accessibility APIs are pull-based. That is, the client (e.g. screen reader) requests information, and the provider (application) needs to return it immediately. And they’re all built around a tree of UI elements. This works well for mainstream GUI toolkits. But it could be a problem for less common designs like immediate-mode GUIs.
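
                              To make the pull-based, tree-shaped contract concrete, here is a toy sketch in Rust (invented names, only loosely modelled on the shape these APIs share, not on UIA itself):

                                // toy sketch of a pull-based accessibility tree; invented names,
                                // loosely modelled on the tree-of-elements shape these APIs share.
                                struct AccessibleNode {
                                    role: &'static str,            // e.g. "window", "button", "edit"
                                    name: String,                  // the accessible label
                                    children: Vec<AccessibleNode>,
                                }

                                impl AccessibleNode {
                                    // the client (e.g. a screen reader) pulls; the provider must
                                    // answer immediately, which presumes a tree retained between
                                    // frames, exactly what a pure immediate-mode GUI doesn't keep.
                                    fn walk(&self, depth: usize, out: &mut Vec<String>) {
                                        out.push(format!("{}{} \"{}\"", "  ".repeat(depth), self.role, self.name));
                                        for child in &self.children {
                                            child.walk(depth + 1, out);
                                        }
                                    }
                                }

                                fn main() {
                                    let tree = AccessibleNode {
                                        role: "window",
                                        name: "Demo".into(),
                                        children: vec![AccessibleNode {
                                            role: "button",
                                            name: "OK".into(),
                                            children: vec![],
                                        }],
                                    };
                                    let mut lines = Vec::new();
                                    tree.walk(0, &mut lines);
                                    println!("{}", lines.join("\n"));
                                }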

                              I plan to write a password manager and a file encryption utility, and will ignore accessibility altogether, unless perhaps someone reaches out to me explicitly. I may use Qt, but that’s about it. On the other hand, I will also write a command line version of those (no curses, only stdin, stdout, and stderr).

                              My understanding is that Qt’s built-in accessibility support is now decent on desktop platforms. I’d be happy to test your tools and let you know how they turned out.

                              1. 1

                                Now, if a usable platform could fit in a couple hundred KLOC or less and have both mainstream and accessible UIs in the same platform, that would be interesting.

                                That was the idea. The path I envisioned to get there was to mainstream a simple platform to begin with, with the hope that an accessibility API on top of that wouldn’t cost much to make. First though, simple must become mainstream. I don’t see that happening any time soon, unfortunately. Unless perhaps hardware vendors start solving the 30-million-line problem (the need for multi-million-line kernels because hardware is too diverse and under- or unspecified).

                                There’s one other complication though. All of the accessibility APIs are pull-based. That is, the client (e.g. screen reader) requests information, and the provider (application) needs to return it immediately. And they’re all built around a tree of UI elements. This works well for mainstream GUI toolkits. But it could be a problem for less common designs like immediate-mode GUIs.

                                IMGUI may fare better than we might think. Conrod for instance retains a widget graph under the hood for performance reasons. The tree of UI elements is already there. Adding an accessibility layer might be less difficult than it could have been for a truly immediate mode GUI.

                                I’d be happy to test your tools and let you know how they turned out.

                                That will take some time. Like, a few months at best. I’m still working on the crypto library that will underpin those tools (key exchange protocols specifically, see my work in progress here and there). But I will definitely announce them here.

                          3. 3

                            Accessibility is not a niche thing, the same mechanism allows instrumentation and automation (e.g. selenium/webdriver).

                        2. 5

                          I’m dubious about this. We are standing on too many levels of abstraction, but the good old days weren’t any better. From what people who worked in the 50s and 60s have said, running a program on a different computer used to mean rewriting the whole thing and ending up with something that was 60% similar to the original, either because of hardware limitations or because someone thought up new data structures.

                          1. 3

                            I’m of two minds about it…

                            • On the one hand, someone said that Jon Blow forgot about the BSOD in the 90’s. I agree, since I learned how to use computers on Microsoft OSes in that era, and yes, it was pretty damn unreliable.
                            • On the other hand, to use some old hardware, I occasionally run Windows XP on a MacBook Air under VirtualBox. This thing boots fast! And I’m only giving it 256 MB of RAM? The XP RAM requirements were 128 MB or something? And it has a fast, low-latency GUI! It’s kinda crazy.

                            So yeah the experience of using Windows XP is a bit startling (because I used it for around a decade, but switched off around 2009 or 2010). It’s certainly more tightly coded than the Ubuntu desktop or even the OS X desktop. (I haven’t used modern Windows in a long time, but it looks pretty bad …)

                            Honest question: What does OS X or the Ubuntu desktop do for me that Windows XP didn’t? I think very little. Windows XP was the first time I used anti-aliased fonts, and I think that was the last real improvement :-) FWIW the reason I use Ubuntu is because of the CLI, and because it has good hardware support. It’s definitely not because of the GUI.

                            I guess what I would say is that there has been progress, but it hasn’t been consistent or evenly distributed, and occasionally we go backward.

                            (I haven’t watched the video yet, but I plan to.)

                            1. 4

                              There were a lot of problems with Windows XP, which have been addressed to varying degrees by later versions of Windows and other operating systems. For example:

                              • The GDI (Graphics Device Interface), including such complex things as text rendering and font file parsing, was in the kernel, along with all display driver code, for performance. BTW, you may be surprised to learn that in those bad old days, most screen readers for the blind would install their own fake display driver in order to find out what was on the screen – not at the level of pixels, but text and some shapes. (The fake display driver would then call into the real display driver, a practice we called driver chaining. It could get messy if you installed more than one screen reader.) They couldn’t get quite all of the information they wanted by hooking/patching GDI in user space because…

                              • In the same vein, window management and the default window decorations were in the kernel. Need I say more?

                              • There was no per-application sandboxing. If it had even been attempted, there probably would have been no end of privilege escalation exploits, because all of the stuff that ran in the kernel (described above) presented a huge attack surface.

                              • Security in general was weak, especially pre-SP2, but even after. Remember the Sony CD rootkit?

                              And there’s probably a lot more. My point is that sometimes we have to give up low resource consumption for more important things like robustness and security.

                              1. 6

                                And there’s probably a lot more. My point is that sometimes we have to give up low resource consumption for more important things like robustness and security.

                                Pretty sure that’s a false dichotomy.

                                1. 1

                                  As I said here, the fact that sharing is required for high performance means that anything using isolation and parallelism to achieve similar throughput will typically be larger and use more energy. If that’s even possible, given that some things share by necessity. Then, the fact that security may require runtime checks or inefficient layouts further reduces the performance of secure things vs insecure things. The schemes to mathematically verify the absence of many problems often require simplifications of the program that can negatively impact performance. Whereas many things that increase performance are harder or impossible to verify with current tooling. Finally, defeating many hardware-induced problems might require old process nodes that use vastly more energy and resources to do the same amount of work as today’s nodes. Maybe also fewer power management techniques that can affect correct execution and/or cause leaks.

                                  So, that’s true to a large degree. That doesn’t even consider monitoring/management, redundancy, and recovery mechanisms that usually come with “robustness.” That would make the counterpoint too easy.

                                  1. 1

                                    I’m more than “pretty sure”:

                                    The point of the talk is that a decrease in complexity typically increases robustness, not decreases it. Fewer moving parts means fewer places where problems can occur and spread. “Low complexity” and “low resource usage” usually get along well, but there will be a point where you’ll have to give up on one to increase the other. This is pretty much inevitable when optimizing for more than one variable. Does this mean it’s futile and everyone should just switch to Electron? Not exactly, because that ‘branching’ point is usually quite far down the road, as I’ve learned from experience.

                                    Let’s say you want to optimize for program size on disk. The first thing you should do is implement the program in the most direct way possible, without layers of abstraction the code has to go through. (This also helps with performance, as fewer cycles are wasted as well.) The second step is to strip the executable, which is still a decrease in complexity, although a smaller one, because you’re simply removing useless information from the binary. If you then want to continue, you’ll have to use some kind of executable code compression, an optimizing dynamic linker, etc. Only here is where the complexity starts to increase.

                                    Equating “reduced complexity” to simply going back to the past, with all its problems, is quite naive, as described in this essay by viznut:

                                    When I mentioned “the 1996 level”, many readers probably envisioned a world where we would be “stuck in the year 1996” in all computing-related aspects. Noisy desktop Pentiums running Windows 95s and Netscape Navigators, with users staring in awe at rainbow-colored, static, GIF-animation-plagued websites over landline dialup connections. This tells about mainstream views about computer culture: everything is so one-dimensionally techno-determinist that even progress in purely software- and culture-related aspects is difficult to envision without their supposed hardware prerequisites.

                                    (Emphasis mine.)

                                  2. 1

                                    Those are all things that excite developers but almost no end users care about.

                              2. 5

                                Maybe it’s just my bias, but I believe that part of the problem is not only proprietary software but also hardware. As he mentions, hardware is really what many software advances are built on, but most of this hardware is obscured and hidden not only from a user (when it comes to replacing parts of a system) but also a developer (when it comes to extending/hacking with the existing system). It’s analogous to how software companies work on user-unfriendly things like tracking and consumption predictions – maybe that explains why Facebook or Twitter don’t seem to add as many new features per engineer as one would expect.

                                1. 2

                                  As he mentions, hardware is really what many software advances are built on, but most of this hardware is obscured and hidden not only from a user (when it comes to replacing parts of a system) but also a developer (when it comes to extending/hacking with the existing system).

                                  Notably, the GPU problems he mentioned could very likely be solved by more open hardware. If you could just write a Metal/GLSL/HLSL compiler for all your GPUs that was as good as every other frontend, you wouldn’t need to rewrite your shaders for each platform.

                                  1. 2

                                    The shaders are only a small detail in the complexity. In order to get a GPU to work, you first need to talk ACPI and PCI-something in order to reach the card at all. Then you have to perform some voodoo to turn on the device. If that went well, you need to set up some DMA buffers to be able to send commands to the GPU, then use those to send some more initialization commands. Once that’s done, you can try displaying some actual pixels, which needs some more (model-dependent) yak shaving.
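
                                    Roughly, the order of operations looks like the stubbed sketch below; every function is an empty placeholder, since the real steps are thousands of model-specific lines each.

                                      // order-of-operations sketch of GPU bring-up as described above.
                                      // every function is a stub; this only illustrates the sequencing.
                                      fn find_card_via_acpi_pci() { /* walk ACPI tables / PCI config space */ }
                                      fn power_on_voodoo()        { /* clocks, power rails, firmware upload */ }
                                      fn setup_dma_buffers()      { /* ring buffers for the command queue */ }
                                      fn send_init_commands()     { /* more initialization through the ring */ }
                                      fn display_pixels()         { /* model-dependent mode-setting yak shaving */ }

                                      fn main() {
                                          // each step depends on the previous one having succeeded
                                          find_card_via_acpi_pci();
                                          power_on_voodoo();
                                          setup_dma_buffers();
                                          send_init_commands();
                                          display_pixels();
                                      }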

                                    Even if all this were standardized, it would still suck.

                                2. 2

                                  Another talk/essay that seems to echo The resource leak bug of our civilization (lobste.rs thread) by viznut. Interesting how these ideas are starting to become more widespread. (viznut also has a few followup essays.)

                                  1. 6

                                    It’s not really that interesting. People like garybernhardt and Alan Kay have been complaining about computers for years, and it has become a sort of mating call for thought-leaders.

                                    I’m ambivalent about it myself. I get frustrated with deficiencies, but to say that “software sucks” and this kind of thing is not a very useful standard. It’s like complaining that reality sucks.

                                    1. 3

                                      Alan Kay backed up his claims with Smalltalk, and much later, the STEPS project. That last one is especially interesting: it shows that an entire OS, complete with a self-hosting compiler collection, 2D vector graphics, TCP/IP, desktop publishing, web-like browsing and mail-like communication, can be done in 20,000 lines of code.

                                      The driver problem, which explains most of the kernel bloat so far, can be remedied by hardware companies eager to set a standard (not unlike the de-facto standard x86-64, or IBM System/360 mainframes, or PS/2 mice and keyboards).

                                      1. 4

                                        it shows that an entire OS… can be done in 20,000 lines of code.

                                        Sort of. It’s an impressive work. However, it’s not an OS with the features most people want, compatible with existing apps, having the security features, the fully-optimizing compilers for performance workloads, and so on. Every one of those things drives the complexity and code size up. I always wondered how big STEPS would be if it had everything like that plus apps comparable to LLVM, VLC, and Firefox. If you say “That’s not the OS!”, then I’ll remind you most people won’t use an OS without a web browser or media player. So, STEPS would be a platform rather than OS if competitive.

                                        1. 2

                                          compatible with existing apps

                                          Forget about it: such compatibility is what causes much of the bloat. Our current formats suck big time, and require way too much code to handle. The only way out is to phase out the formats that suck the most, and gradually replace them with something simpler. That simpler thing will not necessarily be any less capable, by the way.

                                          having the security features

                                          Most security comes from correctness. Most correctness comes from simplicity. “Security features” will only be a very small part of the whole thing. Even if you’re thinking of high-tech stuff like encryption, know that I have assembled a crypto library in less than 2,000 lines of C code. Quite the beast by STEPS standards, but much simpler than even Libsodium, and unlike TweetNaCl’s, its performance is still pretty good.

                                          fully-optimizing compilers for performance workloads,

                                          OK, that one is more difficult. Still, their stuff performed well enough on a laptop, so the low-hanging fruit should not cost too much. Optimisations are a game of diminishing returns. I wonder where the costs start to outweigh the gains.

                                          I always wondered how big STEPS would be if it had everything like that plus apps comparable to LLVM, VLC, and Firefox.

                                          I don’t know what you mean by “comparable”, but they already have compiling and browsing capabilities. I don’t recall if they have a video player. If by “comparable” you mean “can read HTML/CSS/JavaScript” and “generates blazing fast code for lots of architectures”, then of course they’re not even close.

                                          What do we really want, though? Fast code looks good, but HTML/CSS/JavaScript seems much more of a stretch. I don’t care in the slightest about HTML, I just want to browse a web. Any web. The one we’re stuck with just happens to be bloated beyond repair.

                                          STEPS would be a platform rather than OS if competitive.

                                          Any platform can be an OS if the driver problem is fixed (see Casey’s talk I linked above). That should allow some niches (such as video games) to branch off in their own incompatible world without causing too much trouble. I’m not sure what could come of it, but at least we’ll be able to experiment.

                                  2. 1

                                    Great discussion. I think he acknowledges the right symptoms but completely misses the point when it comes to the core problem. The root cause of all his examples is not that the global technological skill is degrading but that the systems we work on now have more requirements than they had decades ago and thus have a higher inherent complexity. He then misses that point again by proposing that we simplify, failing to see that we would need to dial down the requirements first, which we can’t.

                                    I’d say that the complexity of a system increases exponentially with the addition of requirements, and the (maybe even relatively small) increase in requirements over the last two decades has raised the inherent complexity of software and its ecosystem beyond our capacity to deal with it, resulting in all the problems he describes.

                                    To fix this, a completely new skillset needs to evolve. Overcoming our urge to overcompensate for uncertainty by proposing grand, not-fully-understood ideas. Fixing the social mechanism of having high regard for people whose reasoning we don’t understand, assuming they are smarter than we are. Recognizing the paradox of needing optimism to embark on an innovative journey while at the same time enthusiasm limits our ability to fully appreciate risks. Fundamental issues with the way (groups of) humans work, some of which may work the way they do because of millions of years of evolution.

                                    And I do believe that software developers are uniquely positioned for taking on this challenge since we are the people dealing with complexity the most on a daily basis.