1.

Part Two

  1.  

  2.

    Interesting to look backward.

    The article gives GUI-builders that work via code generation as its concrete example of trying to simplify the experience of programming. I suppose they do exemplify that, but I think it’s worth trying to separate the intertwined issues.

    The author talks about a tension between not wanting to encourage programmers to ignore code, but on the other hand having a large chunk of code that really doesn’t require human comprehension. The details of instantiating fifty widgets and setting their coordinates are quite boring, and if the generator itself doesn’t have a bug, its output is very unlikely to ever need debugging. So having the code buys you nothing. It sounds as though this particular system was not hugely friendly to making changes later, but I’m not familiar with the specifics.

    My answer to that is fundamentally that things that are boring should be data, not code; then the tension vanishes. Today, window layout on most platforms is kept in a platform-specific opaque-ish data file, and the system library reads it and does the tasks that it sounds like this system was doing by code generation. Anything not extremely performance-intensive should be able to take that approach.
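    A minimal sketch of what "boring things should be data" looks like in practice. Everything here is illustrative, not from any real toolkit: the struct, the table contents, and the printed "create" step all stand in for whatever a real GUI library would do when it reads its layout file.

    ```c
    #include <stdio.h>

    /* Hypothetical example: widget layout kept as data instead of
     * generated code. Real systems store this in a platform-specific
     * file; a static table makes the same point. */
    struct widget_spec {
        const char *name;
        int x, y, width, height;
    };

    static const struct widget_spec layout[] = {
        {"name_field",     10,  10, 200, 24},
        {"ok_button",     380, 260,  80, 24},
        {"cancel_button", 290, 260,  80, 24},
    };

    int main(void) {
        /* One generic loop replaces fifty generated instantiation calls;
         * there is no machine-written code left for a human to wade through. */
        for (size_t i = 0; i < sizeof layout / sizeof layout[0]; i++) {
            const struct widget_spec *w = &layout[i];
            printf("create %s at (%d,%d) size %dx%d\n",
                   w->name, w->x, w->y, w->width, w->height);
        }
        return 0;
    }
    ```

    Changing the dialog later means editing three lines of data, not regenerating and re-diffing a blob of code.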

    I’d go so far as to say that having code that isn’t meant for human consumption is strongly counterproductive and clutters a codebase.

    Another of the author’s major points is about forgotten layers of abstraction. I find this a quite important and relevant topic, even more so than in 1998. Each layer is a barrier to changes; also, each layer is a potential home to security vulnerabilities.

    To me, a fascinating aspect of Shellshock (wikipedia) was that nobody would have thought of bash as part of a web server’s attack surface. It got there because environment variables have a loose inheritance model that was invented 45 years ago, and for a long time they’ve been thought of as an uninteresting implementation detail.
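    That loose inheritance can be sketched in a few lines. This is only an illustration of the mechanism, not of Shellshock itself: the variable name mimics what a CGI server would set, and the point is simply that a child process, including any shell, receives the variable without asking for it.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: anything placed in the environment is silently copied into
     * every child process. This is the channel through which a CGI web
     * server ended up handing attacker-controlled HTTP headers to bash;
     * the variable name here is only illustrative. */
    static int child_sees_var(void) {
        setenv("HTTP_USER_AGENT", "attacker-controlled string", 1);
        /* Any child from here on -- including the shell that system()
         * runs -- inherits the variable, whether it expects one or not. */
        return system("test \"$HTTP_USER_AGENT\" = 'attacker-controlled string'") == 0;
    }

    int main(void) {
        printf("child inherited the variable: %s\n",
               child_sees_var() ? "yes" : "no");
        return 0;
    }
    ```

    Nothing in the child’s code mentions the variable, which is exactly why it never looked like attack surface.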

    Similarly, the fact that system() always runs a shell, and that this can raise security issues, was widely discussed security advice… in the mid-90s. Since then, everybody’s gotten fairly sick of the verbosity C requires, and unfortunately the secure alternative to system() is often half a dozen function calls, each of whose error codes needs to be checked and handled. I’d be surprised if anyone worries about it even now, when we’ve just had a high-profile vulnerability for which it was a contributing factor.
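    For concreteness, here is roughly what that "half a dozen function calls" looks like, assuming POSIX: fork, exec the program directly so no shell ever parses the arguments, and reap the child, checking errors at every step. The helper name is mine.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch of the no-shell alternative to system(): because execvp()
     * receives an argv array directly, there is no shell to interpret
     * metacharacters or environment-derived state. */
    static int run_no_shell(char *const argv[]) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return -1;
        }
        if (pid == 0) {
            execvp(argv[0], argv);  /* returns only on failure */
            perror("execvp");
            _exit(127);
        }
        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return -1;
        }
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void) {
        /* Unlike system("echo hello"), no shell ever sees these arguments. */
        char *args[] = {"echo", "hello", NULL};
        return run_no_shell(args);
    }
    ```

    Compare that to the one-liner system("echo hello"); it’s easy to see why the verbose version loses in practice.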

    And anyone could be forgiven for forgetting about the CGI interface itself, and how it turns HTTP request fields into environment variables. It’s perfectly reasonable to not think about it, considering that it’s been informally deprecated for many years now, in favor of FastCGI, SCGI, reverse-proxied HTTP, and other approaches which are higher-performance.

    So bash never got secured because everybody on the desktop and server has given up on preventing local privilege escalation, which I have to admit is a fair position. When mobile-OS releases aren’t rooted within a month, it will make sense to revisit this.

    A really neat thing to think about here, though: in some sense, don’t we need to forget the problems of the past? Electrochemistry is an active research area with plenty of specialists, but it’s completely reasonable for the authors of a desktop operating system to trust that the manufacturers of laptop batteries won’t change anything they need to know about. It’s also likely (I hope :)) that in a few decades, batteries will have achieved efficiency that doesn’t require compromises, and there will be far fewer people researching them. At some point, technological problems get solved and don’t need further attention.

    One thing the author says that I imagine she’d be happy to see has changed since then, if she’s thought about it: The marketing theme of “making programming easy enough for anyone” has gotten a lot quieter. This is probably because the supply of software engineers is much more in line with demand. I would agree that today’s systems are slightly easier to get started with than they were in the 90s (though still a lot harder than in the 80s), but I don’t think that’s what’s changed; there’s been a cultural attitude shift. Programming is no longer a career people are seen as weird for wanting, and people are at least beginning to think about how to teach it more effectively. And there is no longer an attitude that computers are a fad that will go away, which certainly makes it easier for people to choose careers in technology.

    The idea of trying to invent ways for non-programmers to program, really, was not about glorifying ignorance; it was about the idea that it would be nice not to have to pay programmers. I don’t think that was clear when the article was written, but it’s clear today, and it’s good that it’s changed.

    1.

      The marketing theme of “making programming easy enough for anyone” has gotten a lot quieter.

      I think the theme has shifted to “anyone can learn to code”, and it’s now expressed in voctech schools instead of software products. They’re currently promising career switches, but I wonder if, after the current wave of new schools comes online, they’ll exhaust the supply of switchers and start offering programs geared toward basic competency in programming to augment existing careers.

      I think this ties into the heavy static code generation that was popular in the 90s but, as you note, difficult to maintain. Better to improve the language and tools so you can work at a higher layer of abstraction.

      These two shifts both seem like healthy changes in the field.

      1.

        Right, I was hoping it was clear what I meant. :) Exactly; it’s now a theme of empowerment through education, rather than through… er… however you’d describe that. I do think it’s much healthier. I am actually hopeful that it will play out the way you suggest.

        I’m not totally clear that the issues are connected, but they may well be. Certainly the upstream author thought so. It’s interesting to think about.

      2.

        The idea of trying to invent ways for non-programmers to program, really, was not about glorifying ignorance; it was about the idea that it would be nice not to have to pay programmers. I don’t think that was clear when the article was written, but it’s clear today, and it’s good that it’s changed.

        I’m not sure it’s changed in the sense of people no longer being interested in it. If anything I think it’s just gotten more domain-specific and normal, so things once requiring a programmer no longer require a programmer, and that’s just how things work now. And yeah, money is often a motivator. For example some things once done by people called “gameplay programmers” are now done by people called “game designers”, as GUI tools (the Unity editor, UDK’s Kismet, Behavior Tree editors, etc.) make more and more things possible without having to manually code it up. Arguably this is still actually programming, in the sense of specifying part of the game logic, but it’s seen as different from “regular” programming, and done by different people.

        I think it will continue to be mostly domain-specific though. A prediction, but one I’m not 100% confident in: a lot of the current uses of Python in the natural sciences, especially the very common, semi-standardized ones, will be replaced by things that look less explicitly like programming.

        1.

          Right, and good examples. The other major difference with domain-specific automation is that it actually works. :)