1. 42
    1. 16

      Part of this is ‘computers are fast now’. I distinctly remember two such moments around 10-15 years ago:

      The first was when I was working on a Smalltalk compiler. I had an interpreter that I mostly used for debugging, which was an incredibly naïve AST walker and was a couple of orders of magnitude slower than the compiler. When I was doing some app development work in Smalltalk, I accidentally updated the LLVM .so to an incompatible version, which meant the compiler wasn’t running, and I didn’t notice for two weeks - the slow and crappy interpreter was sufficiently fast that it had no impact on the perceived performance of a GUI application.

      The second one was when I was writing my second book and decided to do the ePub generation myself (the company that the publisher outsourced it to for my first book did a really bad job). I wrote in semantic markup that I then implemented in LaTeX macros for the PDF version (eBook and camera-ready print versions). I wrote a parser for the same markup and all of the cross-referencing and so on and XHTML emission logic in idiomatic Objective-C (every text range was a heap-allocated object with a heap-allocated dictionary of properties and I built a DOM-like structure and then manipulated it) with a goal of optimising it later. It took over a minute for pdflatex to compile the book. It took under 250ms for my code to run on the same machine and most of that was the process creation / dynamic linking time. Oh, and that was at -O0.

      The other part is the user friendliness of the programming languages. I’m somewhat mixed on this. I don’t know anything about the COCO2’s dialect of BASIC; the BASIC that I used most at that time was BBC BASIC. This included full support for structured programming, a decent set of graphics primitives (vector drawing and also a teletext mode for rich text applications), an integrated assembler that was enough to write a JIT compiler, and so on. Writing a simple program was much easier than in almost any modern environment and I hit limitations of the hardware long before I hit limitations of the language. I don’t think I can say that about any computer / language that I’ve used since outside of the embedded space.

      1. 6

        Hm interesting examples. Though I would say Objective C is screamingly fast compared to common languages like Python, JavaScript (even JITted), and Ruby, even if it’s idiomatic to do a lot of heap allocations.

        Though another “computers are fast” moment I had is that sourcehut is super fast even though it’s written entirely in Python:

        https://forgeperf.org/

        It’s basically written like Google from 2005 (which was crazy fast, unlike now). Even though Google from 2005 was written in C++, it doesn’t matter, because the slow parts all happen in the browser (fetching resources, page reflows, JS garbage collection, etc.)

        https://news.ycombinator.com/item?id=29706150

        Computers are fast, but “software is a gas; it expands to fill its container… “


        Another example I had was running Windows XP in a VM on a 2016 Macbook Air. It flies and runs in 128 MB or 256 MB of RAM! And it has a ton of functionality.


        This also reminds me of bash vs. Oil, because the bash codebase was started in 1987! And they are written in completely different styles.

        Oil’s Parser is 160x to 200x Faster Than It Was 2 Years Ago

        Some thoughts:

        • Python is too slow for sure. It seems obvious now, but I wasn’t entirely sure when I started, since I could tell the bash codebase was very suboptimal (and later I discovered zsh is even slower). The parser in Python is something like 30-50x slower than bash’s parser in C (although it builds a Clang-like “lossless syntax tree” for better error messages, which bash doesn’t have, and it parses more in a single pass).
        • However, adding static types and then naively translating the Python code to C++ actually produces something competitive with bash’s C implementation! (It was slightly faster when I wrote that blog post, is slightly slower now, and I expect it to be faster in the long run, since it’s hilariously unoptimized.)

        I think it is somewhat surprising that if you take some Python code like:

        for ch in s:
          if ch == '\n':
            pass
        

        And translate it naively to C++ (from memory, not exactly accurate):

        for (auto it = StrIter(s); !it.Done(); ) {
          auto ch = it.Next();         // heap-allocates a one-char string, like Python
          if (str_equals(ch, str1)) {  // implemented with memcmp()
             ;
          }
        }
        

        It ends up roughly as fast. That creates a heap-allocated string for every character, like Python does.

        Although I am purposely choosing the worst case here. I consciously avoided that “allocation per character” pattern in any place I thought would matter, but it does appear at least a couple of times in the code. (It will be removed, but only in the order that it actually shows up in profiles!)

        I guess the point is that there are many more allocations. Although I wrote the Cheney garbage collector in part because allocation is just bumping a pointer.
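
        To make that concrete, here’s a minimal sketch of why allocation in a Cheney-style two-space collector is essentially a bounds check plus a pointer bump. The types and names are mine for illustration, not Oil’s actual runtime:

        #include <cstddef>

        struct Space {
          char* begin;
          char* alloc_ptr;   // next free byte
          char* end;
        };

        struct CheneyHeap {
          Space from_space;
          Space to_space;

          // The Cheney scan copies live objects into to_space and swaps the
          // spaces; elided here because the point is the allocation path.
          void Collect() { /* ... */ }

          void* Allocate(std::size_t n) {
            n = (n + 7) & ~std::size_t(7);                // 8-byte alignment
            if (from_space.alloc_ptr + n > from_space.end) {
              Collect();                                  // reclaim, then assume it fits
            }
            void* p = from_space.alloc_ptr;
            from_space.alloc_ptr += n;                    // the whole "malloc"
            return p;
          }
        };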

        The garbage collector isn’t hooked up yet, and I suspect it will be slow on >100 MB heaps, but I think the average case for a shell heap size is more like 10 MB.


        I think the way I would summarize this is:

        • Some old C code is quite fast and optimized. Surprisingly, Windows XP is an example of this, even though we used to make fun of Microsoft for making bloated code.
          • bash’s code is probably 10x worse than optimal, because Oil can match it with a higher level language with less control. (e.g. all strings are values, not buffers)
        • Python can be very fast for sourcehut because web apps are mostly glue and I/O. It’s not fast for Oil’s parser because that problem is more memory intensive, and parsing creates lots of tiny objects (the lossless syntax tree).
        1. 5

          Though I would say Objective C is screamingly fast compared to common languages like Python, JavaScript (even JITted), and Ruby, even if it’s idiomatic to do a lot of heap allocations.

          Yes and no. Objective-C is really two languages, C (or C++ for Objective-C++) and Smalltalk. The C/C++ implementation is as good as gcc or clang’s C/C++ implementation. The Smalltalk part is much worse than a vaguely modern Smalltalk (none of the nice things that a JIT does, such as inline caching or run-time-type-directed specialisation). The code that I wrote was almost entirely in the Smalltalk-like subset. If I’d done it in JavaScript, most of the things that were dynamic message sends in Objective-C would have been direct dispatch, possibly even inlined, in the JIT’d JavaScript code.

          I used NSNumber objects for line numbers, for example, not a C integer type. OpenStep’s string objects have some fast paths to amortise the cost of dynamic dispatch by accessing a range of characters at once. I didn’t use any of these, and so each character lookup did return a unichar (so a primitive type, unlike your Python / C++ example) but involved multiple message sends to different objects, probably adding up to hundreds of retired instructions.
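
          Not Objective-C, but here is a rough C++ analogue of what that ranged fast path buys (the interface is hypothetical, just to show the shape of the optimisation, not the real OpenStep API): one dynamically dispatched call fetches a whole chunk of characters into a local buffer, instead of paying for dispatch on every single character.

          #include <cstddef>
          #include <cstdint>

          using unichar = uint16_t;

          // Hypothetical string interface, loosely in the spirit of OpenStep's
          // fast paths; an illustration only.
          struct AbstractString {
            virtual ~AbstractString() = default;
            virtual size_t Length() const = 0;
            virtual unichar CharacterAt(size_t i) const = 0;    // one dispatch per char
            virtual void GetCharacters(unichar* buf, size_t start,
                                       size_t count) const = 0; // one dispatch per chunk
          };

          size_t CountNewlinesSlow(const AbstractString& s) {
            size_t n = 0;
            for (size_t i = 0; i < s.Length(); i++)
              if (s.CharacterAt(i) == '\n') n++;   // dynamic dispatch on every iteration
            return n;
          }

          size_t CountNewlinesAmortised(const AbstractString& s) {
            size_t n = 0;
            unichar buf[256];
            for (size_t i = 0, len = s.Length(); i < len; i += 256) {
              size_t chunk = (len - i < 256) ? len - i : 256;
              s.GetCharacters(buf, i, chunk);      // one dispatch per 256 characters
              for (size_t j = 0; j < chunk; j++)
                if (buf[j] == '\n') n++;
            }
            return n;
          }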

          All of these were things I planned on optimising after I did some profiling and found the slow bits. I never needed to.

          Actually, that’s not quite true. The first time I ran it, I think it used a couple of hundred GiBs of RAM. I found one loop that was generating a lot of short-lived objects on each iteration and stuck an autorelease pool in there, which reduced the peak RSS by over 90%.

          bash’s code is probably 10x worse than optimal, because Oil can match it with a higher level language with less control. (e.g. all strings are values, not buffers)

          I suspect that part of this is due to older code optimising for memory usage rather than speed. If bash (or TeX) used a slow algorithm, things take longer. If they used a more memory-intensive algorithm, then the process exhausts memory and is killed. I think bash was originally written for systems with around 4 MiB of RAM, which would have run multiple bash instances and where bash was mostly expected to run in the background while other things ran, so probably had to fit in 64 KiB of RAM, probably 32 KiB. I don’t know how much RAM Oil uses (I don’t see a FreeBSD package for it?), but I doubt that this was a constraint that you cared about. Burning 1 MiB of RAM for a 10x speedup in a shell is an obvious thing to do now but would have made you very unpopular 30 years ago.

          1. 2

            Yeah the memory management in all shells is definitely oriented around their line-at-a-time nature, just like the C compilers. I definitely think it’s a good tradeoff to use more RAM and give precise Clang-like error messages with column numbers, which Oil does.

            Although one of my conjectures is that you can do a lot with optimization at the metalanguage level. If you look at the bash source code, it’s not the kind of code that can be optimized well. It’s very repetitive and there are lots of correctness issues as well (e.g. as pointed out in the AOSA book chapter which I link on my blog).

            So Oil’s interpreter is very straightforward and unoptimized, but the metalanguage of statically typed Python + ASDL allows some flexibility, like:

            • interning strings at GC time, or even allocation time (which would make string equality less expensive)
            • using 4-byte integers instead of 8-byte pointers (see the sketch after this list). This would make a big difference because the data structures are pointer-rich. However it tends to “break” debugging, so I’m not sure how I feel about it.
              • Zig does this manually but loses type safety / debuggability because all your Foo* and Bar* just become int.
            • Optimizing a single hash table data structure rather than the dozens and dozens of linked list traversals that all shells use
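
            Here is the sketch mentioned above for the 4-byte index idea, with hypothetical node types rather than Oil’s real ones: child references become 32-bit indices into per-type arrays, which roughly halves the size of pointer-rich nodes, but a typed pointer degrades into a bare integer, which is exactly the debuggability / type-safety loss mentioned above.

            #include <cstdint>
            #include <vector>

            struct WordNode { const char* text; };

            // Pointer-rich node: on a 64-bit machine each child reference is 8 bytes.
            struct CommandNode {
              WordNode* words;
              CommandNode* next;
              int line;
            };

            // Index-based version: children are 4-byte indices into type-specific
            // pools. Denser and friendlier to GC/serialisation, but a WordIndex is
            // just an integer, so the debugger can't show you what it refers to.
            using WordIndex = uint32_t;     // index into Arena::words
            using CommandIndex = uint32_t;  // index into Arena::commands

            struct CommandNode32 {
              WordIndex words;
              CommandIndex next;
              int line;
            };

            struct Arena {
              std::vector<WordNode> words;
              std::vector<CommandNode32> commands;
            };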

            All of these things are further off than I thought they would be … but I still think it is a good idea to use the “executable spec” strategy, since codebases like bash tend to last 30 years or so, and are in pretty bad shape now. At a recent conference the maintainer emphasized that the possibility of breakage is one reason that it moves relatively slowly and new features are rejected.

            One conjecture I have about software is:

            • Every widely used codebase that’s > 100K lines is 10x too slow in some important part, and it’s no longer feasible to optimize
            • Every widely used codebase that’s > 1M lines is 100x too slow in some important part, …

            (Although ironically even though bash’s man page says “it’s too big and too slow”, it’s actually small and fast compared to modern software!)

            I think this could explain your pdflatex observations, although I know nothing about that codebase. Basically I am never surprised that when I write something “from scratch” it is fast (even in plain Python!), simply because it’s 2K or 5K lines of code tuned to the problem, and existing software has grown all sorts of bells and whistles and deoptimizations!

            Like just being within 10x of the hardware is damn good for most problems, and you even can do that in Python! (though the shell parser/interpreter was a notable exception to this! This problem is a little more demanding than I thought)

            1. 4

              Every widely used codebase that’s > 100K lines is 10x too slow in some important part, and it’s no longer feasible to optimize

              That’s an interesting idea. I don’t think it’s universally true, but it does highlight the fact that designing to enable large-scale refactoring is probably the most important goal for long-term performance. Unfortunately I don’t think anyone actually knows how to do this. To give a concrete example, LLVM has the notion of function passes. These are transforms that run over a single function at a time. They are useful as an abstraction because they don’t invalidate the analysis results of any other function. At a high level, you might assume that you could then run function passes on all functions in a translation unit at the same time. Unfortunately, there are some core elements of the design that make this impossible. The simplest one is that all values, including globals, have a use-def chain; adding (or removing) a use of a global in a function is permitted in a function pass, and doing that from passes running in parallel would require synchronisation. If you were designing a new IR from scratch then you’d probably try to treat a function or a basic block as an atomic unit and require explicit synchronisation or communication to operate over more than one. LLVM has endured a lot of very invasive refactorings (at the moment, pointers are in the process of losing the pointee type as part of their type, which is a huge change) but the changes required to make it possible to parallelise this aspect of the compiler are too hard. Instead, it’s worked around with things like ThinLTO.
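
              A toy model of that constraint (my own types, not LLVM’s actual classes): every value records its uses, so a transform that only “touches one function” still mutates state hanging off the global, which a pass running on some other function might be mutating at the same moment.

              #include <vector>

              // Toy IR: each Value keeps a list of the instructions that use it
              // (its use-def chain). For a GlobalVariable that list is shared by
              // every function in the module.
              struct Instruction;

              struct Value {
                std::vector<Instruction*> uses;
              };

              struct GlobalVariable : Value {};

              struct Instruction {
                Value* operand;
              };

              struct Function {
                std::vector<Instruction*> body;
              };

              // A "function pass" that adds a load of a global to F. The first
              // push_back is function-local and safe to do in parallel; the second
              // touches module-wide shared state and would need synchronisation if
              // passes ran on different functions concurrently.
              void AddUseOfGlobal(Function& F, GlobalVariable* G) {
                Instruction* load = new Instruction{G};
                F.body.push_back(load);
                G->uses.push_back(load);
              }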

              I think this could explain your pdflatex observations, although I know nothing about that codebase. Basically I am never surprised that when I write something “from scratch” it is fast (even in plain Python!), simply because it’s 2K or 5K lines of code tuned to the problem, and existing software has grown all sorts of bells and whistles and deoptimizations!

              There are two problems with [La]TeX. The first is that it’s really a programming language with some primitives that do layout. A TeX document is a program that is interpreted one character at a time with an interpreter that looks a lot like a Turing machine consuming its tape. Things like LaTeX and TikZ look like more modern programming or markup languages but they’re implemented entirely on top of this Turing-machine layer and so you can’t change that without breaking the entire ecosystem (and a full TeXLive install is a few GiBs of programs written in this language, so you really don’t want to do that).

              The second is that TeX has amazing backwards compatibility guarantees for the output. You can take a TeX document from 1978 and typeset it with the latest version of TeX and get exactly the same visual output. A lot of the packages that exist have made implicit assumptions based on this guarantee and so even an opt-in change to the layout would break things in unexpected ways.

              Somewhat related to the first point, TeX has a single-pass output concept baked in. Once output has been shipped to the device, it’s gone. SILE can do some impressive things because it treats the output as mutable until the program finishes executing. For example, in TeX, if you want to do a cross-reference to a page that hasn’t been typeset yet then you need to run TeX twice. The first time will emit the page numbers of all of the labels, the second time will insert them into the cross references. This is somewhat problematic because the first pass will put ‘page ?’ in the output and the second might put ‘page 100’ in the output, causing reflow and pushing the reference to a different place. In some cases this may then cause it to be updated to page 99, which would then cause reflow again. This is made worse by some of the packages that do things like ‘on the next page’ or ‘above’ or ‘on page 42 in section 3’ depending on the context and so can cause a lot of reflowing. In SILE, the code that updates these references can see the change to the layout and if it doesn’t reach a fixed point after a certain number of iterations then it can fall back to using a fixed-width representation of the cross-reference or adding a small amount of padding somewhere to prevent reflows.
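
              The fixed-point loop you describe might look roughly like this (a sketch with made-up types, not SILE’s actual code): re-typeset with the latest cross-reference text until the layout stops moving, and after a few rounds fall back to fixed-width references so it cannot oscillate forever.

              #include <functional>
              #include <map>
              #include <string>

              // Hypothetical types standing in for a SILE-like typesetter.
              struct Layout {
                std::map<std::string, int> label_pages;  // label -> page it landed on
                bool operator==(const Layout& o) const {
                  return label_pages == o.label_pages;
                }
              };

              using RefText = std::map<std::string, std::string>;       // label -> rendered text
              using TypesetFn = std::function<Layout(const RefText&)>;  // the layout engine

              RefText RenderRefs(const Layout& l, bool fixed_width) {
                RefText out;
                for (const auto& [label, page] : l.label_pages)
                  out[label] = fixed_width
                                   ? "p. " + std::to_string(page)        // stable width
                                   : "on page " + std::to_string(page);  // may reflow
                return out;
              }

              Layout ResolveCrossReferences(const TypesetFn& typeset, int max_iters = 5) {
                Layout prev = typeset({});   // first pass: every reference is still "page ?"
                for (int i = 0; i < max_iters; i++) {
                  Layout next = typeset(RenderRefs(prev, /*fixed_width=*/false));
                  if (next == prev) return next;  // fixed point: nothing moved
                  prev = next;                    // references pushed text around; retry
                }
                // No fixed point: fall back to fixed-width references (or add padding)
                // so updating a reference can no longer cause reflow.
                return typeset(RenderRefs(prev, /*fixed_width=*/true));
              }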

              1. 1

                … designing to enable large-scale refactoring is probably the most important goal for long-term performance.

                Yes! In the long run, architecture dominates performance. That is one thesis / goal behind Oil’s unusual implementation strategy – i.e. writing it in high level DSLs which translate to C++.

                I’ve been able to refactor ~36K lines of code aggressively over 5 years, and keep working productively in it. I think that would have been impossible with 200K-300K lines of code. In my experience, that’s about the size where code takes on a will of its own :-)

                (Bash is > 140K lines, and Oil implements much of it, and adds a rich language on top, so I think the project could have been 200K-300K lines of C++, if it didn’t fall over before then)

                Another important thesis is that software architecture dominates language design. If you look at what features get added to, say, Python or Ruby, it’s often what is easy to implement. The Zen of Python even says this, which I quoted here: http://www.oilshell.org/blog/2021/11/recent-progress.html#how-osh-is-implemented-process-tools-and-techniques

                When you add up that effect over 20-30 years, it’s profound!


                The LLVM issues you mention remind me of the talks I watched on MLIR – Lattner listed a bunch of regrets with LLVM that he wants to fix with a new IR. Also I remember him saying a big flaw with Clang is that there is no C++ IR. That is, unlike Swift and the machine learning compiler he worked on at Google, LLVM itself is the Clang IR.

                Also I do recall watching a video about pass reordering, although I don’t remember the details.


                Yes to me it is amazing that TeX has survived for so long, AND that it still has those crazy limitations from hardware that no longer exists! Successful software lasts such a long time.

                TeX and Oil have that in common – they have an unusual “metalanguage”! As I’m sure you know, in TeX it’s WEB and Pascal-H. I linked an informative comment below about that.

                In Oil it’s statically typed Python, ASDL for algebraic types, and regular languages. It used to be called “OPy”, but I might call this collection of DSLs “Pea2” or something.

                So now it seems very natural to mention that I’m trying to fund and hire a compiler engineer to speed up the Oil project:

                https://github.com/oilshell/oil/wiki/Compiler-Engineer-Job (very rough draft)

                (Your original comment about the dynamic parts of Objective C and their speed is very related!)

                What I would like a compiler engineer to do is to rewrite a Python front end in Python, which is just 4K lines of code, but might end up at 8K.

                And then enhance a 3K C++ runtime for garbage-collected List<T> and Dict<K, V>. And debug it! I spent most of my time in the debugger.

                This task is already half done, passing 1131 out of ~1900 spec tests.

                https://www.oilshell.org/release/0.9.6/pub/metrics.wwz/line-counts/for-translation.html

                It seems like you have a lot of relevant expertise and probably know many people who could do this! It’s very much engineering, not research, although it seems to fall outside of what most open source contributors are up for.

                I’m going to publicize this on my blog, but I’m letting people know ahead of time. I know there are many good compiler engineers who don’t read my blog, or who don’t read Hacker News, or who have never written open source (i.e. prefer being paid).

                (To fund this, I applied for a €50K grant which I’ll be notified of by February, and I’m setting up GitHub sponsors. Progress will also be on the blog.)


                Someone replied to me with nice info about TeX metalanguages: https://news.ycombinator.com/item?id=16526151

                Today, major TeX distributions have their own Pascal(WEB)-to-C converters, written specifically for the TeX (and METAFONT) program. For example, TeX Live uses web2c[5], MiKTeX uses its own “C4P”[6], and even the more obscure distributions like KerTeX[7] have their own WEB/Pascal-to-C translators. One interesting project is web2w[8,9], which translates the TeX program from WEB (the Pascal-based literate programming system) to CWEB (the C-based literate programming system).

                The only exception I’m aware of (that does not translate WEB or Pascal to C) is the TeX-GPC distribution [10,11,12], which makes only the changes needed to get the TeX program running with a modern Pascal compiler (GPC, GNU Pascal).

        2. 4

          Windows XP is an example of this, even though we used to make fun of Microsoft for making bloated code.

          It doesn’t surprise me that we’d feel this way now. From memory (I didn’t like XP enough to have played with it in virtualization at any time since Windows software moved on from supporting it) Windows XP was slow for a few reasons:

          1. It included slow features that its predecessor didn’t. Like web rendering on the desktop, indexing for search, additional visual effects in critical paths in the GUI, etc.
          2. It needed a lot more RAM than NT4 or 2000 did. Many orgs had sized their PCs for NT 4 and tried to go straight to XP on the same hardware, and MS had been super conservative about minimum RAM requirements. So systems that met the minimums were miserable.
          3. (related to 2) It had quite a bit more managed code in the desktop environment, which just chewed RAM.

          If you tried to install it on a 16MB or 32MB system that seemed just fine with NT SP6 or 2k, you had a bad time. Now, as you point out, we just toss 256MB at it without thinking. Some of the systems in the field when it was released, that MS told us could run XP, could not take 256MB of RAM.

          1. 2

            I think you’re mis-remembering the memory requirements of 1990s WinNT a little bit. :-)

            I deployed NT 3.1 in production. It just about ran in 16MB, and not well. 32MB was realistic.

            NT 4 was OK in 32MB, decent in 64MB, and the last box I gave to someone had 80MB of RAM and it ran really quite well in that.

            I deployed an officeful of Win2K boxes in 2000 on Athlons with 128MB of RAM, and six months later, I had to upgrade them all to 256MB to make it usable. (I was canny; I bought 256MB for half of them, and used the leftover RAM to upgrade the others, to minimise how annoyed my client was at needing to upgrade still-new PCs.)

            XP in 128MB was painful, but it was just about doable in 192MB (the unofficial maxed-out capacity of my Acer-made Thinkpad i1200 series 1163G) and acceptable in 256MB.

            For an experiment, I ran Windows 2000 (no SPs or anything) on a Thinkpad 701C – the famous Butterfly folding-keyboard machine – in 40MB of RAM. On a 486. It was just marginally usable if you were extremely patient: it booted, slowly, it logged in, very slowly, and it could connect to the Internet, extremely slowly.

            1. 2

              I will just believe you… I’m not going to test it :)

              I remember that I had rooms full of PCs that were OK with either NT4 or 2K, and were pretty much unusable on XP despite vendor promises. The fact that I’ve forgotten the exact amounts of RAM where those lines fell is a blessing. I’m astonished but happy that I’ve finally forgotten… it was such a deeply ingrained thing for so long.

              1. 2

                :-D

                That sounds perfectly fair! ;-)

                The thing about RAM usage that surprised me in the early noughties was how much XP grew in its lifetime. When it was new, yeah, 256MB and it ran fairly well. Towards the end of its useful lifetime, you basically had to max out a machine to make it run decently – meaning, as it was effectively 32-bit only, 3 and a half (or so) gigs of RAM.

                One of the things that finally killed XP was that XP64 was a whole different OS (a cut-down version of Windows Server 2003, IIRC) and needed new drivers and so on. So if you wanted good performance, you needed more RAM, and if you needed more than three-and-a-bit gigs of RAM, you had to go to a newer version of Windows to get a proper 64-bit OS.

                For some brave souls that meant Vista (which, like Windows ME, was actually fairly OK after it received a bunch of updates). But for most, it meant Windows 7.

                And that in turn is why XP was such a long-lived OS, as indeed was Win7.

                Parenthetical P.S.: whereas, for comparison, a decent machine for Win7 in 2009 – say a Core i5 with 8GB of RAM – is still a perfectly usable Windows 10 21H2 machine now in 2022. Indeed I bought a couple of Thinkpads of that sort of vintage just a couple of months ago.

          2. 1

            Yeah I think all of that is true (although I don’t remember any managed code.) So I guess my point is that the amount of software bloat is just way worse now, so software with small amounts of bloat like XP seem ultra fast.

            Related thread from a month ago about flatpak on Ubuntu:

            https://lobste.rs/s/ljsx5r/flatpak_is_not_future#c_upxzcl

            One commenter pointed out SSDs, which I agree is a big part of it, but I think we’ve had multiple hardware changes that are orders-of-magnitude increases since then (CPU, memory, network). And ALL of it has been chewed up by software. :-(

            And I don’t think this is an unfair comparison, because Windows XP had networking and a web browser, unlike say comparing to Apple II. It is actually fairly on par with what a Linux desktop provides.

      2. 4

        I cut my teeth on AppleSoft BASIC in the 1980s. The only affordance for “structured programming” was GOSUB and the closest thing there was to an integrated assembler was a readily accessible system monitor where you could manually edit memory. The graphics primitives were extremely limited. (You could enable graphics modes, change colors, toggle pixels, and draw lines IIRC. You might have been able to fill regions, too, but I can’t swear to that.) For rich text, you could change foreground and background color. Various beeps were all you could do for sound, unless you wanted to POKE the hardware directly. If you did that you could do white noise and waveforms too. I don’t have enough time on the CoCo to say so with certainty, but I believe it was closer to the Apple experience than what you describe.

        The thing that I miss about it most, and that I think has been lost to some degree, is that the system booted instantly to a prompt that expected you to program it. You had to do something else to do anything other than program the computer. That said, manually managing line numbers was no picnic. And I’m quite attached to things like visual editing and syntax highlighting these days. And while online help/autocomplete is easier than thumbing through my stack of paper documentation was, I might have learned more, more quickly, from that paper.

        1. 2

          Before Applesoft BASIC there was Integer BASIC, which came with the Mini-Assembler. It was very crappy though, and not a compelling alternative to graph paper and a copy of the instruction set. I remember a book on game programming on the Apple II that spent almost half the book writing an assembler in Applesoft BASIC, just to get to the good part!

          1. 1

            I remember Integer BASIC only because there were a few systems around our school where you needed to type “FP” to get to Applesoft before your programs would work. I don’t remember the Mini-Assembler at all.

        2. 1

          Color BASIC on the CoCo was from Microsoft, and it wasn’t too different from some of the other BASICs of the time, but did require a little adaptation for text-only stuff. Extended Color BASIC (extra cost option early on) added some graphics commands in various graphics modes. With either version of Color BASIC, the only structured programming was via GOSUB. Variable names were limited to one or two letters for floats and for strings.

          Unfortunately, the CoCo didn’t ship with an assembler / debugger built in; you had to separately buy the EDTASM cartridge (or the later floppy disk version).

      3. 4

        My “computers are fast” moment: I was trying to get better image compression. I discovered that an existing algorithm randomly generated better or worse results depending on hyperparameters, so I just tried a bunch of them in a loop to find the best:

        int best = 0;
        for (int i = 0; i < 100; i++) {
           int result = try_with_parameter(i);
           if (result > best) best = result;
        }
        

        And it worked great, still under a second. Then I found the original 1982 paper about this algorithm, where they said their university mainframe took 2 minutes per try, on way smaller data. Now I know why they hardcoded the parameters instead of finding the best one.

        1. 12

          A lot of the new and exciting work in compilers for the last 20 years has been implementing algorithms that were published in the ‘80s but ignored because they were infeasible on available hardware. When LLVM started doing LTO, folks complained that you couldn’t link a release build of Firefox on a 32-bit machine anymore. Now a typical dev machine for something as big as Firefox has at least 32 GiB of RAM and no one cares (though they do care that fat LTO is single threaded). The entire C separation of preprocessor, compiler, assembler, and linker exists because each one of those could fit independently in RAM on a PDP-11 (and the separate link step originates because Mary Allen Wilkes didn’t have enough core memory on an IBM 704 to fit a Fortran program and all of the library routines that it might use). Being able to fit an entire program in memory in an intermediate representation over which you could do whole-program analysis was unimaginable.

          TeX has a fantastic dynamic programming algorithm for finding optimal line breaking points in a paragraph. In the paper that presents the algorithm, it explains that it would also be ideal to use the same algorithm for laying out paragraphs on the page but doing this for a large document would require over a megabyte of memory and so is infeasible. SILE does the thing that the TeX authors wished they could do, using the algorithm exactly as they described in the paper.
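
          For a flavour of that algorithm, here is a much-simplified sketch of the dynamic program (it minimises squared slack over whole lines and ignores the stretch/shrink, penalties and hyphenation that real Knuth-Plass handles; it also assumes no single word is wider than the line):

          #include <limits>
          #include <vector>

          // Choose breakpoints that minimise the total squared slack of every
          // line, so a slightly loose line early on can be traded against the
          // rest of the paragraph, instead of greedily filling line by line.
          std::vector<int> OptimalBreaks(const std::vector<double>& word_widths,
                                         double space, double line_width) {
            int n = static_cast<int>(word_widths.size());
            const double INF = std::numeric_limits<double>::infinity();
            std::vector<double> best(n + 1, INF);  // best[i]: cost of laying out words [0, i)
            std::vector<int> prev(n + 1, 0);       // prev[i]: start of the line ending at i
            best[0] = 0;

            for (int i = 1; i <= n; i++) {
              double width = 0;
              for (int j = i; j >= 1; j--) {       // try putting words [j-1, i) on one line
                width += word_widths[j - 1] + (j < i ? space : 0);
                if (width > line_width) break;
                double slack = line_width - width;
                double cost = (i == n) ? 0 : slack * slack;  // the last line is free
                if (best[j - 1] + cost < best[i]) {
                  best[i] = best[j - 1] + cost;
                  prev[i] = j - 1;
                }
              }
            }

            std::vector<int> breaks;               // word index where each line starts
            for (int i = n; i > 0; i = prev[i]) breaks.insert(breaks.begin(), prev[i]);
            return breaks;
          }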

      4. 2

        RIGHT!? A few years ago, I stumbled upon an old backup CD on which, around 2002 or so, I dumped a bunch of stuff from my older, aging Pentium II’s hard drive. This included a bunch of POV-Ray files that, I think, are from 1999 or so, one of which I distinctly recall taking about two hours to render at a nice resolution (800x600? I don’t think I’d have dared try 1024x768 on that). It was so slow that you could almost see every individual pixel coming up into existence. In a fit of nostalgia I downloaded a more recent version of POV-Ray and after some minor fiddling to get it working with modern POV-Ray versions, I tried to render it at 1024x768. It took a few seconds.

        I was somewhat into 3D modelling at the time but I didn’t have the computer to match. Complicated scenes required some… creative fiddling. I’d do various parts of the scene in Moray 2 (anyone remember that?) in several separate files, so I could render them separately while working on them. That way it didn’t take forever to do a render. I don’t recall why (bugs in Moray? poor import/copy-paste support when working with multiple files?) but I’d then export all of these to POV-Ray, paste them together by hand, and then do a final render.

        I don’t know what to think about language friendliness either, and especially programming environment friendliness. I’m too young for 1987 so I can’t speak for BASIC and the first lines of code I ever wrote were in Borland Pascal, I think. But newer environments weren’t all that bad either. My first real computer-related job had me doing things in Flash, which was at version… 4 or 5, I think, back then? Twenty years later, using three languages (HTML, CSS and JS), I think you can do everything you could do in Flash (a recent-ish development, though – CSS seems to have improved tremendously in the last 5-7 years), but with orders of magnitude more effort, and in a development environment that’s significantly worse in just about every way there is. Thank God that dreadful Flash plugin is dead, but still…

        For a long time I thought this was mostly a “by programmers, for programmers” thing – the march of progress inevitably gave rise to more complex tools, which not everyone could use, so we were generally better off, but non-programmers were not. For example, lots of people at my university lamented the obsolescence of Turbo C – they were electrical engineers who mostly cared about programming insofar as it allowed them to crunch numbers quickly and draw pretty graphics. Modern environments could do a lot more things, but you also paid the price of writing a lot more boilerplate in order to draw a few circles.

        But after I’ve been at it for a while I’m not at all convinced things are quite that simple. For example, lots of popular application development platforms today don’t have a decent GUI builder, or any GUI builder at all, for that matter, and writing GUI applications feels like an odd mixture of “Holy crap the future is amazing!” and “How does this PDP-11 fit in such a small box!?”. Overall I do suppose we’re better off in most ways but there’s been plenty of churn that can’t be described as “progress” no matter how much you play with that word’s slippery definition.

        Edit: on the other hand, there’s a fun thread over at the Retrocomputing SO about how NES games were developed. This is a development kit. Debugging involved quite some creativity. Now you can pause an emulator and poke through memory at will. Holy crap is the future awesome!

        1. 1

          I’ve been thinking about UI builders, and from my experience, I think they’ve fallen out of favor largely because the result is harder to maintain than a “UI as code” approach.

          1. 4

            They haven’t really kept up with the general shift in development and business culture, that’s true. The “UI description in one file, logic in another file, with boilerplate to bind them” paradigm didn’t make things particularly easy to maintain, but it was also far more tolerable at a time when shifting things around in the UI was considered a pretty bad idea rather than an opportunity for an update (and beefing up some KPIs and so on).

            A great deal of usefulness at other development stages has been lost though. At one point we used to be able to literally sit in the same room as the UX folks (most of whom had formal, serious HCI education but that’s a whole other can of worms…) and hash out the user interfaces based on the first draft of a design. I don’t mean new wireframes or basic prototypes, I mean the actual UI. The feedback loop for many UI change proposals was on the order of minutes, and teaching people who weren’t coders how to try them out themselves essentially involved teaching them what flexible layouts are and how to drag’n’drop controls, as opposed to a myriad CSS and JS hacks. For a variety of reasons (including technology) interfaces are cheaper to design and implement today, but the whole process is a lot slower in my experience.

    2. 1

      Does anybody know what that professor’s company made?

    3. 1

      I got my start in programming about 10 years after Ovid but I did have one very similar experience. I had done some BASIC out of books starting from around 1996, but the year I took my first actual programming course (1998), the school had just upgraded the campus-wide network from Token Ring to Ethernet… and for the first 6 or 8 weeks of the school year, none of the computers worked.

      So we learned the language, we learned to “run” our programs with pen and paper by tracing through the logic and marking down the variable values, and we turned in handwritten assignments (nothing over 1 page). If you ask me it was pretty beneficial.