1. 16

  2. 4

    This brings back fond memories of when I rescued an old AlphaStation from a customer’s recycling bin and installed OpenBSD on it.

    I do agree about the noise, however; the thing was like a jet engine.

    1. 4

      At 266MHz, it’s a pretty slow Alpha - the later ones were even louder. There are lots of things I like about the Alpha but it was very much a system designed by hardware architects who didn’t ever talk to software people. The memory model was very exciting. One of the things that I hate the most about working with Linux is that the kernel memory model is defined in terms of the barriers that Alpha needs, not in terms of release / acquire semantics on objects (as C[++]11 did and the FreeBSD kernel adopted). Most operating systems have given up supporting SMP on Alpha because the memory model imposes a huge maintenance burden on everything else. When WG21 was working on atomics for C++11, compatibility with Alpha was an explicit non-goal.
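
      As a concrete illustration of the release / acquire style mentioned above, here is a minimal C11 sketch (names invented) of the usual publication pattern. On nearly every other architecture the acquire load alone is enough to order the dereference; Alpha notoriously needed an extra read barrier even for the data-dependent load, which is the case the Linux kernel memory model still has to describe.

      ```c
      /* Minimal sketch of C11 release/acquire publication (names invented). */
      #include <stdatomic.h>
      #include <stddef.h>

      struct msg { int payload; };

      static _Atomic(struct msg *) shared = NULL;

      void producer(struct msg *m) {
          m->payload = 42;                                          /* plain store        */
          atomic_store_explicit(&shared, m, memory_order_release);  /* publish the object */
      }

      int consumer(void) {
          /* The acquire load orders the dereference after it on mainstream
           * architectures; Alpha alone also needed a barrier for this dependent load. */
          struct msg *m = atomic_load_explicit(&shared, memory_order_acquire);
          return m ? m->payload : -1;
      }
      ```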

      Floating point on the Alpha was very fast, at the expense of basically no synchronisation between the integer and floating point pipelines. This made a bunch of things interesting; the most significant was floating point exceptions. If you wanted to avoid paying around a 10x performance penalty, you needed to put floating point exceptions in imprecise mode, and you’d then end up with an exception happening at a point in your program completely unrelated to the instruction that actually caused the fault. Floating point exceptions were basically a program-terminating event.
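
      One portable way to live with imprecise (or disabled) traps is to stop caring where the exception fires and instead poll the sticky flags afterwards. A rough C99 sketch using the standard fenv.h interface; the kernel() function is an invented stand-in:

      ```c
      /* Sketch: clear the FP flags, run the computation, then check afterwards
       * whether anything went wrong, rather than relying on a precise trap. */
      #include <fenv.h>
      #include <stdio.h>

      #pragma STDC FENV_ACCESS ON

      double kernel(double x) { return 1.0 / x; }   /* stand-in computation */

      int main(void) {
          feclearexcept(FE_ALL_EXCEPT);
          double r = kernel(0.0);
          if (fetestexcept(FE_DIVBYZERO | FE_OVERFLOW | FE_INVALID))
              fprintf(stderr, "an FP exception occurred somewhere in kernel()\n");
          printf("%g\n", r);
          return 0;
      }
      ```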

      Alpha also took RISC a bit beyond its logical conclusion. I suspect Alpha is the reason for things like int_fast8_t in C99. The Alpha had only 32- and 64-bit loads and stores. If you wanted to do a narrower load, you had to do a load and a shift / mask. A narrower store ended up being a read-modify-write.
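
      Roughly what that meant for the compiler, sketched in C rather than the actual insert/mask instruction sequence, for a byte store on a pre-BWX Alpha (illustrative only, ignoring aliasing rules):

      ```c
      /* What a single byte store cost without byte/word load-store instructions:
       * an aligned 64-bit load, a mask-and-insert, and a 64-bit store back. */
      #include <stdint.h>

      void store_byte(uint8_t *p, uint8_t value) {
          uint64_t *word = (uint64_t *)((uintptr_t)p & ~(uintptr_t)7); /* aligned quadword */
          unsigned  shift = ((uintptr_t)p & 7) * 8;                    /* byte lane        */
          uint64_t  old   = *word;                                     /* read             */
          old &= ~((uint64_t)0xff << shift);                           /* clear the lane   */
          old |=  ((uint64_t)value << shift);                          /* modify           */
          *word = old;                                                 /* write back       */
      }
      ```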

      Everything was designed so that the hardware could retire insane numbers of instructions per second, but the Alpha needed to be able to do this to compensate for its comparatively high dynamic instruction count relative to other architectures.

      1. 3

        “but it was very much a system designed by hardware architects who didn’t ever talk to software people.”

        I can’t speak to that. I do think the PALcode mechanism on the Alpha was great for software people. Many things people want to do in OSes, VMs, transactions, etc. could benefit from that today.

        One of the things I wanted someone to do with a RISC-V CPU was to either implement a PALcode-like feature or a microcoded one with an HLL-to-microcode compiler (example pdf). If a Raspberry Pi-like board existed, hobbyists could build all kinds of experimental kernels and language runtimes on it with few to zero issues from abstraction gaps. A commercial, embedded offering might allow customization for competitive differentiation in the performance, determinism, debuggability, or security of various platforms. A server version might do the same. Embedded providers would customize more, though, and server-side adoption would likely be limited by the performance gap with existing, full-custom designs.

        1. 2

          I can’t speak to that. I do think the PALcode mechanism on the Alpha was great for software people. Many things people want to do in OSes, VMs, transactions, etc. could benefit from that today.

          I’m a huge fan of PALcode, though it was more useful on single-core systems than with SMP. You could do things like atomic append to queue in PALcode trivially on a single processor because the PALcode ran with interrupts disabled. On an SMP machine, you needed to acquire locks or build a lock-free data structure (very hard with Alpha’s memory model) and at that point you may as well implement it in userspace / kernelspace.
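
          For contrast, here is a hedged C11 sketch of the SMP alternative being described: a lock-free stack push (type and field names are invented). The release on success publishes the node; a popper would pair it with an acquire, and on Alpha would also have needed a barrier before chasing node->next, which is part of what made lock-free structures so painful there.

          ```c
          #include <stdatomic.h>
          #include <stddef.h>

          struct node { struct node *next; int value; };

          static _Atomic(struct node *) head = NULL;

          void push(struct node *n) {
              struct node *old = atomic_load_explicit(&head, memory_order_relaxed);
              do {
                  n->next = old;                       /* link to current head        */
              } while (!atomic_compare_exchange_weak_explicit(
                           &head, &old, n,
                           memory_order_release,       /* success: publish the node   */
                           memory_order_relaxed));     /* failure: retry with new old */
          }
          ```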

          One of the things I wanted someone to do with a RISC-V CPU was to either implement a PALcode-like feature or a microcoded one with an HLL-to-microcode compiler

          PALcode on RISC-V is called ‘Machine Mode (M-Mode)’. On Arm it’s called Exception Level 3. Good ideas never completely die.

          That said, I would be interested in playing with a RISC-V version of Alto microcode, where each ‘instruction’ is just a byte that indexes into a table of microcode instructions so that you effectively have hardware assist for writing jump-threaded interpreters. You could probably make that quite fast today and I’d be really curious whether it would outperform an optimised bytecode interpreter.
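
          For reference, here is what jump-threaded dispatch looks like in software today, using the GCC/Clang computed-goto extension (the opcodes and toy program are invented). The idea above would effectively move the table lookup and indirect jump into hardware.

          ```c
          /* Each bytecode is a byte indexing a table of handler addresses; every
           * handler ends by jumping straight to the next handler, with no central loop. */
          #include <stdio.h>

          enum { OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };

          void run(const unsigned char *pc) {
              static void *table[] = { &&op_push1, &&op_add, &&op_print, &&op_halt };
              long stack[64], *sp = stack;

              #define DISPATCH() goto *table[*pc++]
              DISPATCH();

          op_push1:  *sp++ = 1;                  DISPATCH();
          op_add:    sp--; sp[-1] += sp[0];      DISPATCH();
          op_print:  printf("%ld\n", sp[-1]);    DISPATCH();
          op_halt:   return;
              #undef DISPATCH
          }

          int main(void) {
              const unsigned char prog[] = { OP_PUSH1, OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };
              run(prog);   /* prints 2 */
              return 0;
          }
          ```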

        2. 2

          I honestly had no idea about any of this. I rescued it from the dumpster, hauled it to work(!), stuck it in a closet with a network connection and SSHd into it.

          It was mostly, “hey, free hardware!” with the added frisson of |D|E|C| hardware quality (I found a service manual which had hand-draughted illustrations) and exotic stuff like ECC memory. I didn’t know what I was doing; I wedged the entire system trying to do a make world and getting the Perl dependencies wrong.

          1. 2

            I remember buying Computer Shopper in the ’90s and seeing adverts for 266MHz Alphas from a PC manufacturer on the inside cover. At the time, the fastest Pentium you could buy was 100MHz (and that seemed insanely fast). The Pentium Pro line at up to 200MHz came out at about that time, but then the Alpha workstations were at 500MHz. Even running emulated 32-bit x86 code, the Alphas with Windows NT were about as fast as the fastest Intel CPUs, and running native code they were insanely fast. I really wanted one.

            This was pretty much the peak for the Alpha. As SMP and multithreading became more important, it struggled a bit. SMP with multiple single-threaded processes was more or less fine; the incredibly weak memory model wasn’t a problem when there was no memory shared between CPUs. It was a problem for multithreaded programs, though: things written for x86 (or even something like SPARC, which had a slightly less weak memory model) often didn’t work correctly on Alpha.

            Digital (by then part of Compaq) largely discontinued Alpha development to focus on Itanium, and by the time the Pentium III passed 1GHz the Alphas weren’t really performance kings anymore. Itanium didn’t live up to its promise and there’s been a lot of speculation about what would have happened if they had pushed ahead with the EV8 Alphas instead. My guess is that Alpha would still be a dead architecture today, just one that died a few years later, but it’s also possible that they’d have extended the instruction set to support narrower loads and stores and made the memory model a bit stronger. It’s not clear whether it would have remained competitive.

            Around 2000 there was a shift back towards richer instruction sets: even Arm now tries not to describe itself as a RISC architecture, preferring the term ‘load-store architecture’. There’s a lot of benefit (in terms of store buffer / register rename complexity) in separating loads and stores from other operations, but the amount of work you can do in a single pipelined instruction became much greater. I have no idea what an Alpha competitor to Arm’s Scalable Vector Extension would have looked like, and that’s the kind of thing that gives impressive performance wins today.

      2. 2

        To be honest, you aren’t going to have a fun time treating an old machine running an OS that’s too big for it as if it were a modern machine. “I can’t browse the web” is completely expected and not terribly interesting.

        I think vintage machines are more interesting running their native software. Why bother having a slow OpenBSD box that barely anything works on, when you can run VMS instead?

        1. 3

          It depends on your goal for a vintage machine like this. If the goal is to have fun with it every few months or years and keep it in storage between the occasional play session, I agree with you that the original software is the most fun. If your goal is to see how far you can push a machine as a programmer, using newer software can expand the machine’s usefulness drastically. For a late-90s example, Linux on sparc64 is a supported LLVM target, whereas a period-correct version of Solaris may not come with a useful C compiler. If your goal is to see how much useful work the machine can do as a networked server, running newer software avoids the risk of exposing easily exploitable vintage software while giving you a larger selection of software packages to choose from.

          A machine of this age can’t live up to our expectations for a modern machine, but it doesn’t need to meet those expectations to still be useful. “What can this machine still do well?” is an interesting question in itself.