1. 10

  2. 10

    So Intel promised that there would be a magic x86 part glued on the side of the first few generations of Itaniums that would run all of your existing important programs.

    Not quite. They originally promised that Itanium was carefully designed so that you could emulate x86 in software and it would run faster than on any x86 chip. This didn’t seem implausible. The FX!32 emulator on Alpha ran some x86 workloads faster than any chip Intel sold, and HP’s Dynamo had just shown that a PA-RISC-on-PA-RISC emulator doing dynamic optimisation could outperform native execution by 10-20%, so the idea that you could build an intrinsically more efficient core and use binary translation to run faster than a real x86 chip seemed quite plausible. Some years later, Nvidia built on the work from Transmeta and showed that a VLIW pipeline doing binary translation from AArch64 could outperform a lot of AArch64 cores, so in the abstract it still seems a realistic claim; it’s just that Itanium really wasn’t the architecture to deliver it.

    It wasn’t until the second generation that Intel started bundling an x86 core with the Itaniums. This pushed the cost up, which made adopting Itanium even more expensive. As I recall, it was also a case of terrible timing: the x86 core that they included wasn’t the fastest, and the software emulation improved to the point where, while it couldn’t beat the fastest x86 chips, it could beat the x86 core on the Itanium die. This has also been repeated a few times: some modern Arm implementations include a 32-bit compatibility layer in hardware that runs slower than a modern JIT-based emulator running AArch32 binaries.

    I also wouldn’t underestimate The Register’s part in this. Every story that they ran about Itanium referred to it as Itanic. Even before it shipped, they’d helped to establish it as a failure in the minds of a lot of tech buyers.

    Even without AMD, it’s not clear to me that Itanium would have been a success. It did what it was intended to do: it killed off PA-RISC and Alpha and mostly killed SPARC. PowerPC limped on a lot longer. It didn’t do this by being better than them, it did so by Intel telling everyone it would be better and having other companies either give up on trying to compete with it or by investing so much to ensure that they were faster than Itanium that their unit price went up and their sales tanked.

    If AMD hadn’t existed, we might all be using Arm or MIPS machines now. Both ISAs were primarily playing at the low end and were kept out of the desktop market by Intel’s x86 investment. With that being diverted to Itanium, the 32-bit desktop market would have been vulnerable to low-cost Arm cores with better performance than x86 and much lower cost than Itanium. Microsoft had Arm and MIPS ports and you can bet that they’d have invested a lot in making them attractive if the high cost of Itanium had threatened new computer sales (where most of the Windows revenue came from).

    1. 3

      I think the fundamental business error (the technical errors are well covered) is that Intel thought it was wise to try to stratify ISAs across business categories. If they had been all in on IA64, a lot of technical decisions would have had to be made that would align the project more toward reality. Conversely, if they had been all in on x86, they might have been the ones to define x86-64. Chasing the great whale of mainframe-like systems allowed a lot of things to go unchecked.

      1. 2

        I don’t see how it killed off SPARC in the slightest. SPARC’s competitiveness peaked well after IPF was dead; M7 and M8 were the most-competitive SPARCs in decades (albeit very expensive.) SPARC was also worse than IPF during the period IPF was relevant - Sun made a series of baffling internal decisions (dragging their feet on an OoO core, then cancelling UltraSPARC V in favor of IP they got from Afara which was poorly suited to what their userbase were doing) that left them to depend on rebadging Fujitsu SPARC64 for most of the product line. IPF tended to basically match SPARC64 on commercial workloads and beat it, often by a lot, on technical workloads. Power was massively ahead of both (but expensive and very thirsty.)

        The argument that IPF “killed PA-RISC” in some kind of anti-competitive play is bizarre to me. The IPF ISA was basically complete at HP - as PA-WideWord - before Intel even signed on. Intel killed a much more normal-looking internal 64b RISC project (IAX) in favor of joining up with HP’s black magic. Intel’s contributions were primarily related to x86 compatibility and to firmware (most of the impetus for EFI came from Intel.) The Itanium 2 microarchitecture, which was what IPF used for its entire relevant life, came from HP Fort Collins, and pressure from HP was involved in killing off a competing IPF core project from the ex-Compaq Intel team in Hudson.

        This was HP’s circus and HP’s monkeys pretty much from day one.

        Also, Merced - the first generation - most assuredly had x86 hardware emulation onboard. It wasn’t very good. By Montecito, it was pulled in favor of IA-32 EL (which was considerably faster, and for “run the innermost loop a gajillion times” type workloads actually did rather well.)

      2. 3

        It felt like a stunning reversal of fortune at the time, for Intel to start manufacturing chips compatible with AMD’s architecture. I always wondered how the intellectual property situation worked there; I know that Intel was forced to license some of the earlier x86 chips to AMD, did they have to pay AMD for amd64?

        The Itanium has always occupied a special place in my heart, because knowing what it was helped me win a match for my high school academic team many years ago. I’ve never had the opportunity to actually touch or log into an Itanium system, though.

        1. 3

          did they have to pay AMD for amd64

          There was some licensing agreement. I don’t know if it involved payments or just mutual IP licensing, though — someone else may have better knowledge or memory than me in that area.

          I’ve never had the opportunity to actually touch or log into an Itanium system

          I had an account on the now-defunct Deathrow Cluster, a hobbyist-run heterogeneous cluster of OpenVMS systems. One node was Itanium, the other nodes were older Alpha machines. Sadly, that cluster had to shut down because those machines were incredibly power-hungry and it was difficult to cover the electricity cost.

          I hope that OpenVMS on x86-64 will make a Deathrow Cluster 2 possible. Alas, I missed my opportunity to mess with Itanium, as I was more interested in OpenVMS and didn’t care much what it was running on.

          Maybe someone will eventually add full support for IA64 to QEMU or Bochs. You can certainly still find an Itanium machine, but it would be a huge waste of power to run.

        2. 1

          Itanium was a disaster when it comes to memory consumption.

          When a workload used about 1GB of RAM on a PA-RISC system, you needed at least 2GB of RAM for the same workload and application on Itanium … and RAM on Itanium was expensive as fvck.

          With a production process (nm) that was always a generation behind, higher power consumption (TDP, mostly a consequence of that older process), and that increased RAM usage over PA-RISC, I really doubt that Itanium chips were EVER competitive with (let alone better than) the other CPUs available on the market at the time of each Itanium release. Think Alpha or SPARC or AMD64 architecture chips …

          I also really hated HP’s approach to HP-UX licensing. For example, if you buy RHEL from Red Hat, you get all the features of LVM, and when you buy High Availability (cluster) from Red Hat, you get all the features it supports. The only differences are the support plans, like 5/9 vs 24/7 for example.

          In the HP-UX world you needed to pay separately for things like … live disk/partition/filesystem resize/extend. The same went for a lot of other, IMHO quite standard, features.

          … not to mention how crippled and outdated the HP-UX tools are: you can print/show sizes in bytes or kilobytes. That’s it. Maybe also sometimes in blocks.

          1. 1

            You only need to pay separately if you’re buying the Base OE. The LVM features, which HP calls OnlineJFS, are included in all the other OEs out of the box.