1. 12

  2. 8

    That article was terrible: full of annoying author opinions and basically no content other than “Intel is no longer shipping the Itanium processor.”

    1. 7

      It is the sendoff Itanium deserves.

      Raymond Chen’s articles are perhaps more technically interesting.

      1. 3

        Itanium spawns a lot of revisionist history because people falsely blame it for killing the things they loved (RISC workstations) that economics was coming for anyway. Why do you think all those vendors wanted Itanium in the first place?

      2. 6

        My favorite part of it was its security technology (pdf), which Secure64 built a DNS product on. That PDF is an advertisement, but it describes the mechanisms. With all the papers I’ve submitted, I imagine we could’ve seen a lot of interesting stuff if some of those designs had built on Itanium’s hardware support, especially to hardware-accelerate or further isolate specific parts of them. To get readers thinking, an example would be how Code-Pointer Integrity cleverly used segments to do an important part of its design at high speed and with strong isolation.
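
        As a rough illustration of that segment trick, here’s a minimal sketch (not CPI’s actual implementation; safe_store/safe_load, the offsets, and target are made-up names) of keeping sensitive code pointers in a region reachable only through a segment register, so code corrupted through ordinary data pointers has no usable address for it:

        ```c
        /* Sketch of a CPI-style "safe region" hidden behind %gs (Linux x86-64,
         * GCC/Clang).  Helper names and offsets are hypothetical; real CPI
         * emits these accesses from the compiler. */
        #define _GNU_SOURCE
        #include <asm/prctl.h>      /* ARCH_SET_GS */
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        /* Only these helpers can reach the region, and only via %gs; its
         * absolute address is never kept in ordinary memory. */
        static inline void safe_store(uint64_t off, void *p)
        {
            __asm__ volatile ("movq %0, %%gs:(%1)" : : "r"(p), "r"(off) : "memory");
        }

        static inline void *safe_load(uint64_t off)
        {
            void *p;
            __asm__ volatile ("movq %%gs:(%1), %0" : "=r"(p) : "r"(off));
            return p;
        }

        static void target(void) { puts("called through the safe region"); }

        int main(void)
        {
            void *region = calloc(4096, 1);                      /* the safe region    */
            if (syscall(SYS_arch_prctl, ARCH_SET_GS, region))    /* hide it behind %gs */
                return 1;

            safe_store(0, (void *)target);        /* protect a code pointer     */
            ((void (*)(void))safe_load(0))();     /* fetch it back and call it  */
            return 0;
        }
        ```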

        Another benefit of RISCy architectures is that they’re not inherently tied to how C’s abstract machine works. In theory, you have enough registers to keep a large pile of internal state for alternative designs. Most x86 alternatives weren’t as fast and cheap as x86, though, whereas Itanium was at least really fast and could address a lot more memory. There was a lot of potential for alternative paradigms to take off by targeting it, so long as enterprises could buy them (especially as appliances). Secure64 made a good attempt at exactly that, which is why I mention them in Itanium discussions.

        One immediate use it had was in NUMA machines: SGI’s Altix used Itaniums, and the Prism variant was the system gamers dreamed about having (the Onyx2 on MIPS before that). You could scale up critical workloads without the programming headaches of a cluster; it was just multithreading with some consideration applied to data locality.
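
        To make “multithreading with data locality” concrete, here’s a minimal sketch using Linux libnuma rather than anything Altix-specific (link with -lnuma and -lpthread; worker() and the buffer size are arbitrary): one thread per NUMA node, each touching only memory allocated on its own node.

        ```c
        /* One worker per NUMA node, each bound to its node and working on
         * node-local memory.  Illustrative only; error handling is minimal. */
        #include <numa.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define WORKING_SET (64UL * 1024 * 1024)   /* bytes per node (arbitrary) */

        static void *worker(void *arg)
        {
            int node = (int)(long)arg;

            numa_run_on_node(node);                             /* keep this thread on its node */
            double *buf = numa_alloc_onnode(WORKING_SET, node); /* memory on the same node      */
            if (!buf)
                return NULL;

            for (size_t i = 0; i < WORKING_SET / sizeof(double); i++)
                buf[i] = (double)i;                             /* touch local memory only      */

            numa_free(buf, WORKING_SET);
            return NULL;
        }

        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this system\n");
                return 1;
            }

            int nodes = numa_max_node() + 1;
            pthread_t threads[nodes];

            for (int n = 0; n < nodes; n++)
                pthread_create(&threads[n], NULL, worker, (void *)(long)n);
            for (int n = 0; n < nodes; n++)
                pthread_join(threads[n], NULL);
            return 0;
        }
        ```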

        One more benefit was reliability technology. Quite a few failures in mission-critical systems come from hardware issues developers can’t see. Even in clusters, making the individual chips more reliable reduces the odds of a failure in one of them taking down the whole thing, especially when they all use the same CPU. Itanium had improvements in this area; I think NUMA and mainframe vendors (e.g., ClearPath) took advantage of them, and eventually Xeons got most or all of those features. For a while, Itanium-based designs might have crashed less.

        Unfortunately, most potential adopters just asked, “Can I compile my existing code on my existing system to get it to go much faster with 64-bit support?” The answer was no. Then AMD won by focusing on accelerating legacy code on an ISA that compiler writers had already optimized for.

        This was Intel’s third attempt, after the i432 and i960 (awesome), to get people off the legacy architecture. They lost billions of dollars on those ambitious attempts. Today, people gripe that Intel’s architecture has all kinds of security issues and that they should’ve rewritten or patched it. If anything, Intel took more risks than anyone, with losses so high I’d be surprised if they take any more.

        1. 2

          Unfortunately, most potential adopters just asked, “Can I compile my existing code on my existing system to get it to go much faster with 64-bit support?” The answer was no. Then AMD won by focusing on accelerating legacy code on an ISA that compiler writers had already optimized for.

          Similar story with HTM (hardware transactional memory).
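
          For anyone who hasn’t run into HTM: a minimal sketch of the idea using Intel’s TSX/RTM intrinsics (compile with -mrtm on a CPU that still has TSX enabled; the counter and fallback lock are just placeholders). Try the critical section as a hardware transaction and take a lock only if it aborts.

          ```c
          /* Hardware transactional memory sketch with Intel RTM intrinsics.
           * Illustrative only: a production lock-elision scheme would also read
           * the fallback lock inside the transaction and abort if it is held. */
          #include <immintrin.h>
          #include <pthread.h>

          static pthread_mutex_t fallback = PTHREAD_MUTEX_INITIALIZER;
          static long counter;

          void increment(void)
          {
              unsigned status = _xbegin();
              if (status == _XBEGIN_STARTED) {      /* running transactionally      */
                  counter++;
                  _xend();                          /* commit                       */
              } else {                              /* aborted: fall back to a lock */
                  pthread_mutex_lock(&fallback);
                  counter++;
                  pthread_mutex_unlock(&fallback);
              }
          }
          ```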

          1. 1

            Azul was great, too! Seemed like they were beating the averages. Then, IIRC, they ditched the hardware product. I might keep an eye on them.