1. 16
  1.  

  2. 2

    BTW, if anyone’s interested in IBM mainframes, I’d strongly recommend looking at IBM i (AS/400) instead. It has the mainframe aesthetics, but a far more interesting (@nickpsecurity has written about its security architecture a lot) and usable (have you seen ISPF or JCL?) OS, and the hardware is far easier to find (as far as big iron goes, anyway) and manage (usually 4U rackmounts or large towers, not a full rack requiring 400V power).

    1. 1

      @ all

      Yes, that system was way ahead of its time: a capability architecture, an integrated data store that managed itself to some degree (I don’t know how much), and OS software compiled to a 64-bit bytecode of sorts that made it future-proof across hardware changes. It was also ultra-reliable, often described as a “tank,” and it didn’t cost millions of dollars like mainframes do. Microsoft got busted for running their entire business on one while telling everyone else to use Windows Server; it took 20+ Windows Servers to replace it after the bad publicity. The security architecture is here under System/38 for anyone interested. The line has been steadily updated, with the current one using virtualization to run those apps side by side with Linux or whatever else it supports.
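
      Roughly, the bytecode trick looks like this (a toy sketch of the concept only; the IR and names below are invented and nothing like the real TIMI internals):

      ```python
      # Toy illustration of the "OS compiled to bytecode" idea: programs are
      # stored only as hardware-independent intermediate code, and the system
      # retranslates them for whatever CPU is underneath. Purely illustrative.

      # A "program" is stored as portable intermediate code, never as native code.
      INTERMEDIATE_PROGRAM = [("load", 2), ("load", 3), ("add",), ("print",)]

      def translate(ir, isa):
          """Pretend ahead-of-time translation of IR to native code for one ISA.
          Native code is just a closure here; on the real system it would be
          machine code cached alongside the program object."""
          def native():
              stack = []
              for op, *args in ir:
                  if op == "load":
                      stack.append(args[0])
                  elif op == "add":
                      stack.append(stack.pop() + stack.pop())
                  elif op == "print":
                      print(f"[{isa}] result = {stack.pop()}")
          return native

      # Day 1: translate once for the original hardware and keep the IR around.
      translate(INTERMEDIATE_PROGRAM, "CISC")()

      # Years later, new hardware: no recompile from source; the OS just
      # retranslates the stored IR for the new ISA.
      translate(INTERMEDIATE_PROGRAM, "POWER")()
      ```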

      @ calvin and other i administrators

      I’m still collecting info from people about day-to-day administration. One thing some tell me is that these systems are, or can be, set up to detect problems (especially hardware) on their own, with IBM technicians just showing up to replace the part before it breaks applications. Do you have any data on that, or on other things that reduce administrative burden?
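
      I can’t vouch for the mechanism, but the shape of what people describe would be something like the sketch below. Everything in it is hypothetical: the names, the thresholds, and the random numbers standing in for real error counters.

      ```python
      # Hypothetical sketch of "call home before it breaks": watch correctable
      # error counts and open a service ticket while the part still works.
      import random

      DEGRADING_THRESHOLD = 5   # invented: correctable errors before reporting

      def read_correctable_errors(component):
          # Stand-in for reading hardware error counters (ECC corrections,
          # retried disk reads, etc.) from a service processor.
          return random.randint(0, 8)

      def call_home(component, count):
          # Stand-in for opening a service ticket with the vendor.
          print(f"ticket: {component} logged {count} correctable errors; "
                f"dispatch technician with replacement part")

      def sweep(components):
          for c in components:
              n = read_correctable_errors(c)
              if n >= DEGRADING_THRESHOLD:
                  # The part still works and applications never notice,
                  # but it's trending toward failure, so report it now.
                  call_home(c, n)

      sweep(["DIMM-07", "disk-03", "PSU-1"])
      ```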

      I’ve also speculated that they might be so predictable and easy to administer that one could find lots of time to look up great submissions for their favorite social media sites while others were firefighting their Windows and *nix operations. I don’t expect an answer on that, since it could be a threat to i admins’ job security, or at least to their potentially relaxed workflow. Any of you can feel free to private-message me about that.

    2. 1

      Does anyone know why exactly mainframes need to be so physically large? Or what kind of commodity PC hardware one mainframe is equivalent to (in terms of compute power, memory, non-volatile storage, or some other metric)? From the picture in the article, the mainframe looks roughly as big as two 40U server racks stuck together, and indeed the inside appears to have rails that something like rack-mounted equipment slots into. Is a mainframe roughly equivalent to two 40U racks’ worth of servers?

      1. 3

        There’s some information / speculation in the corresponding HN thread - https://news.ycombinator.com/item?id=20466488

        1. 2

          All the resilient, vertically-scaled machines are large to allow for all the CPUs, I/O, and storage you might put in them. At the same time, the components are literally crammed into small spaces so their ultra-high-speed, low-latency buses can work right. At least, that’s how it used to be. The first one I saw was an SGI Origin, which looked awesome in some models. They’re not mainframes, but similar reasons drove that design.
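
          For a back-of-the-envelope sense of why the distances matter: signals in a PCB trace travel at very roughly half the speed of light (my assumption here), so at multi-GHz clock rates a signal only covers a few centimeters per cycle.

          ```python
          # Rough arithmetic on why low-latency buses force components close
          # together. Assumes ~0.5c propagation speed in a trace, which is a
          # ballpark figure, not a measured one.
          C = 3e8                   # speed of light, m/s
          SIGNAL_SPEED = 0.5 * C    # assumed propagation speed in copper

          for clock_ghz in (1, 3, 5):
              cm_per_cycle = SIGNAL_SPEED / (clock_ghz * 1e9) * 100
              print(f"{clock_ghz} GHz: ~{cm_per_cycle:.0f} cm per clock cycle")
          ```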

          The mainframes have a lot of components, many of them redundant; the CPUs are optimized for throughput rather than energy efficiency; the system itself is more optimized for I/O; and channel I/O helps it stay far closer to 100% utilization than most PCs/servers. These things are workhorses that have to process insane numbers of transactions or business operations with near-zero downtime. Redundancy, error checking, and monitoring are built into just about everything. There could also be legacy reasons for the size, like staying compatible with some part(s). I just know that balancing I/O throughput, latency, and redundancy by itself adds components that have to sit physically close to each other.
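
          The channel I/O part is worth a sketch: the idea is that the central processors hand I/O work to dedicated processors and go back to computing instead of babysitting device transfers. Below is a loose software analogy, with queues and threads standing in for real channel hardware (a simplification, not how actual channel programs work):

          ```python
          # Loose analogy for channel I/O: a "CPU" queues I/O requests and keeps
          # working while dedicated "channel" workers complete the transfers.
          import queue
          import threading

          io_requests = queue.Queue()

          def channel_processor(name):
              # Stand-in for a dedicated I/O processor draining its work queue.
              while True:
                  req = io_requests.get()
                  if req is None:
                      break          # shutdown sentinel
                  # ... a real channel would run the device transfer here ...
                  print(f"{name} completed: {req}")
                  io_requests.task_done()

          channels = [threading.Thread(target=channel_processor, args=(f"chan{i}",))
                      for i in range(2)]
          for t in channels:
              t.start()

          # The "CPU" queues work and immediately goes back to transactions.
          for req in ("read disk block 42", "write journal page 7"):
              io_requests.put(req)

          io_requests.join()             # wait for completions
          for _ in channels:
              io_requests.put(None)      # stop the channel workers
          for t in channels:
              t.join()
          ```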

        2. 1

          That’s truly admirable dedication to a favorite platform.

          All while I’m trying to justify buying a Raspberry Pi to myself, and I always end up thinking I don’t need it because I’m not going to use it much.