  2. 5

    He’s running forty Blades in 2U. That’s:

    • 160 ARM cores
    • 320 GB of RAM
    • (up to) 320 terabytes of flash storage

    …in 2U of rackspace.
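
    The totals are consistent with a plausible per-blade breakdown. As a quick sanity check (the per-blade figures of 4 cores, 8 GB of RAM, and one 8 TB drive are inferred from the totals, not taken from a spec sheet):

    ```python
    # Back-of-envelope check of the quoted aggregate specs, assuming
    # each of the 40 blades carries a 4-core ARM module (e.g. a
    # Raspberry Pi CM4), 8 GB of RAM, and one 8 TB NVMe drive.
    # These per-blade numbers are inferred, not quoted.
    blades = 40
    cores_per_blade = 4
    ram_gb_per_blade = 8     # CM4 tops out at 8 GB
    flash_tb_per_blade = 8   # one 8 TB M.2 drive per blade

    print(blades * cores_per_blade)     # ARM cores -> 160
    print(blades * ram_gb_per_blade)    # GB of RAM -> 320
    print(blades * flash_tb_per_blade)  # TB of flash -> 320
    ```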

    OK. I can order a Supermicro 2U box with 168 AMD Zen 4 cores, 384 GB of RAM, 320 TB of NVMe, and a 100 Gb/s Ethernet interface. Both of these configurations are equally out of reach for a normal “homelab” user, because the cost of the disks alone is $36,000 and up.
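
    The disk figure is easy to sanity-check. At a hypothetical street price of roughly $115 per terabyte for large NVMe drives (my assumption, not a quote from the comment), 320 TB lands right in that range:

    ```python
    # Rough cost of the flash alone. The ~$115/TB figure is an
    # illustrative assumption for large NVMe drives, not a real quote.
    flash_tb = 320
    usd_per_tb = 115

    print(flash_tb * usd_per_tb)  # -> 36800, i.e. ~$36k "and up"
    ```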

    The problems with fleets of tiny machines include:

    • cabling
    • overhead from running N system images
    • configuration for N machines
    • cooling
    • power supplies
    • networking overhead
    • assembly and maintenance

    Somewhere around the fourth Raspberry Pi in the cluster, you should realize that one DIY desktop-class machine of about the same cost would be twice as fast, subdivisible rather than aggregatable, and altogether less fuss and bother. The fact that you can do it with lots of semi-unobtainable tiny cheap machines does not mean it makes sense to do so.

    Of course, if the fuss and bother are what you are looking for, this sort of thing is perfect.

    1. 4

      You are correct! But if you’re doing things like testing on ARM cores anyway, and if you are already configuring N machines where N is pretty large, then it’s not the most unreasonable thing ever. The blade form factor is explicitly there to reduce some of the overhead. Cabling (partially; I like the PoE approach but it’d be nice if the chassis had a switch built in), cooling, and assembly are the main ones.

      Definitely niche, and they don’t pretend otherwise. But it’s a pretty cool niche.

    2. 5

      God I would really love to have a few compute blades for this, but I would have absolutely no use for them lol