1. 17

  2. 5

    While I absolutely agree that too many people jump to ‘buy a used 2-socket Xeon server to host my blog with a dozen views per month’, there’s a middle ground between an R710 and ARM SBCs. For example, I run Funkwhale on an Intel Celeron NUC. According to AnandTech, power consumption ranged between ~9 W (idle) and 25 W (maxed out). Assuming that the machine runs at full load 50% of the time, this would consume around 150 kWh per year, costing me around $25/year in energy. And it’s silent!

    Being able to plug machines in and have it autoscale seems nifty though. Also I don’t know much about ARM SOC efficiency, maybe they’re more efficient per watt than x86 for these loads.
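    For the curious, the energy math above in a few lines (the $0.17/kWh rate is my assumption; plug in your own tariff):

    ```python
    # Back-of-the-envelope cost for the NUC numbers above:
    # ~9 W idle, ~25 W at full load, assumed 50/50 duty cycle.
    idle_w, max_w = 9, 25
    avg_w = 0.5 * idle_w + 0.5 * max_w          # 17 W average draw
    kwh_per_year = avg_w * 24 * 365 / 1000      # ~149 kWh/year
    cost = kwh_per_year * 0.17                  # assumed $0.17/kWh
    print(round(kwh_per_year), round(cost))     # → 149 25
    ```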

    1. 3

      Agreed, instead of going for a fully populated server and letting 72 GB of RAM burn electricity, most self-hosting can be done with 16 GB of RAM and consumer-level hardware. x86 is still way more powerful than ARM in terms of instruction efficiency and I/O.

      1. 1

        X86 is still way more powerful compared to arm, in terms of instruction efficiency and io

        I/O is not ISA dependent, and x86 (amd64 really) is not that efficient, in fact it wastes power on the rather complex variable-length instruction decoding.

        Cheap amd64 stuff is more powerful than cheap aarch64 stuff, sure. And amd64 is ahead in SIMD deployment — you can get your hands on AVX-512 supporting processors now quite easily, while SVE right now is only implemented in the Fujitsu supercomputer you don’t have access to.

        But general performance is all about the microarchitecture. Cortex-A72 performs pretty well for a relatively low power core designed in 2014. Newer stuff like Apple Vortex, Nvidia Carmel, Broadcom→Cavium Vulcan (ThunderX2) is very impressive.

        1. 1

          That’s probably true, and you also stated that powerful x86 processors are more available (in terms of cost and quantity) in general. The I/O issue I’m complaining about is similar: you can find tons of cheap motherboards with PCIe, SATA, and USB 3, but that’s still not available on consumer-level ARM boards.

      2. 2

        Good point. Regular x86 servers will undoubtedly be faster and can be relatively cheap too (J3455 vs S922X). I think the real win for ARM is that it’s cheaper to get started and scales pretty well. The barebones kit of the NUC you linked is $130, with real working builds at $250 and higher. The S922X uses less power, is > 75% as fast (according to that random benchmark at least), and costs $90 (~$120 for a fully working system).

        Being able to plug machines in and have it autoscale seems nifty though.

        To be fair, you can totally do this with x86 as well, but people don’t generally buy that many servers for their house :P

      3. 3

        I am currently building an ARM cluster of my own, though for slightly different purposes than self-hosting. If you want more hardware to expand your research with, I would suggest you take a look at http://wordpress.supersafesafe.com/clusterboard (that URL though..). It is a bit of a gamble, though a cheap one, as the packaging and shipping was some of the worst I have ever seen.

        The hope was to get the nodes working from ramdisk only and netbooting Alpine Linux, but I’m struggling with U-Boot for the time being.
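        For context, the flow I’m aiming at from the U-Boot prompt looks roughly like this (server IP, image names, and load addresses are placeholders for whatever your board/setup uses, not a tested recipe):

        ```
        => setenv serverip 10.0.0.1          # TFTP server (placeholder)
        => dhcp                              # get an address for the node
        => tftpboot ${kernel_addr_r} Image
        => tftpboot ${fdt_addr_r} board.dtb
        => tftpboot ${ramdisk_addr_r} alpine-initramfs
        => booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}
        ```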

        1. 1

          What is your cluster for? I am toying with building a cluster myself with ZFS, but I have not found a good solution for attaching a bunch of hard drives to the SBCs. Mostly they have USB ports for attaching drives, but I have neither the money nor the time to test this approach.

          1. 2

            A few projects, mainly fuzzing for vulnerabilities. The perhaps more interesting experiment is “volatile single-use, throwaway” thin clients, be it Android apps or mapping browser “tabs” on my desktop to short-lived “one device, one page, kill and reboot on close” Chrome instances.

        2. 2

          I’m assuming he means AMD64, not the old Titanium x64 ;)

          Although OP is no doubt happy with their Rube Goldberg setup, all of the arguments against an AMD64 box fall apart in light of modern Intel-based SBCs like the LattePanda. As /u/whjms pointed out, NUC power consumption can be pretty low too. My i5 NUC is whisper quiet and low power.

          The key problem with ARM SBCs isn’t CPU grunt, but RAM. That’s getting better with things like the Jetson TX2, but they’re crazy expensive for normal use.

          Don’t get me wrong, it’s a good post on how to set up a decent ARM cluster though.

          1. 3

            When you start talking about price vs performance and power utilization for a single box, you’re going to do a lot better with something like a NUC, or an el-cheapo ATX motherboard and case, some Chinese x86-64 box, or even a 5-year-old laptop with its lid closed.

            But when you specifically want a cluster for whatever those reasons might be (redundancy, scalability, science), ARM starts to make more sense since the SBCs for ARM are small, cheap, and power-efficient. Between 2-8 GB of RAM and 4 CPU cores per cluster node is pretty usable for a lot of applications.

            There are small-ish x86 SBCs but they usually don’t compete with ARM in terms of price and power efficiency. The closest one I’ve seen so far is the recently-released Atomic Pi for $35. The price point is right, but it takes about twice as much power as a similar ARM SBC, from what I can tell. (There is also some speculation that all of these Atomic Pi units came from a manufacturing run ordered by a major car company which then cancelled the product. Which means that once these were bought at auction for a song and once sold out, there probably won’t be any more.)

            1. 2

              I assume you mean Itanium not Titanium.

              ARM SBCs still use many times less power, cost many times less than a NUC, take up less physical space, and generate less heat. It just depends on what your use case requires. Intel NUCs are an absolutely wonderful option as well, but so is an SBC cluster, depending on what you’re doing with it.

              1. 1

                I assume you mean Itanium not Titanium.

                Yup, thanks for the correction or I’d spend the rest of my life calling it Titanium. I haven’t seen one in a very long time…

                1. 4

                  We used to call it the Itanic.

              2. 1

                The key problem with ARM SBCs isn’t CPU grunt, but RAM

                ROCK64 and ROCKPro64 come with up to 4GB of DDR4. I don’t think they’re dual channel though :(

                1. 1

                  The board in the original post (Odroid N2) also has 4GB DDR4. Some SBCs even have regular dual-channel DDR4 (laptop) RAM slots (Odroid H2).

              3. 1

                Anyone know if the Pi Zero Cluster boards were ever made available to the larger world?

                https://www.element14.com/community/message/171959/l/pi-zero-cluster#171959

                Seems like a lot of talk and promise, and then nothing.

                1. 1

                  I’ve had success with this.

                  I use it to calculate Pi on Pi-day, and do some messing around with MPI.
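                  The Pi run is basically the classic MPI “cpi” demo; here’s a rough serial sketch of what each rank does (names are my own, and plain `sum()` stands in for the MPI reduce):

                  ```python
                  # Classic MPI pi demo, sketched serially: each "rank" integrates
                  # 4/(1+x^2) over a strided share of n midpoint intervals, and the
                  # partial sums are reduced (sum() stands in for MPI_Reduce).
                  def partial_pi(rank, size, n=100_000):
                      h = 1.0 / n
                      total = 0.0
                      for i in range(rank, n, size):
                          x = h * (i + 0.5)
                          total += 4.0 / (1.0 + x * x)
                      return total * h

                  size = 4  # pretend four cluster nodes
                  pi_est = sum(partial_pi(r, size) for r in range(size))
                  ```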

                  1. 1

                    What you linked to looks like an unofficial project to me. The guy who made it runs a company that does stuff with SBCs; it might even just be an internal thing that they use for whatever reason.

                    1. 1

                      Yeah, that’s what I was afraid it was.