  2. 4

    Really nice write-up. Although for a home backup server, my favorite low-cost fanless solution is still a $35 Raspberry Pi connected to an external disk or two. No, it isn’t fast. But it doesn’t have to be, since backups are ideally done in the middle of the night while I’m sleeping.

    The TCO calculations are interesting but I worry that they might be oversimplified. For starters, the assumption is that you’re storing 5 TB of data from beginning to end over a 5-year period, which almost certainly won’t be the case in the real world. If you start with 2 TB and grow to 5 TB, that drastically changes the numbers since with cloud storage, you generally only pay for what you use when you use it. But those two 5 TB disks cost money up front and wasted space is essentially wasted money, from a certain perspective.
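
    To make that concrete, here is a rough back-of-the-envelope sketch (all prices below are placeholders I made up, not numbers from the post), comparing a pay-as-you-grow cloud bill against buying the disks up front:

    # placeholder prices: $5/TB/month for cloud, $120 per 5 TB disk
    # storage assumed to grow linearly from 2 TB to 5 TB over 60 months
    awk 'BEGIN {
        months = 60; price = 5.0; cloud = 0
        for (m = 0; m < months; m++)
            cloud += (2 + 3 * m / (months - 1)) * price
        local = 2 * 120 + 120   # two disks up front plus one replacement
        printf "cloud: $%.0f   local disks: $%.0f\n", cloud, local
    }'

    Tweak the placeholder numbers and the gap moves around quite a bit, which is really the point.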

    The cloud-based storage gets you three things:

    • When a cloud storage physical disk fails and needs to be replaced, you don’t pay for it, you don’t replace it, you don’t even know about it.
    • If your storage exceeds 5 TB before the 5-year mark, now you need new disks, which costs time and money and gets you a little closer to the TCO of the cheapest cloud storage.
    • The cost of cloud services generally goes down over time as hardware gets cheaper and cloud providers get more competitive, although the downward trend has not been as aggressive recently.

    These are all great things for enterprises and perhaps even small businesses but I’m in general agreement that a cheap local box is still a better choice for the frugal and technically-minded. (Just don’t forget that you still need off-site backups one way or another!)

    1. 4

      Thank you.

      Initially I also used a Raspberry Pi 2B with a 2 TB 2.5″ USB drive, as I described in the earlier ‘episode’ - https://vermaden.wordpress.com/2018/08/28/silent-fanless-freebsd-server-diy-backup/ - but with a 3 TB drive the Raspberry Pi was not always able to power the drive on properly, and with a 4 TB USB 3.0 drive these problems got even worse - not to mention the 4.5 MB/s read/write on a GELI encrypted ZFS pool :)

      About the TCO calculation - yes and no - I generally agree, but I already have 3 TB of data and bought larger disks so I will not have to upgrade in the next 3-5 years. You can cut 1/3 from the 5 TB cloud costs and the conclusion will be the same - for home/personal storage it is cheaper to go the DIY way. I also added one drive failure/replacement cost to the calculations.

      I underline home/personal here because it was not my intention to compare against enterprise solutions with redundancy - just to show what you can achieve, and for how much, with two DIY boxes - one for in-house backup and one for off-site backup (from this and the earlier ‘episode’).
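
      For example, keeping the off-site box in sync with incremental zfs send/receive over ssh can look roughly like this (dataset, snapshot and host names below are just examples, not my actual setup):

      # snapshot on the in-house backup box, then send only the changes to the off-site box
      zfs snapshot -r backup/data@2019-04-20
      zfs send -R -i backup/data@2019-04-13 backup/data@2019-04-20 | \
          ssh offsite.example.com zfs receive -Fdu offsite/backup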

      If that is not paranoid enough - I also have one off-line backup that I carry with me and update from time to time :)

      Regards, vermaden

    2. 3

      I read this whole post scratching my head about how to fit two 3.5 inch 5TB disks in such a small enclosure. Only on the second read did I realise there are 5TB 2.5” “BarraCuda” disks now. Mind blown!

      How closely packed are the disks, and how hot do they run when they’re writing a lot of data? I just had to replace a fairly young backup drive in a dual fanless USB 3.0 enclosure and I think heat probably killed it. I want to move to something with a fan now (not constantly on, but something that can spin up when the drives are working hard.)

      1. 3

        Here is a picture of how the disks are mounted: https://vermaden.files.wordpress.com/2019/04/disks-mounted.jpg

        These are the temperatures while sending files from the backup box to the off-site one.

        Disks.

        # smartctl -a /dev/ada0 | grep -i temperature
        190 Airflow_Temperature_Cel 0x0022   062   038   040    Old_age   Always   In_the_past 38 (Min/Max 23/58 #45)
        194 Temperature_Celsius     0x0022   038   062   000    Old_age   Always       -       38 (0 18 0 0 0)
        
        # smartctl -a /dev/ada1 | grep -i temperature
        190 Airflow_Temperature_Cel 0x0022   062   039   040    Old_age   Always   In_the_past 38 (Min/Max 23/57 #2)
        194 Temperature_Celsius     0x0022   038   061   000    Old_age   Always       -       38 (0 20 0 0 0)
        

        System.

        # kldload coretemp
        # sysctl -a | grep -i temperature
        hw.acpi.thermal.tz0.temperature: 47.1C
        dev.cpu.1.temperature: 50.0C
        dev.cpu.0.temperature: 48.0C
        

        I started a zpool scrub to ‘kill’ the disks a bit more and their temperature went from 38 to 40.
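
        If you want to keep an eye on the temperatures for the whole scrub, a simple loop like this does the job (pool name is just an example):

        # poll disk temperatures once a minute while the scrub is running
        while zpool status mypool | grep -q 'scrub in progress'; do
            for DISK in ada0 ada1; do
                smartctl -a /dev/${DISK} | grep Temperature_Celsius
            done
            sleep 60
        done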

        Hope that helps :)

        1. 2

          Wow, thanks heaps for that.

          I realised that I’d assumed “temperature killed the disk” until now so I did a bit more reading as well… I’m probably way off!

          Google’s data suggests that for server drives, hotter is fine and cooler is significantly worse: https://research.google.com/archive/disk_failures.pdf

          It may be that because this drive bank was switched off a lot of the time, it was thermal cycling (or something unrelated) that killed the disk. It got very hot towards the end, but that was probably a symptom, not a cause. Perhaps keeping drives consistently warm in a small, tight, always-on enclosure is much better!

          1. 3

            Welcome :)

            Thanks for the Google document - they also state that other studies showed similar results - that higher (but reasonable) temperatures do not automatically mean more drive failures - good to hear.

            Also a good analysis from the Backblaze team: https://www.backblaze.com/blog/hard-drive-temperature-does-it-matter/

            If one of the drives fails on me then I will update the blog entry - and maybe invest $3 in a 6x10 fan :)

      2. 2

        Hoping this doesn’t spark a huge debate, but any concerns about using non-ECC ram for your ZFS setup? Is it less of a concern considering it’s just a mirrored-drive configuration?

        I did check and read your earlier post https://vermaden.wordpress.com/2018/06/07/silent-fanless-freebsd-desktop-server/ where you discuss this.

        1. 3

          I read extensively about the topic when I implemented a similar ZFS mirror for home use, and my conclusion then was basically that almost everyone recommending against ZFS on non-ECC memory used rhetoric that sounded a lot like FUD. The few people that were able to explain the issue in technical detail seemed to be of the opinion that “yes, non-ECC memory is bad. But it’s bad regardless of what file system you use, and even worse with non-ZFS filesystems.”

          1. 2

            any concerns about using non-ECC ram for your ZFS setup

            This is a perpetuated myth that somehow refuses to die. From one of the authors of ZFS:

            There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

            Source: https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=26303271#p26303271
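
            For what it’s worth, a sketch of how that flag can be enabled - on Linux zfs_flags is an OpenZFS module parameter, and on FreeBSD the same knob should show up somewhere in the vfs.zfs sysctl tree (I have not measured its overhead):

            # ZFS_DEBUG_MODIFY is bit 4, i.e. 16 == 0x10
            echo 16 > /sys/module/zfs/parameters/zfs_flags        # Linux, at runtime
            # or persistently: "options zfs zfs_flags=0x10" in /etc/modprobe.d/zfs.conf
            sysctl vfs.zfs | grep -i flags                        # FreeBSD: find the matching sysctl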

            1. 2

              ECC RAM helps ZFS the same way it helps any other filesystem. I am not sure how the urban legend arose that you can get away with UFS/EXT4/XFS/… on non-ECC RAM, while using ZFS on non-ECC RAM will mean trouble.

              It’s the same for all filesystems: unreliable RAM means trouble. If you want to be sure and can afford a motherboard/platform/RAM that supports ECC, then do it without any doubt, but after using ZFS for years without ECC RAM (because of costs) I have not experienced any problems. Also, when you have a ZFS mirror or RAIDZ, such problems will be detected.
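
              For example, a scrub on the mirror reads every block and verifies it against its checksum (pool name is just an example):

              zpool scrub mypool       # read and verify every block in the pool
              zpool status -v mypool   # the CKSUM column lists checksum errors per device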

              A lot of people write that the price difference between ECC and non-ECC memory is so small that there is no point in doing non-ECC setups … but that is not the whole truth.

              To support ECC memory on an Intel CPU you need a Xeon or a ‘server oriented’ Atom CPU - none of the Celeron/Pentium/Core lines support ECC RAM, only Xeon does. Besides a 3-5 times more expensive CPU, you end up with a 2-3 times more expensive motherboard, while the increased cost of the ECC RAM sticks themselves is minimal.

              It’s better in the AMD world, where ALL AMD CPUs support ECC memory - but you still need a motherboard that supports ECC, so there are still increased costs beyond just the ECC RAM sticks.

              Even from my blog post - https://vermaden.wordpress.com/2018/06/07/silent-fanless-freebsd-desktop-server/ - the difference in motherboard cost to support ECC RAM is 6 times! I was not able to find a cheaper Mini ITX motherboard with ECC RAM support that also has a very low - less than 15 W - TDP.

               $49  non-ECC  ASRock J3355B-ITX 
              $290  ECC      ASRock C2550D4I
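
              As a side note, if you are not sure whether ECC actually ended up active on a given box, dmidecode (sysutils/dmidecode from ports) will tell you - for example:

              # look for "Error Correction Type" under the Physical Memory Array section
              dmidecode -t memory | grep -i 'error correction'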
              
              1. 2

                Not only is the RAM more expensive, the QVL (qualified vendor list) for all the motherboards I looked at listed only a single ECC SKU on their support charts (alongside ~100+ non-ECC SKUs).

                Good luck finding the right one in stock nearby - and finding an alternative is hard, as ECC has several variants and “ECC support” on a motherboard means it supports at least one of them (but they don’t tell you which).