1. 19

I want to set up a home lab at my apartment to have some fun. It will mostly act as a media/gaming server and a playground for me to experiment with some shiny new technology. I am not expecting its duty to be mission-critical.

Do you think it would be a good idea to buy a used server such as this renewed HP DL360 G7 and just lay it on the ground? The hardware spec looks quite decent at this price, and I can imagine such servers must be heavy, noisy, and power-hungry, but I’m fine with these shortcomings. Are there any catches I’m missing?

  2. 33

    If you’re going to run a server at home, I’d strongly recommend getting a physically large one (4U or 5U). The smaller the chassis, the smaller the fans it can accommodate, and smaller fans have to spin a lot faster to move enough air to cool the system. A 4U or 5U machine can be perfectly reasonable noise-wise; a 2U might be bearable if you’ve got a high tolerance for noise and don’t run it hard, but you almost certainly don’t want the fan noise of a 1U system in your home. (I do work-from-home server firmware development, including fan-control code, and currently have four 1U systems in my home office; they’re loud little buggers.)

    Another thing that might be of interest is convertible rack/tower systems – at least Dell and Supermicro (probably others as well, but those are the two I’ve dealt with before) make systems that can be converted between rack and tower configurations by swapping a small amount of mounting hardware. If you’re not going to keep it in an actual rack, having a rack-mount chassis doesn’t serve much purpose, and it’d probably be easier to deal with (reduced footprint, if nothing else) in a vertical (tower) orientation.

    1. 8

      I second this. 1U servers can be loud enough to hear two floors away, an intense high-pitched whine. Avoid.

      1. 1

        I’d say go for 4U and above if you plan to use GPUs or other full-height PCIe devices. Otherwise 2–3U is a decent compromise.

        And, yeah, 1U is loud as hell. https://b1nary.tk/b/server-fans.mp4

      2. 16

        I think you’ve listed all of the caveats. You might not be aware of the degree though:

        • Rack-mounted servers are heavier than they look. I’m always surprised when I pick one up.
        • They are super noisy. Typically the fans in desktops are connected to temperature sensors and slow down when not in use. Rack-mounted systems usually run at full speed all of the time because replacing a machine in a server room due to thermal failure is more expensive than the power draw for the fans (and because consistent airflow in a server room is often important to cooling everything else). Their cooling is also often designed for an air-conditioned room, so it may not work well if the intake air temperature is too high.
        • Rack-mounted servers are increasingly designed to run at full power all of the time (as one Google engineer put it: if you have idle servers, you’ve bought too many servers. Power management should be a thing datacenters do, not a thing nodes do), so their power consumption may be higher than an equivalent desktop-case system.

        Of course, this can all be offset by the fact that rack-mounted hardware is intrinsically fun.

        The server that you link to looks pretty reasonably priced. If you want to play with multi-socket NUMA things, it’s probably a good system. Multi-socket has a lot of problems that single-socket doesn’t. That said, it’s pretty slow: a 2010 CPU at a fairly low clock frequency.

        The other caveat if you want to play with new technology is that server CPUs typically get ISA extensions last. This is especially important when you’re looking at something old. For example, the CPU in that box has only SSE 4.2, no AVX. Definitely no SGX, no MPK, and so on.
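
        If you want to check what a given machine actually reports before committing to it, the kernel exposes the CPU flags in /proc/cpuinfo; here’s a minimal sketch (the flag list is just an example, and note that MPK shows up as “pku”):

        #!/usr/bin/env python3
        # Rough check of which ISA extensions a Linux box advertises.
        # Flag names are the kernel's; "pku" is what MPK appears as.
        INTERESTING = ["sse4_2", "avx", "avx2", "avx512f", "sgx", "pku", "mpx"]

        flags = set()
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
                    break

        for flag in INTERESTING:
            print(f"{flag:10s} {'yes' if flag in flags else 'no'}")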

        A low-power core like the J5040 comes with SGX and MPX (which is an incredibly stupid design and is gone in newer cores, but is fun to play with) and can support 64 GiB of RAM if you buy the right modules (this is what I bought when I upgraded my NAS). It’s slower, but it’s modern hardware and might be more fun for playing with new technology. I’m not sure it would be fast enough to run a gaming server, though.

        Oh, and server systems typically don’t come with GPUs, and even a slow GPU with a modern feature set is fun to play with.

        1. 2

          Typically the fans in desktops are connected to temperature sensors and slow down when not in use. Rack-mounted systems usually run at full speed all of the time because replacing a machine in a server room due to thermal failure is more expensive than the power draw for the fans (and because consistent airflow in a server room is often important to cooling everything else).

          I have never encountered a server for which that was the case – every one I’ve dealt with has had temperature-based fan control in some form or another (once the BMC’s booted enough to provide that service anyway, though they’ll typically be at full speed when you first power on the machine). They often have BMC configuration options available to run the fans at 100% full time if you decide that’s what you want, but I’ve never seen one that didn’t have variable fan speed available (and enabled as the factory default). Also, that power consumption is decidedly non-trivial – maxed out fans in a 1U system can easily draw well over 100W; most data center operators probably don’t want another whole heavily-loaded CPU’s worth of power getting sucked down by every server if it’s not actually needed.
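
          If you’re curious what a given box is actually doing, the fan tach readings are easy to pull from the BMC over IPMI; here’s a rough sketch using ipmitool (assuming it’s installed and the in-band IPMI drivers are loaded – for a remote BMC you’d add the usual -I lanplus / -H / -U / -P options):

          #!/usr/bin/env python3
          # Print fan sensor readings from the local BMC via ipmitool.
          # Assumes ipmitool is installed and an in-band IPMI device is present.
          import subprocess

          out = subprocess.run(
              ["ipmitool", "sdr", "type", "Fan"],
              capture_output=True, text=True, check=True,
          ).stdout

          for line in out.splitlines():
              # Lines look roughly like: "FAN1 | 30h | ok | 29.1 | 5400 RPM"
              fields = [f.strip() for f in line.split("|")]
              if len(fields) >= 5:
                  print(f"{fields[0]:15s} {fields[4]}")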

          1. 4

            Prior to around 2010-2012ish, all Dell’s rackmount servers ran the fans at full speed permanently. I distinctly remember R410 series being unusually not-horrible because they were the first ones that were quiet when I had them on my desk setting the OS up prior to taking them to the colo.

            1. 2

              It depends on the scale of things, I guess? I do know of some big operators that configure pretty much everything that has fans in it to run them at full speed, all the time, too. They figured, as david_chisnall mentioned, that the extra airflow headaches, motor wear and tear (though I’m not fully convinced about this one), the additional response time (it takes a little while between when the fans kick in and when the temperature re-stabilizes) and so on are really not worth the money you save on 0.1 kW on a unit that easily draws 2.5-4 kW. Maybe, for reasons of economy of scale, they can get servers without the sensors, too.

          2. 11

            Don’t bother. They’re expensive, loud, awkward, heavy, and inconvenient. They’re designed for a very specific single-purpose environment and your home just isn’t it. If you want to throw something in a rack, buy a 4U server case and put a regular desktop system in it, or an ASRock Rack thing. But even then, plugging a BMC in to the same network as everything else can cause mystery problems.

            1. 7

              Don’t bother. They’re expensive, loud, awkward, heavy, and inconvenient.

              Hmmm. I received the same advice from Lobste.rs a few years ago, ignored it, and had a ball :)

              I mean, you’re not wrong. They are loud, awkward, heavy, and inconvenient in most ways - and expensive as hell if you buy new (but cheap otherwise).

              But in my case at least, the enjoyment I got from it outweighed the negatives.

              1. 1

                plugging a BMC in to the same network as everything else can cause mystery problems.

                Curious what sorts of experiences might have prompted this comment – certainly proprietary vendor BMC firmware sometimes does wacky stupid things, though I’ve never encountered any that caused problems beyond the BMC itself…though if you happen to get hardware that can run OpenBMC (and ASRock Rack is likely one of the best options there in terms of basic COTS stuff) I’d hope the odds of such wackiness are much lower (and you might actually be able to debug and fix it if it does misbehave!).

                [I work on OpenBMC and have put a fair amount of time into porting it to ASRock Rack systems.]

                1. 2

                  I don’t remember the details, but it had to do with IPs being assigned to multiple devices in some scenario I think.

                  I don’t suppose you have some info handy for OpenBMC for ASRock Rack? I’d love to get rid of their default implementation =).

                  1. 1

                    I don’t suppose you have some info handy for OpenBMC for ASRock Rack? I’d love to get rid of their default implementation =).

                    Well, if you happen to be using an E3C246D4I there’s support in place in the mainline OpenBMC tree; there’s also work underway on the X570-D4U and ROMED8HM3 boards. Drop in on the mailing list or Discord (https://github.com/openbmc/docs/#contact) and you can certainly get some assistance (likely from me, though there are others working on ASRock systems as well).

              2. 7

                Let’s talk specifics. That machine is fine, except for the disk subsystem:

                • HP’s Pxxx RAID controllers are one of two series which have lost data for me. It might be better now, but I’d much rather have individually addressed disks and use mdadm or ZFS.

                • That’s just not enough disk space, and SAS spinners are not much faster than consumer SATA spinners. You will want to replace them with 1 to 4TB 2.5” SATA SSDs as soon as you have the money. This will also reduce the power draw and thus the heat, so that’s a bonus.

                Oh, the RAM is adequate, but you will want more. Get it soon; DDR3 RDIMMs will start to disappear before long. 16 x 16GB sticks will cost you about $600 for new-old-stock; 8 x 16 might be enough.

                Also – rack servers assume that the air is clean and filtered. They have no dust filtering. Putting it directly on the ground is probably a bad idea.

                1. 1

                  I agree that 4x300 GB is probably too small for serving up video and gaming unless you have a fairly small collection, but I’m curious about how you arrived at that large a requirement for SSDs (I’m reading you as saying 4-16 TB of SSD capacity). I’m seeing ~11 MB/min for my 2160p (4K) video, which puts the average movie at ~1.2 GB. 1-2 TB of effective capacity seems sufficient.
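
                  For what it’s worth, the arithmetic behind that estimate (my own averages, so treat the constants as assumptions):

                  # Back-of-the-envelope media sizing; both constants are my own assumptions.
                  MB_PER_MIN = 11        # my average for 2160p video, in MB per minute
                  AVG_RUNTIME_MIN = 110  # assumed average movie length

                  movie_gb = MB_PER_MIN * AVG_RUNTIME_MIN / 1000   # ~1.2 GB per movie
                  for capacity_tb in (1, 2):
                      print(f"{capacity_tb} TB holds roughly {capacity_tb * 1000 / movie_gb:.0f} movies")
                  # Roughly 825 movies per effective TB at these rates.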

                  Your point about power use is good, though given how expensive server SSDs are, I think you’d have to do some math to figure out whether it’s worth it (to be honest, I haven’t done that: I’ve got a solar array that reliably makes more energy than I use, so I’m pretty cavalier with power use).

                  Also not sure why you’re recommending that much RAM. I’ve got 48 GB and never use a significant amount of it, though I don’t have any containerization or virtualization running at the moment (it’s on my todo list but alas, life…).

                  In short, what are the intended uses you’re recommending that configuration for? It seems like a lot for what nalzok’s proposing to use his for.

                  1. 1

                    nalzok says: “home lab… media/gaming server and a playground”.

                    Such a machine should use RAID10, or for ZFS, stripe over two pairs of mirror vdevs. I’m recommending between 2TB and 8TB of effective storage.
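
                    For anyone new to that layout: a stripe over mirror pairs gives you half the raw capacity (the ZFS pool would be created with something like zpool create tank mirror <d1> <d2> mirror <d3> <d4>). A quick sketch of the arithmetic, with placeholder disk sizes:

                    # Effective capacity of a RAID10-style stripe over mirror pairs.
                    # Four 2 TB disks are placeholders; swap in your own sizes.
                    disks_tb = [2, 2, 2, 2]

                    pairs = len(disks_tb) // 2
                    # Each mirror pair contributes only its smaller member's capacity.
                    effective_tb = sum(min(disks_tb[2*i], disks_tb[2*i + 1]) for i in range(pairs))
                    print(f"{sum(disks_tb)} TB raw -> {effective_tb} TB effective")  # 8 TB raw -> 4 TB effective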

                    My media server (MythTV, Owntone, a few other services) has 6TB in RAID10. It’s nearly full and I have to make decisions about pruning recorded TV fairly regularly. Just about 1TB of that is music, the rest is video, recorded or ripped.

                    Why 128-256 GB of RAM: because (a) everything you cache in RAM is faster and less wear on your storage; (b) home lab servers have a tendency to run out of RAM for running VMs and containers before they run out of non-peak CPU; (c) now is the time to buy it, when it is available, as opposed to three years from now when you wish you had bought more RAM but nobody has it.

                    1. 2

                      Thanks, maybe I’m just underestimating how memory hungry the setup will become once I start setting up VMs. This is making me consider buying more memory.

                      I still don’t understand needing that much video. At my average storage rate, that equates to something like 7.5k hours of video.

                      1. 1

                        MythTV Recording Statistics

                        • Number of shows: 441
                        • Number of episodes: 24902
                        • First recording: Monday February 19th, 2007
                        • Last recording: Sunday December 5th, 2021
                        • Total Running Time: 14 years 9 months 16 days 2 hrs 45 mins
                        • Total Recorded: 2 years 1 month 6 days 8 hrs 52 mins
                        • Percent of time spent recording: 14%

                2. 6

                  I use SFF business desktops for servers since I don’t need much. They’re much quieter, and cheap too.

                  1. 2

                    That’s what I do. And Intel NUCs. Cons: they’re mobile chips and they don’t rack as well (I use a 1U tray). Pros: the stuff you said. It’s been fine. The next little server will be a mini-ITX thing, which is the same idea.

                  2. 3

                    Some observations from buying a stupidly large rackmount case and using it to house:

                    • several Raspberry Pis (OpenVPN, Home Assistant, Pi-Hole)
                    • an X200 games ‘server’
                    • a switch
                    • my NBN modem
                    • a consumer class router
                    • a TP-Link mesh network base
                    • a Pi Zero with LED status lights for each server + external Internet
                    • a refurbished DL320 G6 ProLiant as a 4TiB NAS

                    It looks fantastic :) Especially since I “connected it to the cloud” ;)

                    https://i.postimg.cc/C5N8GG5k/the-cloud.jpg

                    https://vimeo.com/manage/videos/577062093

                    Having everything in the one enclosure is great during power outages (not uncommon where we are). Makes it easy to hook up to a generator. Also great for cleaning and tidying.

                    It’s neat to play around with actual server hardware. First time I’ve ever done it and the Linux-based monitoring software is neat too. Plus fast hardware RAID is nice for its purpose as a NAS - I have 2 x 500GiB HDDs for the OS, and 2 x 4TiB HDDs for the NAS itself.

                    It was cheap. Only cost around AUD250 plus shipping! Although our power bill has increased 27% year on year. I’m not sure how much of that was due to WFH during lockdown, though … we’ll see over the course of the coming year.

                    I’ve never used a KVM either, and it’s neat having a fold-out terminal built into the server cabinet (and yes, that’s an Apple Mac Pro keyboard, circa 2007. I was missing a few cables for the KVM at the time…).

                    https://vimeo.com/579389530

                    Finally, @1amzave is dead right about 1U servers being noisy (just listen to that video above!). That ProLiant sounds like a 747 on takeoff during POST and boot. Fortunately my bedroom is downstairs, and the kids are used to the house sounding like a data centre anyhow.

                    1. 2

                      Are the LEDs on a small mezzanine board on top of the Pi Zero? What board is that? That’s pretty cool!

                      1. 2

                        Yeah - it’s a Blinkt board, originally purchased as a build light. They’re run by a super low-fi Python script that keeps an eye on things:

                        #!/usr/bin/env python3
                        
                        from blinkt import set_pixel, set_brightness, show
                        import os
                        import time
                        
                        servers = [
                            # These are sorted to allow easy identification.
                            "blinkenlights.local",
                            "gaming.local",
                            "homados.local",
                            "pihole.local",
                            "vpn.local",
                        
                            # A hacky public Internet access indicator comes last.
                            "tpg.com.au"
                        ]
                        
                        while True:
                            for i in range(8):
                                if (i >= len(servers)):
                                    set_pixel(i, 0, 0, 0)
                                else:
                                    set_pixel(i, 0, 0, 3)
                                    show()
                                    result = os.system("ping -c 1 " + servers[i])
                                    if result == 0:
                                        set_pixel(i, 0, 1, 0)
                                    else:
                                        set_pixel(i, 10, 0, 0)
                            show()
                            time.sleep(60)
                        

                        The kids learned, during lockdown, to go and take a look at the cabinet lights if their Zoom or multiplayer game dropped :)

                        There’s a nice effect in there, which is that lights are off if unused, red if down, green if up, and blue while being pinged, so you get a nice ripple of blue across the green (or red!) every minute.

                        Relevant to this thread, particularly, is that my wife named the DL320 “Homados”, which is an ancient God of battle-noise. That thing is not quiet :)

                    2. 3

                      On the specifics of the machine you’re looking at, I’d highly recommend Dell instead of HPE - Dell are a lot more friendly with things like BIOS updates (HPE require a service contract to access BIOS updates). In addition, certain HPE machines run the fans at full speed when you use “non-approved” PCIe adapters.

                      I’d look at something like a Dell R720 - it’s a generation newer than that DL360 G7, still takes cheap DDR3 RDIMMs, and it’s fairly quiet. If you don’t mind Reddit, /r/homelab is a very useful resource. So are the ServeTheHome forums.

                      1. 3

                        I hesitate when you say “apartment”, “noisy”, & “fine”.

                        I recommend either 1) setting up a VM on hardware you already have, or 2) just renting a VM on Digital Ocean / Vultr / &c. Once it’s got an IP and a domain name you can reach it by, who cares where it’s at? Just set up VirtualBox / Hyper-V and get Debian or SUSE running and you’ll be happy, I’d think.

                        If you really want your own hardware, get a tower like a Z820 / Z840 / &c. At least that way you might be able to replace the fans with quieter ones. Personally, my SuSE-on-Hyper-V box is in a closet 2m to my right; I hate noise.

                        1. 3

                          It will mostly act as a media/gaming server and a playground for me to experiment with some shiny new technology. I am not expecting its duty to be mission-critical.

                          In my experience, home servers have a sneaky habit of turning “mission critical” gradually. Make sure backups are there before experiments start…

                          1. 2

                            I ordered https://www.newegg.ca/HPE-ProLiant-MicroServer-Gen10-Plus-P16006-001/p/09Z-01S1-000N5?Item=09Z-01S1-000N5 for my new NAS (if it ever gets here… being cut off (by land) from the rest of Canada is… annoying - just checked and it looks like it’s arrived in the region!).

                            Decent-ish specs, compact, quiet(?), and not too bad a price (on sale).

                            Dell had some pretty good deals going for their tower servers too (but I don’t have room hence my choice).

                            1. 2

                              Data point for you… the machine you linked has CPUs that get a Geekbench score of 507/3069 (single/multi). The Ryzen 7 4800U gets 1030/5885.

                              I have a SimplyNUC Ruby R7 with that Ryzen CPU, 64 GB of RAM, and two 2TB SSDs. It’s substantially more power-efficient than that server. The only downside is fewer cores overall (8x2 vs 6x2x2).

                              That said, the NUC is the VM host for my k8s cluster, a Prometheus/Grafana VM, 2x local DNS resolvers, a backup WireGuard server, and a smattering of test VMs.

                              I went through the same thought process you’re talking about now a year ago. Decided on the NUC for power, noise, and space - but especially power.

                              edit: I should clarify - while that server linked is substantially cheaper than the NUC, the power, noise, and “accoutrement” of rack mounted servers is going to add up. So I balanced that cost with my available disposable income at the time and was fine with paying more for something tiny.

                              1. 2

                                I would agree here. The NUC will be silent; the rack won’t.

                              2. 1

                                I bought a used IBM x3650 M3. It’s big and heavy (18 kg, twice the weight of my bike), and so loud that I can hear it from upstairs. I had to turn it off because of the noise, but I have been reading articles about people successfully modding the fans. Aside from that, the thing is a beast from the past with 64 GB of RAM and a 12-drive array with hardware RAID. If any of you have suggestions on replacement fans, let me know!

                                1. 1

                                  Noise and/or heat is going to be an issue in a rack server like that. If you want a rack-mountable case, aim for 4U or larger, but really, just build a tower if you’re not going to put it in a rack.

                                  1. 1

                                    Coming late to this thread, I need to suggest building a [LackRack](https://wiki.eth0.nl/index.php/LackRack), also known as “the ultimate, low-cost, high shininess solution for your modular datacenter-in-the-living-room.”

                                    1. 1

                                      You’ve basically hit all of the cons of this approach: that hardware will be pretty heavy, very loud, and have poor power efficiency. But in my opinion playing with server hardware is worth it! I had the same setup going for a while and it worked well, especially if you can put the server in a small rack (something similar to this: https://www.amazon.com/dp/B087G8C6X7).

                                      1. 1

                                        There’s a bit of a subculture doing stuff like this; one prime example is serverbuilds.net, which is a nice resource when you’re getting into this hobby.

                                        I don’t use rack mounted stuff, but I’ve bought (and still use!) some used enterprise stuff, mostly workstation equipment. DDR3 based stuff is pretty cheap, DDR4 ECC is still quite expensive.

                                        Have fun with your homelab!

                                        1. 1

                                          The newer it is, the better. A lot of bargain basement rackmount servers of a certain age did not have the ability to reduce fan speed when thermal load is low, so they’ll just stay at full horrible noise all the time.

                                          The crossover point is somewhere around 2012. Avoid rackmount servers older than that. From what I remember, Dell R410 and R415 servers were the first I saw from them that had automatic variable fan speed.

                                          1. 1

                                            My brother and I have an R610 running Debian, a couple of switches, and a rack-mounted router in a full rack he picked up for free from work. At the moment it sits in his office about four or five feet from his desk. Fan noise is an issue sometimes, though that’s improved significantly since we junked a firewall we had been playing around with. I do have to make sure not to place it under heavy load when he’s in the office, though. The long-term goal is to hang drywall and divide the office into a server room and office proper, to mitigate the noise and give access to the rack, circuit breakers, etc. without cutting through the office.

                                            The Dell certainly doesn’t run with fans at full speed all the time. There are noticeable differences in fan noise between when I have it at full load (usually transcoding video) and when it’s running its base load (Miniflux, Gitit, Nginx, Huginn, Emby, a semifunctional QGIS server, a tmux session with way too many windows). The ASA we were playing around with ran far louder.

                                            The one thing I think you’re totally missing (or at least didn’t mention) is a video card: you say you want to use the server for gaming but don’t mention plans for one. I’ve played around with some not particularly graphics-intensive games (Paradox grand strategy junk) on mine, and the lack of a video card is an issue even for those. 1U chassis servers are often difficult to fit with video cards (both in terms of space and power within the unit). There are external solutions to that (e.g. this) but those tend to increase the noise even more.

                                            I don’t see information on what the riser card setup is for that server on the Amazon posting, which is going to be another major consideration when it comes to fitting a card. I think there’s some discussion of this specifically for that server in r/homelab, which I highly recommend as a resource.

                                            For the first few months we had our server, we ran it sitting on the bottom of the rack rather than on rails. I never saw a problem with heat management even when transcoding video for hours at a time, though that was metal on metal. I’d be careful to check the operating temperature if you’ve got it on some less heat-conductive surface, although it likely won’t be a problem.
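
                                            If you do want to keep an eye on it, something simple like this is enough; a minimal sketch using psutil (a Linux-only API, and the sensor names vary by platform):

                                            #!/usr/bin/env python3
                                            # Log temperatures once a minute; psutil.sensors_temperatures()
                                            # is Linux-only and the sensor labels differ between machines.
                                            import time
                                            import psutil

                                            while True:
                                                for name, entries in psutil.sensors_temperatures().items():
                                                    for entry in entries:
                                                        print(f"{entry.label or name}: {entry.current:.1f} C")
                                                time.sleep(60)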

                                            You may want more disk space if you want to run anything other than RAID 0, unless you have a fairly small media collection and are happy uninstalling and reinstalling games regularly. Keep in mind that the RAID arrangement can affect read/write performance for disks. Not sure how much video you’d expect to stream simultaneously. We’ve tested six video streams simultaneously from a RAID 10 setup of 4x500 GB drives running at 5.4k RPM, IIRC, and not run into any problems (we don’t actually run more than three video streams at the same time in normal use yet, though - that will likely change when my kids are older).
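
                                            For a rough sense of why a handful of streams isn’t a problem, the arithmetic looks like this (the per-drive throughput and per-stream bitrate are assumptions, not measurements from our setup):

                                            # Can a 4-drive RAID 10 of 5.4k RPM disks feed N video streams?
                                            # Both constants below are assumptions, not measurements.
                                            DRIVE_MB_S = 80     # assumed sustained sequential read per drive, MB/s
                                            STREAM_MBIT = 8     # assumed bitrate of one 1080p stream, Mbit/s

                                            streams = 6
                                            needed_mb_s = streams * STREAM_MBIT / 8   # ~6 MB/s total
                                            available_mb_s = 4 * DRIVE_MB_S           # best case: reads hit all four members
                                            print(f"need ~{needed_mb_s:.0f} MB/s, have up to ~{available_mb_s} MB/s sequential")
                                            # Concurrent seeks will eat into that, but the headroom is large.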

                                            I echo the recommendation to look at a larger unit (2U or more). I want to pick up an R510 at some point so I’ve got more space to work with in terms of additions.

                                            1. 1

                                              I found a racked server to be exceptionally loud, distracting, and wasteful in terms of space. It was super heavy, and I had put it into a rollable case, which took up too much room.

                                              My home “server” for a long time was a Linux desktop I cobbled together from garage sale and thrift store parts. I tried a rack-mounted server at one point, but after realizing the issues I got rid of it and built a new Linux desktop to use as a home “server.”

                                              1. 1

                                                If you live alone in a hermit’s cabin in the woods, you can do it. The whine of the ~12 1U fans is incredible.

                                                Also, with any computer: never lay it on the ground; put it on a table! In every room, the air near the floor (up to roughly 90-110 cm) is much dustier than the rest of the air in the room (which is also why beds were built much higher back in the days of oven heating). So on the ground it is too dusty (it will clog the heat sinks in about 2-3 weeks and make the fans even louder), and under the ceiling it is too hot. Choose the middle.