Now that my home has (somewhat) fast Internet access, I’m looking to set up a rack-mount server at home, and buy a PineBook Pro running FreeBSD as a mobile terminal. I’d then retire my current dev machine (ThinkPad W540 running Ubuntu) as a games and media machine.
However, I’ve never run serious server-class hardware myself, and was wondering if the Lobsters community had any hints, tips, or warnings to share?
My thinking at the moment …
Being in Australia (thus, hot in summer), I’m looking at something like this small rack-mount cabinet with some added fans.
I’m thinking of starting the experiment cheaply, by buying something like this used Proliant DL360 and replacing my ghetto NAS with it. I note that the Proliant and many similar servers have hardware RAID; I assume I’d be best off eschewing this for software RAID to make recovery to different hardware feasible?
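To make the question concrete, what I have in mind is roughly the following - a minimal sketch on FreeBSD, assuming the controller can be switched into a pass-through/JBOD (HBA) mode and the two data disks show up as ada1 and ada2 (the device names and the pool name "tank" are just illustrative):

```sh
# let ZFS do the mirroring instead of the RAID controller;
# device names and pool name are assumptions for the example
zpool create tank mirror ada1 ada2
zpool status tank    # both mirror members should show as ONLINE
```

The appeal being that the pool is readable on any machine that speaks ZFS, regardless of which controller the disks end up behind.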
I’m assuming that my current speeds (~90 Mb/s down, ~30 Mb/s up) will be adequate for most purposes (SSH session w/ tmux, SWANK (for SLIME), etc.). If anyone is running this sort of setup themselves, what speeds have you found necessary?
Don’t. They’re loud, heavy, inconvenient, and expensive.
To add to this: unless you specifically want to learn about enterprise technologies (RAID, SFPs, etc.), go with consumer hardware. You’ll save money on parts, electricity, mounting, etc. and won’t have to deal with the noise as much. NUCs are great if you want something small and powerful(-ish), or go Mini-ITX/mATX for a small-form-factor custom build. The consumer space has a lot more flexibility than enterprise hardware, which lets you fine-tune your build for whatever use case you have.
I second the NUC comment. I have two set up at home and they’re awesome.
Agreed.
I had a rack-mount 2U Compaq DL380 with all the bells and whistles that I got for free during the dot-com bust. It was a pain to live with:
Do the planet and yourself a favor and go with consumer-grade hardware.
I’m not an environmentalist, but the prospective impact to my power bill has me concerned. $10 / month to run a fast machine would be okay, though. I’ll have to do some more research into TCO I think.
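Back-of-the-envelope, assuming a steady 150 W draw and roughly 30c/kWh (both numbers are guesses on my part):

```sh
# rough monthly running cost: watts -> kWh per month -> dollars
watts=150            # assumed average draw for an older 1U server
cents_per_kwh=30     # assumed tariff
kwh_month=$(( watts * 24 * 30 / 1000 ))
echo "~${kwh_month} kWh/month, ~\$$(( kwh_month * cents_per_kwh / 100 ))/month"
```

At those assumed numbers it lands around $32/month - well past the $10 mark, hence the need for more research.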
They also eat power. See vermaden’s posts (e.g. https://vermaden.wordpress.com/2019/04/03/silent-fanless-freebsd-server-redundant-backup/ ) on running a simple server if you really need something.
(bias: I’ve done the freenas thing and paid into the myths of server hardware/ecc ram/etc. While it’s come in handy at times (ipmi is convenient) it’s also been a time burden when it comes to doing maintenance or trying to do any slight customization in setup from what freenas wants. If your needs are largely just storage, consider just leveraging your fast internet connection and paying a cloud provider. Remember even with a server-grade NAS you’re arguably still going to be paying someone for off-site backup.)
Ah, I see you are a person of taste and distinction as well! :D My experience doing this is about a decade out of date, but hopefully some of it is still relevant. Main things I recall:
borg is the best solution I’ve found. These days, I just use a purpose-built desktop machine for my home server needs and it does fine. But if you want to play with server hardware, it’s a slightly different world and I encourage you to give it a go.
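If it helps, the day-to-day borg workflow is only a handful of commands; a sketch, with the repo path and retention policy made up for the example:

```sh
# one-time: create an encrypted repository on the home server
borg init --encryption=repokey ssh://user@server/srv/backups/laptop

# regularly: archive the home directory, then thin out old archives
borg create --stats ssh://user@server/srv/backups/laptop::{hostname}-{now} ~/
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 ssh://user@server/srv/backups/laptop
```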
Thanks for the detail in your reply :). Yeah to be honest, curiosity is a large driver here. I’ve never run hardware like this and on paper it seems like it’d be a good bet. Opinions on this site seem to vary though :)
Yep, I encourage you to give it a go if you’re doing it for curiosity. Just know that, as others have said, they ARE loud, inconvenient, heavy and expensive. Those are the tradeoffs that datacenters make for their advantages: namely, it’s easy to shove tons of them into a rack, and their reliability and ease of servicing is a cut above consumer hardware. Those advantages usually aren’t worth it for home servers for most people… but I also have friends who have a small cabinet like the one you linked tucked into their basement and it works just fine. (If you can find one, I might recommend a cabinet with a screen front rather than glass, as that should reduce the need for extra fans.)
I’ve done that at work because we needed our own infrastructure and I found a great deal on used equipment (it just required borrowing a truck, driving to another state to load it out of storage, and hauling large UPSes up and down stairs). I would:
Recommend for it if you want to learn how to deal with “pro” servers. There is so much I learned doing that: SAS, real RAID cards with battery-backed cache, server boot times, monitoring hardware and UPSes, automated deployment, mounting a rack, finding the right kind of rails (what a pain, there are so many models), distributing power supplies, VLANs, making and running your network cables properly, professional routers and switches, and so many other things…
Recommend against it if you just need something that works; I would go with consumer machines for that. They are much quieter, they boot fast, and getting spare parts is cheap. Of course they are usually less reliable (no ECC RAM, less rugged, lower-quality components), but if you have good backups or redundancy, it shouldn’t matter much. And frankly, hardware RAID (at least on the Dell PERC I have) has caused me more issues than it has solved. As was said in other comments, old servers use a LOT of power (and so produce a lot of heat) compared to today’s consumer systems for the same performance. And if your room doesn’t have good AC or air circulation, depending on where you live, you are in for a surprise. The servers I got are not that noisy most of the time (a bunch of Dell R[67]10 and HP DL380G6) and could clearly sit in a room adjacent to a bedroom as long as they are not all at 100% CPU (and you don’t force the fans to max speed). In that case they become pretty noisy, but it is not that bad (and I always have 5 machines running at a time, with others ready as backups).
For dev, I would not trade my workstation with its NVMe drive for anything else… Just as an example (using mostly memory, but the loading time was reduced by almost the same ratio), that $1000 machine (i7-8700K, 32G, samsung 960pro nvme) runs a bunch of queries (250,000 to be exact) on redis (single instance in docker) in 2s whereas it takes 12s on a Dell R710…
It’s not really clear what you’re trying to do here. Racks are designed for high-density hosting, where you want to pack as many machines into as small a space as possible. They often expect to be placed in a room with external cooling. Rack-mounted computers are typically designed to pull cold air from the front and push it out of the back, so they expect a source of cool air at the front and something that can take away the hot air from the back (in my last job, if I cycled to work in the rain, I’d often go and stand behind one of the server racks to dry off - a few kW of waste heat in air that the aircon system had dried out before pushing into the front of the racks dried me off very quickly and wasn’t enough to set off the humidity alarms in the aircon system).
I can imagine a couple of good reasons for wanting to do this. If you want practice managing rack-mounted systems for a job in a medium-sized company, having a small rack at home is useful to practice on (though note that the problems involved in running a small server room with a few racks are very different to the problems involved in running a tiny 4U rack). If you want to consolidate some server machines in a small space, a rack may be a good solution, but only if you’re at the 10-15 server sort of scale. Below that, small form-factor cases are likely to be easier to store and keep cool.
So my goals are to:
1. Switch from a powerful dev laptop to basically a thin client, running on a very powerful dev / general-purpose server.
2. Replace my current “stack of bits on a hall table” with a more permanent setup.
3. Upgrade my ‘NAS’ (Pi 3 w/ 2 x 2 TiB HDDs running in software RAID) with something fast and good, that’s capable of streaming media to multiple devices.
4. Have some fun with something a bit outside my comfort zone.
For 1. and 3., the single-threaded performance of a server processor can be pretty lousy, so it may not be faster for a given workload. There are sites with public benchmarks, like https://www.cpubenchmark.net/ , CPUBoss, or Geekbench, that you should check first.
For 2. and 4., you can buy ATX server-grade equipment and slap it in a tower case. That solves the noise problem; what’s noisy is the small fans moving a lot of air. HP and Dell are likely to have proprietary motherboards that do not fit in an ATX case; don’t assume they will be compatible. It’s probably going to be more expensive to build from parts than to buy a used HP server, and there are a lot of ways to screw it up: E-ATX is not the same as ATX, and there are different types of memory, not all of which have ECC, so if you buy memory separately from the motherboard you have to be pretty careful to make sure they match. Same goes for the processor: you need to confirm the socket matches what’s on the motherboard. Older motherboards may not support the latest CPU for a given socket. Different revisions of the same motherboard (1.10 versus 1.20) may also support different processor versions; for example, old Supermicro boards may support Xeons but not Xeon v2s despite them having the same socket. That should be listed on the product page for the motherboard.
You would also need to make sure your power supply delivers enough watts. Off the top of my head, a rule of thumb is 10 watts per 3.5” drive, 3 watts per RAM stick, and 5 watts per SSD; for add-on cards, check the datasheet. There will also be a max power draw for the motherboard. Add this up and multiply by 1.2 to get a safe number, as you usually don’t want to use more than 80% of what’s available. Or just buy an 800 W supply or something, which will almost certainly be more than enough.
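As a sketch, plugging that rule of thumb into a hypothetical build (the 200 W motherboard + CPU figure, four spinning disks, two SSDs and four DIMMs are all made-up numbers):

```sh
# PSU sizing per the rule of thumb above
hdds=4; ssds=2; dimms=4
board_and_cpu=200     # assumed max draw for motherboard + CPU; check the datasheets
draw=$(( hdds * 10 + ssds * 5 + dimms * 3 + board_and_cpu ))
echo "estimated draw ${draw} W, size the PSU for at least $(( draw * 12 / 10 )) W"
```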
There are trayless hot-swap bays in both 2.5” and 3.5” sizes, for example https://www.newegg.com/athena-power-bp-15287sac-other/p/N82E16816119044 , which fit into a 5.25” bay in a tower case if you want to use RAID.
It’s not clear that a rack is necessary for this. My work desktop is a 10-core Xeon (20 hyperthreads) and I mostly use it for development remotely via RDP or SSH. I also use a bunch of cloud VMs. For a machine that you’re using 24/7, a cloud VM will be a lot more expensive than just buying your own hardware. For something you use intermittently, that’s not always the case. For example, I can buy time on a 64-core VM (probably actually 64 hyperthreads) with 256 GiB of RAM and 512 GiB of fast local disk for around £2.50/hour (a bit less for AMD, a bit more for Intel). Buying a machine like that would cost me around £12,000, or about the same as renting the cloud VM for 4,800 hours. If I use it for 40 hours a week, that’s 120 weeks, so I’d need to be actively using the machine for the full working week for about three years for it to be cheaper to buy it myself. I don’t actually need a VM like that 90% of the time, so it’s a lot cheaper to rent one for a few hours when I do want one than it is to buy it myself (especially when you consider depreciation: I’m not a huge fan of spending £12K on something that rapidly loses value).
I can definitely see the value for this but it doesn’t necessarily require a rack. For NAS storage, there are a bunch of very nice NAS cases of varying sizes (I have a Chenbro Mini-ITX that I’m more or less happy with) with removable drive bays and so on. There are also off-the-shelf servers. If you want to support 8-12 disks in a separate enclosure with SAS, then a rack is probably the right route, but if you’re looking at a flash cache and 3-6 disks in a RAID array, it’s not necessarily the best option.
This doesn’t require a rack, but it’s a great excuse for getting one!
This is a half-depth cabinet. Great for patch panels, most switches and routers, but your options for mounting servers are going to be limited.
In Dallas, Texas, I had a dedicated AC unit for the server room (my bedroom) to prevent the HP DL580 G5 and other machines from cooking me. It augmented the flat’s AC adequately.
You may find it hard to enable “JBOD” mode so you can use software RAID / ZFS, but it’s worth it for the reasons you state. Hardware RAID is just too unreliable, even across controllers with the same model number.
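The payoff is that moving the pool to different hardware is just an export and an import - roughly this, with the pool name purely illustrative:

```sh
# on the old machine, if it's still alive
zpool export tank

# on the replacement machine, once the same disks are cabled in
zpool import          # scans the disks and lists any importable pools
zpool import tank     # add -f if the pool was never cleanly exported
```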
SSH and HTTPS use very little bandwidth in most situations. If you start having high volumes, you will feel it, and hopefully monitoring will tell you when you overload your network capacity. Streaming audio? That depends on the number of consumers and the actual bandwidth per consumer. That’s a calculation you have to do yourself, find online, or just assume you are alright until you have evidence to the contrary.
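As a sketch of that calculation, with the per-stream bitrate just an assumption:

```sh
# will N remote viewers fit in a 30 Mb/s uplink?
uplink_mbps=30
stream_mbps=8        # assumed bitrate for a decent 1080p stream
viewers=3
echo "need $(( viewers * stream_mbps )) Mb/s of the ${uplink_mbps} Mb/s uplink"
```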
I managed to have home servers and still make video calls and stream video over a connection with 3 Mbps down and 0.7 Mbps up. It was painful, and I learned to download videos ahead of time as much as possible since the video calls were the point, but I managed.
I’ve been running an old dell r720 from ebay under my desk for a few months. It’s pretty quiet and doesn’t seem to need any extra cooling. The biggest advantage I’ve found with having a rack-mounted server is that I can log into the remote management console and kick it when I am not at home. Many enterprise tower workstations also have this feature, but a used rack-mount server is a lot cheaper than a used tower with comparable specs (at least it was when I was looking). I’d probably prefer the tower.
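Concretely, the “kick it remotely” part is just IPMI over the LAN - something along these lines, with the BMC address and credentials as placeholders:

```sh
# query and power-cycle the box through its BMC (iDRAC on a Dell)
ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'secret' chassis power status
ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'secret' chassis power cycle
```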
I was looking at buying used HP machines, but people have said that HPs are pretty unfriendly to “non-certified” PCIe cards and will spin the fans at 100% if one is installed. My Dell did this too, but I managed to find an IPMI trick to disable this behaviour. Not sure if the HP rumor is actually true or not.
I use this machine as a ZFS box and for a number of other long-running services. Getting ZFS to work was a bit of a chore: I had to replace the built-in RAID controller with the “low end” model of the same controller, flashed with IT-mode firmware (these can be purchased on eBay pre-flashed by hobbyists).
On speeds, I have much faster internet than you and still find it irritating that my local laptop drive can do 2-3 GB/s (NVMe) but if I want to send something to my NFS mount, I’m constrained to (best case) gigabit. My local network is all 10 gig ethernet for this reason. As others have said, ssh/tmux probably don’t need lots of bandwidth (though if you have a really crappy provider, you might have latency issues).
Some more details are at https://dpzmick.com/posts/2020-01-11-server-network.html and https://dpzmick.com/posts/2020-01-12-nas.html
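If you want to see where the bottleneck is on your own setup, a quick iperf3 run between the laptop and the server shows what the wire can actually do (the hostname is a placeholder, and iperf3 needs to be installed on both ends):

```sh
# on the server
iperf3 -s

# on the laptop
iperf3 -c nas.example.lan    # reports achievable TCP throughput over the link
```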
That is a large factor in my thinking so far :)
No, no, no. They sound like jet engines. No. Unless you’re talking about a rack of Raspberry Pis.
I replaced a desktop machine I’d been using as a server for years with a rack-mount server running Proxmox several months ago. I don’t actually have a proper rack for it right now; I have it plopped on a metal wire shelf (one of these things), and that seems to work okay, although I live in a fairly temperate climate. I built mine from parts in a 4U chassis, which was not the cheapest way to do this. Buying a used rackmount server with hardware already installed seems perfectly sensible to me.
SSH should work fine with that amount of bandwidth, although it does seem a little low for things you might want to do like serving video or audio.
First, seeing as you are in Oz (thus, loads of sun) I’d start by getting some PV panels plus an inverter to offset the power used by the rack. That is, assuming you have the space for such.
I would not get anything earlier than a Gen7 to reduce power consumption and noise levels - with fans running at full tilt the things sound like rack-mounted jet engines. The fans do spin down to a more manageable level on a well-configured machine but woe to thee who adds anything but official HP hardware in any of the slots since that will keep the fans running at full speed at all times.
I made a rack out of some dumpster-dived supermarket shelves, lumber, a truck air filter and a forced draft fan. The thing doubles as drying cabinet for produce (mint, mushrooms, fruit etc.) by having the equipment in the top half of the rack followed by an air flow divider and 8 rack-sized metal-mesh-covered drying frames. From top to bottom the thing contains:
It runs Proxmox on Debian and hosts a range of services, including a virtual router (OpenWRT), serving us here on the farm and the extended family spread over 2 countries. The server-mounted array is used as a boot drive and to host some container and VM images; the DS4243 array is configured as a JBOD running a mixture of LVM/mdadm-managed arrays and stripe sets used as VM/container image and data storage. I chose mdadm over ZFS because of the greater flexibility it offers. The array in the DL380 is managed by the P410i array controller (i.e. hardware RAID); I have 4 spare drives in storage to be used as replacements for failed drives.
The rack is about 1.65m high, noise levels are manageable for having it in a room adjacent to a bedroom. It looks like this, more or less:
https://imgur.com/a/M4Lbf1K
In the not-too-distant future I’ll replace the 15K SAS drives with larger albeit slower (7.2K) SAS or SATA drives to get more space and (especially) less heat - those 15K drives run hot.
I chose this specific hardware - a fairly loaded DL380G7, the DS4243 - because these offered the best price/performance ratio when I got them (in 2018). Spare parts for these devices are cheap and easily available, I made sure to get a full complement of power supplies for both devices (2 for the DL380G7, 4 for the DS4243) although I’m only using half of these.
On the question of whether this much hardware is needed, well, that depends on what you want to do. If you just want to serve media files and have a shell host to log in to, the answer is probably ‘no’, depending on the size of the library. Instead of using ‘enterprise class’ equipment you could build a system tailored to the home environment, one which prioritizes a reduction in power consumption and noise levels over redundancy and performance. You’ll probably end up spending about the same amount of money on hardware, a bit more in time, and get a substantially lower-performing system, but you’d be rewarded with lower noise levels and reduced power consumption. If you do go the enterprise route instead, the extra power draw can be offset by adding a few solar panels and the noise by moving the rack to a less noise-sensitive location - the basement, the barn, etc.
Thanks for all the details! That’s plenty to read through and digest.
The reason I was looking at fast hardware was for development - compiling Firefox, for instance (which grinds a bit on my W540).
I have a build container on the machine which I start on demand for this purpose; it was one of my reasons for building this system. I don’t have a Pinebook; instead I use older Thinkpads like the T42p - with their single-core Pentium M @ 1.8GHz, 2GB RAM and a 110GB SSD they’re probably comparable to the Pinebook, except for the (better) keyboard and screen and (worse) battery. One of the containers on the server (aptly named session) runs remote desktop or single-application sessions (through x2go) for a number of users at home and abroad. The server also hosts a Nextcloud instance with several apps, e.g. Collabora Online (web-based LibreOffice); a mail server (Exim on Debian, Dovecot, Spamassassin through spamd, greylistd, managesieve - I’ve been hosting mail for my domain for about 24 years, going from Sendmail to Exim, and from spam being nonexistent to spam not being a problem); web services; communications services (Jitsi Meet, Nextcloud Talk, XMPP through Prosody); media services (Airsonic, mpd, Peertube); local and remote search services (Searx combined with the recoll engine for local search); some experimental services based on things like Pixelfed; and more.
While a fast build server is a good thing to have, I do test and mostly use builds on other - slower - systems, so as not to fall into the trap of building for fast hardware and forgetting that there are those who want, or have, to make do with slower systems.
I’ve been having a blast with the MicroServer Gen10 Plus. You get (or can get) a lot of server features such as multiple NICs, iLO, etc. in a small, very quiet form factor. This might be worth looking into.
If you do purchase one and intend on using iLO, please be aware that you need an “enablement card” (approx. $80 USD).
https://buy.hpe.com/us/en/servers/proliant-microserver/proliant-microserver/proliant-microserver/hpe-proliant-microserver-gen10-plus/p/1012241014
Happy to answer questions about this server if you are interested
I was thinking the same thing as you are, but ultimately decided to build a tower instead, mostly to get something quieter and faster. This is roughly what I ended up building (though it seems like the AIO cooler has gotten much more expensive and should be replaced with a cheaper alternative) and I’m very happy with it. It does everything 3-30x faster than my 2017 MacBook Pro.
My Pinebook Pro came a few weeks ago, and I haven’t been able to get it working reliably with an external monitor yet, but again I’m thinking along the same lines as you are. SSH, VNC/RDP/Xpra all work fine, and I can still do things locally if ever needed. EDIT: a Raspberry Pi 4 is a better option for a desktop, since it can drive two 4K monitors. I think I’m going to give up on “docking” the Pinebook.
FWIW, WireGuard ties this all together for me. FreeNAS supports it now, so I can remain connected to NAS and dev workstation at the same IPs while roaming. As an added bonus, you can easily check in on things from your phone/tablet when you’re away from your “real” thin client.
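For anyone curious, the roaming-client side of that is just a keypair and a small config; a sketch, where the interface name, the 10.10.10.0/24 addressing and the endpoint hostname are all assumptions:

```sh
# generate a keypair for the roaming client
umask 077
wg genkey | tee wg-client.key | wg pubkey > wg-client.pub

# minimal client config (config directory varies: /etc/wireguard on Linux,
# /usr/local/etc/wireguard on FreeBSD)
cat > wg0.conf <<'EOF'
[Interface]
PrivateKey = <contents of wg-client.key>
Address = 10.10.10.2/24

[Peer]
PublicKey = <server public key>
Endpoint = home.example.net:51820   # assumed dynamic-DNS name for the home connection
AllowedIPs = 10.10.10.0/24          # the NAS + dev workstation subnet
PersistentKeepalive = 25
EOF

wg-quick up ./wg0.conf
```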
That’s a shame. I was thinking of putting in an order for the next run, but that’d be a showstopper for me.
As great as the pinebook pro is, to my knowledge FreeBSD doesn’t support it yet. OpenBSD and NetBSD both have decent support though. You’ll definitely want to pick up a headphone-jack to usb-serial adapter though, no matter which BSD you end up with.
Also check out Void Linux. It’s a Linux distro that reminds me a lot of OpenBSD’s simplicity; it uses runit instead of systemd, and has a lot of OpenBSD software in its packages. It runs fantastically on ARM64.
I have a selection of rackmount servers at home and can give you a few general recommendations:
A very useful resource is /r/homelab - it’s one of the better corners of Reddit.
Thanks - I’ll check out homelab too. I’ve found a few good corners of Reddit in the past … generally those without sufficiently broad appeal to attract trolls and culture warriors (of all stripes).
I used to have two racks full of servers in the spare bedroom of my two bedroom flat (in Epping, NSW). My electricity bill got to $50,000 per year. Heat was a big problem and the aircon and fans I had weren’t good enough. Noise was a problem too.
I had a fully populated IBM BladeCenter (fun fact: actually from the render farm which rendered Happy Feet the movie) and a bunch of other server class equipment (roughly 50 machines in all). I would advise against this.
If you need on-prem at home, I recommend you check out something like the HPE ProLiant X3421 Gen10 MicroServer, and if you don’t need on-prem, go for cloud. Now, instead of a data center in my bedroom, I have an X3421 running Ubuntu with 14 VirtualBox virtual machines, plus 8 EC2 instances in the cloud.
As for RAID, I would probably recommend either ZFS or MD RAID, i.e. software RAID. I’ve seen lots of smart people botch recovery of both hardware and software RAID over the years, and my current thinking is that if you’re trying to restore a RAID array, you’re losing. My policy for a small office / home office is: if part of the RAID array fails, replace the whole server with new equipment ASAP rather than trying to rebuild the array; if you do try to rebuild it, I reckon there is a high chance you will simply lose your data. (RAID recovery is for doing in data centers, not home offices, IMHO.)
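Either way, the thing that matters most is noticing a failed member promptly so you can act on that policy; both stacks make the health check a one-liner (device and pool names below are illustrative):

```sh
# ZFS: reports "all pools are healthy" when nothing is wrong
zpool status -x

# Linux MD: shows degraded arrays and any rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md0
```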