It’s always nice to be able to start again from scratch and make sure all the “cruft” is now reproducible and documented. :)
Are you using Ansible at all, or does NixOS make it obsolete?
Anyway, I always like your posts, thanks for sharing <3
NixOS absolutely obsoletes Ansible. Plus you don’t need to write YAML.
Have you looked at ZFS Datasets for NixOS? I always do something like this on my boxes.
Also, as for pool options for SSD boot pools, here’s what I generally use:
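The actual option list didn’t survive here, but as a hedged sketch, an SSD boot pool in that spirit might be created roughly like this; ashift=13 comes from the note below, while the compression/atime/xattr/acltype properties, the pool name, and the device path are common placeholder choices rather than numinit’s confirmed settings:

$ # hypothetical sketch, not numinit's exact command; rpool and the device are placeholders
$ zpool create -o ashift=13 \
    -O compression=lz4 -O atime=off \
    -O xattr=sa -O acltype=posixacl \
    -O mountpoint=none \
    rpool /dev/disk/by-id/nvme-example-part2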
Note that ashift=13 will give you good performance for SSDs, and is the only pool option that can’t be changed after the fact.
Then I can set the datasets I want to mount (/, /nix, /var, /home, and others) with canmount=on and mountpoint=legacy. Setting up datasets like this will help you enormously with backups (check out services.sanoid). Then of course you can do dedicated datasets for containers and such too.
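A hedged sketch of that layout, with illustrative dataset names under a hypothetical rpool; because of mountpoint=legacy, the actual mount points then come from your NixOS fileSystems entries (or /etc/fstab), and services.sanoid can snapshot each dataset on its own schedule:

$ # illustrative dataset names under a hypothetical rpool
$ zfs create -o canmount=on -o mountpoint=legacy rpool/root   # mounted at /
$ zfs create -o canmount=on -o mountpoint=legacy rpool/nix    # /nix
$ zfs create -o canmount=on -o mountpoint=legacy rpool/var    # /var
$ zfs create -o canmount=on -o mountpoint=legacy rpool/home   # /home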
Oh, also, get a load of this, which happened on my laptop running a similar ZFS setup while I was working on androidenv and probably had several dozen Android SDKs built in my Nix store:
$ nix-collect-garbage -d
75031 store paths deleted, 215436.41 MiB freed
What’s funny is, after that, I had ~180 GB free on my SSD. Due to ZFS compression of my Nix store, I ended up with more being deleted than could have fit on my disk…
Would it be a good idea to add that as a cronjob perhaps? What would be the downside?
A normal garbage collection is a great cronjob. The exact command numinit gave deletes old generations, which may be surprising in the worst ways when trying to undo bad configs.
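If you do put it in a cronjob, a gentler variant than -d is to only collect generations older than some window, so recent rollback targets stick around; the 14-day figure below is just an example, not something from the thread:

$ # example retention window; keep whatever you'd still want to roll back to
$ nix-collect-garbage --delete-older-than 14d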
I think you can also set up the Nix daemon to automatically optimize the store. It’s buried in the NixOS options somewhere.
Nice, I didn’t know about that. The setting is nix.gc.automatic, by the looks of it. “It’s buried in the NixOS options somewhere” is going to be both a blessing and a curse of this deployment model >.>
Here’s hoping people document their flakes well.
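One way to dig options like that out of a running system, assuming a conventional configuration.nix setup (nixos-option has historically needed extra care with flakes), is to query them directly; the second option name below is what I believe is the separate store-optimisation knob, as opposed to the garbage-collection one mentioned above:

$ # prints the option's current value, default, and description
$ # assumption: a non-flake configuration.nix setup
$ nixos-option nix.gc.automatic
$ nixos-option nix.settings.auto-optimise-store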
Until a few years back, I was also running a dedicated Hetzner server. From an availability point of view, this is a bit of a source of stress, as everything is running on a single server that would get problems from time to time (the most common being a hard disk failure, promptly fixed by the Hetzner technical team). I am now using several VPSes as it gives me redundancy. Sure, you don’t get as much memory and CPU for the same price.
Yeah, I’m aware it’s putting a lot of eggs in one basket; however, given that most of the important services are either stateless or excessively backed up, I’m not practically concerned.
This is why I’m using a KVM host with managed SSD RAID 10 and guaranteed CPU, memory, and network*. Yeah, you will always get somewhat more performance from a bare-metal system you own, but I haven’t had to work around a broken disk or system on my personal host since 2012. I still have enough performance for multiple services and 3 bigger game servers + VoIP. The only downtime I had was ~1h when the whole node broke and my system got transferred to another host, but I didn’t have to do anything for it. That way I haven’t had any problems even with the services that need to run 24/7 or people will notice.
*And I don’t mean a managed server, that’d be far too expensive. Just something like this.
That’s quite the overkill personal server indeed. I’m running my website on a t4g.micro (the free period was just extended for 3 more months!) right now. I used to use an a1 before t4g became a thing, and tried to save money by running the a1 as a spot instance (sometimes it even ran uninterrupted for many months) :D
> As of yesterday, I now run my website on a dedicated server in the Netherlands. Here is the story of my journey to migrate […] to this new server. […] This server is an AX41 from Hetzner.

Am I misunderstanding something here, or does Hetzner now have servers in the Netherlands? As far as I understand you migrated to Hetzner, so now you’re on Hetzner? But Hetzner, to my knowledge, has servers in Germany and Finland.
That’s my bad. I’m deploying this fix: https://github.com/Xe/site/commit/09c726a0c9f66cc56aa13026fcc91e2f18bd9761
I must have missed something in your article. Were you running a single-node Kubernetes cluster off a Hetzner dedicated server, and are you now running most of your services as ordinary processes on NixOS?
No, I was using DigitalOcean-hosted Kubernetes, and now I have a single Hetzner server running services as normal Unix processes on NixOS.
Ahh I see, thank you! I somehow missed this in your write-up :D I run my own servers at home with a little server room and rack cabinet: a 3-node Proxmox VE cluster + a NAS with 10× 3.5″ and 4× 2.5″ bays. Most of my stuff runs in Docker Swarm clusters (VMs atop Proxmox VE).