1.  

    I’m getting my IPv6 tunnel working. So far I’m testing it with a NixOS container using the IP 2001:470:1d:4ee:6b67:96d0:8cb3:7b12. Let me know what the latency is like!

    1.  

      From the UK, native v6 connection from the ISP:

      $ ping6 -c10 2001:470:1d:4ee:6b67:96d0:8cb3:7b12
      PING6(56=40+8+8 bytes) 2a02:8010:62f1:2:615e:4497:76f3:b5c6 --> 2001:470:1d:4ee:6b67:96d0:8cb3:7b12
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=0 hlim=55 time=108.019 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=1 hlim=55 time=111.441 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=2 hlim=55 time=108.110 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=3 hlim=55 time=142.335 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=4 hlim=55 time=113.234 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=5 hlim=55 time=113.888 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=6 hlim=55 time=110.475 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=7 hlim=55 time=118.303 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=8 hlim=55 time=106.419 ms
      16 bytes from 2001:470:1d:4ee:6b67:96d0:8cb3:7b12, icmp_seq=9 hlim=55 time=108.323 ms
      
      --- 2001:470:1d:4ee:6b67:96d0:8cb3:7b12 ping6 statistics ---
      10 packets transmitted, 10 packets received, 0.0% packet loss
      round-trip min/avg/max/std-dev = 106.419/114.055/142.335/10.005 ms
      
    1. 3

      Yes for my own site the RSS feed is the most downloaded thing by far. I don’t include the page content so bandwidth is not a problem but it is weird that many readers are clearly ignoring the TTL and just requesting it over and over.

      1. 4

        Misconfigured readers/central aggregators have been a bane of RSS almost since the beginning. Developers don’t always read specs, or test.

        1. 9

          Even worse, commonly used readers like ttrss adamantly refuse to do the bare minimum, like using ETags. ttrss makes requests every 15 minutes to a feed that really only updates once per day at most. I’d have to convince every one of my readers that uses ttrss to fix their settings, instead of the software being a good net citizen and following the recommendations that make the protocol work better.

          1. 5

            That’s horrific. I would even say antisocial. Shit like this degrades the entire ecosystem — RSS is already kludgy enough as it is without developers petulantly refusing to adopt the few scalability measures available.

            Plus, the developer’s response is childish:

            … get a better hosting provider or something. your sperging out over literal bytes wasted on http redirects on your blog and stuff is incredibly pathetic.

            Pathetic, indeed.

            1. 1

              Tempting to vary on a request header so one could prepend a bad-netizen warning post to the top of the feed for those readers that are being problematic.

              1. 1

                Having HTTP features (cache-control and last-modified) duplicated in the RSS spec is really annoying. Developers don’t want to write a custom cache for RSS feeds. I don’t know why supporting redundant caching measures encoded in XML would make a piece of software better. Why wouldn’t a HTTP cache be sufficient?
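
                An HTTP cache really is all a well-behaved client needs. As a sketch, this is what polite revalidation looks like with curl (the feed URL is a made-up example; --etag-save/--etag-compare need a reasonably recent curl):

                ```shell
                # First poll: save the body and remember the ETag the server sent.
                curl -s --etag-save feed.etag -o feed.xml https://example.com/blog.rss

                # Later polls: send If-None-Match from the saved ETag. An unchanged
                # feed comes back as a tiny 304 Not Modified with an empty body
                # instead of a full download.
                curl -s --etag-compare feed.etag --etag-save feed.etag \
                     -o feed-update.xml https://example.com/blog.rss
                ```

                Clients that don’t do ETags can get much the same effect from If-Modified-Since (curl’s -z flag). No XML-level cache metadata required.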

          1. 2

            Yes, that graph is showing in gigabytes. We’re so lucky that bandwidth is free on Hetzner.

            But it says 300 Mil on the left. And “bytes” on top. So I guess Mil stands for million, and 300 million bytes is 300 decimal megabytes, not gigabytes, unless my math is all wrong. Is my math all wrong?

            1. 1

              You’re correct that 300 million bytes is 300 MB (or around 286 MiB).
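
              In shell arithmetic, for anyone checking along at home (integer division rounds down):

              ```shell
              # 300 million bytes in decimal megabytes (10^6 bytes) and binary
              # mebibytes (2^20 bytes).
              echo $((300000000 / 1000000))   # 300 MB
              echo $((300000000 / 1048576))   # 286 MiB
              ```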

              1. 1

                My bad. I was reading the cloudflare graph when I wrote that. I think I uploaded the wrong image to Twitter. Oops. I’ll fix it.

                1.  

                  I think nevertheless your scale of “this could get expensive” would only be right if you were on a very expensive provider like Google Cloud, or maybe if this were 15 years ago. Hetzner considers 20TB/mo completely normal, and you are nowhere near that! A gigabyte in 2021 is a small thing.

                  Of course, it’s fine to plan far ahead and optimize things, but maybe this will give people the wrong idea that it’s absolutely necessary to put Cloudflare or a caching CDN in front of their website, or cut down RSS feeds, when even at your level of popularity it isn’t really needed.

            1. 1

              You should consider reworking your description fields. You should not be including the full post in the description.

              My website landing page is a feed, and as you can see, it includes all posts I’ve ever made, and remains tiny: http://len.falken.ink . My description fields are 1 sentence, describing my content.

              1. 17

                You should not be including the full post in the description

                Why not? I prefer sites I can read completely in my aggregator (so I don’t have to deal with whatever fonts, colors, and font sizes the “real” site uses). The feed doesn’t need to include every article ever posted, though. The last few are fine. Keeping old articles around (or not) is up to the aggregator.

                1. 6

                  This is exactly why I put the article text in the description. I don’t think readers handle the Mara stickers that well though :(

                  1. -1

                    And that’s why you had problems, you used it for what it was not intended for.

                  2. 1

                    Because it’s for a description, what else do I have to say?

                    1. 5

                      Neither common practice nor the RSS 2.0 spec support your assertion that the description element should only be used for a short description.

                      1. 1

                        It literally says “The item synopsis”….

                        Are we reading the same thing?

                        1. 4

                          I am reading: “An item may also be complete in itself”, which I interpret to mean that the whole post is allowed to be in there.

                          But even if you were technically right, it feels unnecessary and wasteful to require that the user fire up a browser to get the remaining two paragraphs of a three-paragraph post, just because the first one was regarded as the intro/synopsis and is the only one allowed in the feed. When people do that, I always get the sense that they’re forcing you to visit just to inflate their pageview counters.

                          Text is easy to compress. If it is still too much, one can always limit the number of items in the feed, and possibly refer to a sitemap as the last item for the users who really use the feed to learn about everything on your site.

                          1.  

                            If you read the description field, it says what I wrote…

                            I agree with the logic that if you’re including some of the text but then requiring a browser launch to read the rest, it’s a waste.

                            If you’re delivering web content though, you’ll need a browser; you just can’t get around that. On my website I don’t serve web content, only plain text, for the exact purpose you mention: I don’t want my readers to launch a browser to read my content.

                  3. 4

                    You should not be including the full post in the description.

                    Your root-page as feed idea is nifty, and I think there are plenty of scenarios where concise descriptions along those lines make good sense. Still, for probably the majority of blog-like things, the full content of posts in the feed offers a better user experience.

                  1. 3

                    This is a nice essay on how to tune up a site that actually gets enough traffic to need a tune-up. One of the newer developments is that any non-trivial popular site is now effectively required to use a CDN; it’s simply too much of a waste to do otherwise.

                    1. 3

                      The amount of traffic I get continues to surprise me! I was surprised at a few details involving the size of the RSS feed (and how much it probably ended up costing me on the Kubernetes cluster).

                      I also threw Cloudflare into hyper-aggressive mode (as opposed to the slightly aggressive mode it was in previously). Hopefully that should make the bandwidth costs even lower (Hetzner has supposedly unlimited bandwidth, so let’s see if that’s actually true or not). Within a day or two I should have every asset on my site cached in Cloudflare.

                      1. 1

                        Very cool.

                        I went back to self-hosting just after the holidays. I got tired of signing up for “free” platforms and then having the rules change later.

                        I’m on EC2 using CloudFront (and Let’s Encrypt for SSL stuff). Ghost and commento seem to be working fine, and I moved all of my image storage up to the cdn, but I’m still serving text and rss. I don’t know how it’s going to turn out for me. I just installed goaccess for a rudimentary bit of telemetry. I’m interested in following along to see how things are working for you. We definitely should compare notes.

                    1. 3

                      Another thing that helps, at least with marginally well-behaved clients, is to add the header Cache-Control: public, max-age=3600.

                      1. 2

                        I have this:

                        cache-control: public, max-age=86400, stale-if-error=60
                        

                        Is this sufficient? My feed isn’t updated more than once per day.

                        1. 1

                          Is this sufficient? My feed isn’t updated more than once per day.

                          I think that should be plenty! It blows my mind how clients can fall down on simple stuff like this.

                      1. 1

                        Would git’s compression and packing interfere with Borg’s compression and deduplication if you wanted to back up a bunch of bare git repositories?

                        1. 4

                          It wouldn’t interfere per se, but you can’t further compress data that has already been compressed. Borg’s compression won’t be able to make the already-compressed data smaller, and might make it marginally larger because of the overhead of compression headers (although I believe Borg is smart enough not to try to compress data it detects is already compressed). Certainly nothing will break: I’ve used Borg with compression enabled on datasets that include already-compressed data (e.g. video files) without any issues, other than not reducing the total data size very much.
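
                          This is easy to see with gzip. Random bytes stand in for already-compressed data here, since both look like high-entropy noise to a compressor:

                          ```shell
                          # gzip can't shrink high-entropy input, so the container
                          # overhead makes the output slightly larger than the input.
                          head -c 1000000 /dev/urandom > random.bin
                          gzip -c random.bin > random.bin.gz
                          wc -c random.bin random.bin.gz
                          ```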

                          1. 1

                            Compression can increase data size with already-compressed data, but not by a very large factor.

                          2. 1

                            I have not used Borg Backup for this purpose but used other compression programs as part of other backup systems. It doesn’t interfere, just doesn’t help much, as other comments have asserted. Happy to explain more if you like.

                            1. 1

                              Would it make sense to back up git repositories if one uses any of the forges? I’ve stopped backing up any of the .git folders, as I’m assuming/hoping the work will survive on the forges even if something happens to my machine (that’s why I back up only the working tree). Background: we had limited backup space at the company I worked for, and fewer files meant faster backups.

                              1. 1

                                In my case I am hosting a (personal) git forge, so I do throw the backups in borg as well. I have noticed no issues with doing this. I’m still well below my backup cap though.

                              2. 1

                                It might be worth avoiding letting git re-pack too often, although I don’t know if this is accurate. Possibly something you’d like to look into.
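
                                If someone did want to try that, the relevant knobs are ordinary git config settings; a sketch (whether it measurably helps Borg’s dedup is the part I’m unsure about):

                                ```shell
                                # Run inside the repository being backed up. gc.auto=0
                                # disables automatic gc entirely, so packfiles aren't
                                # rewritten behind your back (run `git gc` by hand at a
                                # time of your choosing instead).
                                git config gc.auto 0
                                # Alternatively, keep auto-gc but stop it from
                                # consolidating existing packs into new ones.
                                git config gc.autoPackLimit 0
                                ```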

                              1. 3

                                Unfortunately, I had reliability issues with DigitalOcean’s Kubernetes offering, even on larger instances.

                                How have other people found DigitalOcean’s Kubernetes offering? I was looking at trying it out soon for a project.

                                1. 8

                                  I’ve found it’s a really great way to waste money. I recently moved off of DigitalOcean Kubernetes to a dedicated server running NixOS and I couldn’t be happier with the results.

                                  1. 4

                                    I’ve found it’s a really great way to waste money.

                                    Just to be clear: that’s Kubernetes in general, and not DigitalOcean’s offering in specific?

                                  2. 3

                                    I used it for a couple of months last year and it was working fine, although a patch for the load balancer was required, and I could only find this information buried in a forum post. I hope they have fixed this by now, but I don’t know.

                                    Didn’t have any issues with their API or cluster upgrades. Went for a self hosted solution using Rancher for now, but mostly for cost reasons.

                                    1. 4

                                      That is amazingly tacky. I love it.

                                      1. 1

                                        My, erm, friend asks about the keycaps source…

                                        1. 1

                                          respect :D

                                        1. 1

                                          Probably cooking and writing. Nothing too exciting.

                                          I am also gonna figure out how to migrate over my Gemini server to my new dedi and work on a second part to a short story about commercial space travel.

                                          1. 1

                                            The Language Construction Kit and The Color of Magic

                                            1. 1

                                              The Language Construction Kit is a great overview of how languages are put together, and shows that science fiction authors often have a better grasp of the subject than many of those ‘designing’ their own language.

                                            1. 3

                                              Lua and shell script make for amazing glue languages

                                              1. 3

                                                Fun fact: lead (which pipes used to be made from, hence plumbing) is “luaidhe” in Irish

                                                1. 2

                                                  Same here. Bash -> Lua -> C, where “->” means “if the left operand isn’t enough…”

                                                  1. 2

                                                    same, but any more I tend to favor fennel + shell script

                                                    https://fennel-lang.org/

                                                  1. 2

                                                    I appreciate the reference to Lojban!

                                                    1. 1

                                                      .i mi jbopre .i lo jbopre cu pilno la lojban uicai

                                                    1. 4

                                                      It’s always nice to be able to start again from scratch and make sure all the “cruft” is now reproducible and documented. :)

                                                      Are you using ansible at all or NixOS makes it obsolete?

                                                      Anyway, I always like your posts, thanks for sharing <3

                                                      1. 3

                                                        NixOS absolutely obsoletes Ansible. Plus you don’t need to write yaml.

                                                      1. 3

                                                        Every post I’ve seen from this website is so cringy.

                                                        1. 8

                                                          I disagree, the Nix(OS) related posts have been quite helpful to me.

                                                          1. 4

                                                            Thank you for your feedback. It will be taken into consideration as much as the unique merits of this claim deserve.

                                                            1. 3

                                                              To be fair, this post is marked as “satire”, but yes, it seems low effort. “Learn X in N [time units]” has now become a meme, and I guess it will continue popping up until everyone gets tired of it.

                                                              1. 6

                                                                I was satirizing those posts, yes. It feels low effort because I limited myself to 15 minutes for writing it. I even left in the typos.

                                                            1. 0

                                                              It took me 31 seconds. Am I worthy?

                                                              1. 3

                                                                Yes

                                                                1. 2

                                                                  thanks, I needed that

                                                              1. 1

                                                                I must have missed something in your article. Were you running a single-node Kubernetes cluster off a Hetzner dedicated server and now running most of your services as ordinary processes on NixOS?

                                                                1. 3

                                                                    No, I was using DigitalOcean’s hosted Kubernetes, and now I have a single Hetzner server running services as normal Unix processes on NixOS.

                                                                  1. 2

                                                                      Ahh, I see. Thank you! I somehow missed this in your write-up :D I run my own servers at home with a little server room and rack cabinet: a 3-node Proxmox VE cluster plus a NAS with 10 3.5″ and 4 2.5″ bays. Most of my stuff runs in Docker Swarm clusters (VMs atop Proxmox VE).

                                                                1. 2

                                                                    Until a few years back, I was also running a dedicated Hetzner server. From an availability point of view, this is a bit of a source of stress, as everything is running on a single server that would develop problems from time to time (the most common being a hard disk failure, promptly fixed by Hetzner’s technical team). I am now using several VPSes, as that gives me redundancy. Sure, you don’t get as much memory and CPU for the same price.

                                                                  1. 4

                                                                      Yeah, I’m aware it’s putting a lot of eggs in one basket; however, given that most of the important services are either stateless or excessively backed up, I’m not practically concerned.

                                                                    1. 2

                                                                        most common being a hard disk failure

                                                                        This is why I’m using a KVM host with managed SSD RAID 10 and guaranteed CPU, memory, and network*. Yes, you will always get somewhat more performance from a bare-metal system you own, but I haven’t had to work around a broken disk or system since 2012 on my personal host. I still have enough performance for multiple services and 3 bigger game servers + VoIP. The only downtime I had was for ~1h when the whole node broke and my system got transferred to another host, and I didn’t have to do anything for it. That way I’ve had no problems even with the services that need to run 24/7 or people will notice.

                                                                      *And I don’t mean managed server, that’d be far too expensive. Just something like this.

                                                                    1. 2

                                                                      Have you looked at ZFS Datasets for NixOS? I always do something like this on my boxes.

                                                                      Also, as for pool options for SSD boot pools, here’s what I generally use:

                                                                      zpool create -o ashift=13 -o autoexpand=on -o autotrim=on -O canmount=off -O mountpoint=none -O compression=on -O xattr=sa -O acltype=posixacl -O atime=off -O relatime=on -O checksum=fletcher4 tank /dev/disk/by-partuuid/<UUID>
                                                                      

                                                                      Note that ashift=13 will give you good performance for SSDs, and is the only pool option that can’t be changed after the fact.

                                                                      Then I can set the datasets I want to mount (/, /nix, /var, /home, and others) as canmount=on and mountpoint=legacy. Setting up datasets like this will help you ridiculously for backups (check out services.sanoid). Then of course you can do dedicated datasets for containers and such too.
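
                                                                        Concretely, that layout looks something like this (the pool name tank and the dataset names are just examples):

                                                                        ```shell
                                                                        # The parent dataset only groups things and never mounts itself.
                                                                        zfs create -o canmount=off -o mountpoint=none tank/sys
                                                                        # Leaf datasets use mountpoint=legacy so that NixOS's
                                                                        # fileSystems.* entries (or /etc/fstab) decide where each mounts.
                                                                        zfs create -o canmount=on -o mountpoint=legacy tank/sys/root
                                                                        zfs create -o canmount=on -o mountpoint=legacy tank/sys/nix
                                                                        zfs create -o canmount=on -o mountpoint=legacy tank/sys/var
                                                                        zfs create -o canmount=on -o mountpoint=legacy tank/sys/home
                                                                        ```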

                                                                      Oh, also, get a load of this, which happened on my laptop running a similar ZFS setup while I was working on androidenv and probably had several dozen Android SDKs built in my Nix store:

                                                                      $ nix-collect-garbage -d
                                                                      75031 store paths deleted, 215436.41 MiB freed
                                                                      

                                                                      What’s funny is, after that, I had ~180 GB free on my SSD. Due to ZFS compression of my Nix store, I ended up with more being deleted than could be on my disk…

                                                                      1. 1

                                                                        Would it be a good idea to add that as a cronjob perhaps? What would be the downside?

                                                                        1. 1

                                                                          A normal garbage collection is a great cronjob. The exact command numinit gave deletes old generations, which may be surprising in the worst ways when trying to undo bad configs.

                                                                          1. 1

                                                                            I think you can also set up the Nix daemon to automatically optimize the store. It’s buried in the NixOS options somewhere.

                                                                            1. 1

                                                                              Nice, I didn’t know about that. The setting is nix.gc.automatic, by the looks of it.
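
                                                                                For reference, a sketch of both options in configuration.nix (the schedule and retention window here are arbitrary examples):

                                                                                ```nix
                                                                                {
                                                                                  # Periodically delete unreferenced store paths.
                                                                                  nix.gc = {
                                                                                    automatic = true;
                                                                                    dates = "weekly";
                                                                                    options = "--delete-older-than 30d";
                                                                                  };
                                                                                  # Hard-link identical files in /nix/store to reclaim space.
                                                                                  nix.autoOptimiseStore = true;
                                                                                }
                                                                                ```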

                                                                              1. 1

                                                                                “It’s buried in the NixOS options somewhere” is going to be both a blessing and curse of this deployment model >.>

                                                                                Here’s hoping people document their flakes well.

                                                                      1. 1

                                                                        As of yesterday, I now run my website on a dedicated server in the Netherlands. Here is the story of my journey to migrate […] to this new server. […] This server is an AX41 from Hetzner.

                                                                        Am I misunderstanding something here, or does Hetzner now have servers in the Netherlands? As far as I understand you did migrate to Hetzner, so now you’re on Hetzner? But Hetzner, to my knowledge, has servers in Germany and Finland.

                                                                        1. 3