1. 2

    It would be nice to have a standard API or HTML element or something to make the consent selection uniform across all sites. We could integrate it into the browser settings.

    1. 3

      A request header perhaps…

      The problem is, the sites with the dodgy banners want you to accept their tracking and cookies. It is not in their interest to make opt-out easier.

      1. 4

        Let’s call it Do-Not-Track, but write it as DNT to make it shorter.

    1. 3

      I wasn’t aware iptables could be used to simulate packet loss like this. I would have turned to tc qdisc or whatever the exact command is.

      1. 4

        iptables is only useful for very simplistic loss simulation.

        The iptables statistic module supports random drops and dropping every nth packet, but real-world packet loss is usually correlated and bursty. tc shines when you need a loss model that more accurately matches what you’d see in the wild (and also when you need to simulate delays, duplicated packets, reordered packets, etc.).
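        For reference, here’s roughly what the two approaches look like; the interface name and percentages are examples, not from the discussion above:

        ```sh
        # iptables statistic module: memoryless random drops, or every nth packet
        iptables -A INPUT -m statistic --mode random --probability 0.2 -j DROP
        iptables -A INPUT -m statistic --mode nth --every 5 --packet 0 -j DROP

        # tc netem: 20% loss, each loss 25% correlated with the previous one,
        # so drops arrive in bursts rather than independently
        tc qdisc add dev eth0 root netem loss 20% 25%

        # netem can also simulate latency, jitter and duplicated packets
        tc qdisc change dev eth0 root netem delay 100ms 20ms loss 20% 25% duplicate 1%
        ```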

        1. 2

          For sure, it would be unusual to see 80% packet loss in the real world. More likely, it would flip-flop between 100% and 0%. I was just testing where the limit is. Bursty packet loss would be an interesting follow-up experiment.

          Now thinking about the bursty scenario: leaving out “--retry-delay” and letting curl do its exponential back-off might allow more requests to eventually succeed. But the use case in the blog article is “heartbeat messages to a monitoring service”. Each request tells the server “Checking in – I’m still alive!”. For this use case, if the client experiences severe packet loss, maybe it shouldn’t report itself as healthy. In other words, trying to get the request through at any cost isn’t always the correct thing to do.
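          For illustration, the back-off variant could look something like this (the URL and UUID are placeholders, not from the article):

          ```sh
          # Heartbeat ping relying on curl’s built-in exponential back-off:
          # without --retry-delay, the wait between attempts doubles (1s, 2s, 4s, ...);
          # timeouts caused by packet loss count as transient errors and are retried
          curl --retry 5 --retry-max-time 120 --max-time 10 \
              https://hc-ping.com/your-check-uuid
          ```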

      1. 3

        I can totally relate to the “too low energy to work on meaningful stuff” part:

        I’d spend a long day working at my day job and doing a lot of software development there. Then I wouldn’t have the energy to make real progress on my app, so I’d tell myself that upgrading one of the packages that was out-of-date was a good use of my time.

        I think the solution is finding and arranging dedicated time for the side project. For example, go from full-time to part-time at the day job, and make the side project your other, equally important “day job”.

        1. 1

          Hello, author of Healthchecks.io here! This is interesting: it looks like in certain cases (you are already running Prometheus & friends, and are familiar with setting up alerting rules) it’s a good option. I’m all for using the building blocks you already have!

          One thing I’m curious about is in what situations people would choose to self-host. My current understanding (a guess, really) is that:

          • homelabbers self-host stuff for the fun of it, and for the learning experience. It’s the whole point of having a homelab
          • large companies might need to self-host because of regulatory requirements and policies (“all data must be under our direct control”, “all data must stay in country X”, “we needed a custom, semi-proprietary feature so we are running a patched fork” etc.)

          But for smaller teams and companies with no special requirements, what would motivate self-hosting? Quite a bit of engineering time goes into setting up and maintaining a production-grade service. With SaaS, that cost is amortized across many customers. I’m thinking $16/mo should be an inconsequential expense for a company; even a couple of hours of saved engineering time per month should be worth more than that. Am I wrong in thinking that?

          1. 1

            I can only speak for myself, but a third scenario might be bootstrapped startups (which might be over-represented in forums like this). I agree that $16/mo is inconsequential for a ‘normal’ company, but I try to keep recurring costs down as much as possible, as those add up, and every few hundred bucks I can save counts. So I accept that my monitoring won’t be as reliable as it could be. Since I’m only starting out and working on this full-time (which means I usually spot issues fairly quickly anyway, and the consequences are not as bad as they would be otherwise), and I already have other self-hosted infrastructure around my product, it’s an acceptable trade-off.

            Should I ever achieve a reliable income stream, I’d most likely go for a hosted service instead. Maybe not for all of my self-hosted stuff, but monitoring would be one of the first things I’d outsource, I think.

          1. 2

            Would it be feasible to add a DNS-over-TLS service directly to Pi-hole?

            (I haven’t used Pi-hole, but have toyed with the idea, specifically to block ads on Android.)
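            In principle it seems doable: DNS-over-TLS is just DNS-over-TCP wrapped in TLS, so one hypothetical approach is a generic TLS terminator in front of Pi-hole’s resolver (the certificate path is an assumption, and this isn’t a tested Pi-hole recipe):

            ```sh
            # Terminate TLS on the standard DoT port (853) and forward the
            # plain DNS-over-TCP stream to the resolver on 127.0.0.1:53
            socat "openssl-listen:853,reuseaddr,fork,cert=/etc/ssl/private/dot.pem,verify=0" \
                tcp:127.0.0.1:53
            ```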

            1. 5

              The more often you deploy code, the more comfortable you get with your tools and your process. It’s better if you are in a position to push code changes at any time, and have confidence that the process won’t leave your app in a half-broken state.

              Another thing that takes some pressure off is good unit test coverage. Passing tests don’t guarantee everything will work perfectly, of course, but at least I can have some level of confidence that there are no big obvious blunders in the code I’m shipping.
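              As one hypothetical sketch of a deploy process that can’t leave the app half-updated (the paths, repository URL and service name are made up for illustration):

              ```sh
              set -euo pipefail

              # Build the new version in a fresh directory; any failure
              # aborts the script before production is touched
              RELEASE="/srv/app/releases/$(date +%Y%m%d%H%M%S)"
              git clone --depth 1 https://example.com/app.git "$RELEASE"
              (cd "$RELEASE" && ./run-tests.sh)

              # Atomic cutover: mv -T uses rename(2), so the "current" symlink
              # points at either the old release or the new one, never a mix
              ln -s "$RELEASE" /srv/app/current.new
              mv -T /srv/app/current.new /srv/app/current
              systemctl reload app
              ```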

              1. 2

                Continuous Deployment (CD) also has a nice feedback loop with tests.

                Without CD, developers become more and more cautious about deploying, so deploys get bigger and rarer, until it becomes almost impossible to deploy without regressions.

                With CD, the solution to a regression is instead to write more and better tests.