Threads for mrkaran

    1. 2

      I think there’s a small typo in the output from the first code sample. The output of the second log line should be this:

      time=2009-11-10T23:00:00.000Z level=ERROR msg="something went wrong" err="file does not exist" file=/tmp/abc.txt
      

      More substantively, there seems to be some magic here.

      log.Error("something went wrong", fakeErr, "file", "/tmp/abc.txt")
      

The first input is given the key msg, and the error is given the key err, but the last two are explicitly key=value. I’m sure I can get used to the rules here, but what’s the rationale behind this? I don’t have a lot of experience with structured logging, but go-kit/log, which I’ve used, seems to be more explicit. My immediate reaction is that I prefer more explicit and less magic inference.

      1. 5

That’s been a pretty big point of contention. The current signature uses reflection for the key/value pairs. https://pkg.go.dev/golang.org/x/exp/slog#Logger.Info

        func (l *Logger) Info(msg string, args ...any)
        

There’s an additional LogAttrs method that may be used if more performance is required: https://pkg.go.dev/golang.org/x/exp/slog#Logger.LogAttrs

        func (l *Logger) LogAttrs(level Level, msg string, attrs ...Attr)
        

        A lot of people would prefer

        func (l *Logger) Info(msg string, attrs ...Attr)
        

which would be invoked like this:

        log.Info("something happened", slog.Int("key", 123), slog.String("file", "/tmp/abc.txt"))
        
        1. 1

          Thanks for giving me the background. As my other comments make clear, I am definitely on team explicit. Either way, I’ll look into slog since it’s likely to become a standard choice.

      2. 2

        log.Error has a slot for errors in its signature as a convenience. You can also do log.Log(slog.LevelError, "something went wrong", "err", fakeErr, "file", "/tmp/abc.txt").
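A sketch of how the two forms line up, going by the x/exp/slog API as quoted in this thread (the signatures were still in flux at the time):

package main

import (
	"errors"

	"golang.org/x/exp/slog"
)

func main() {
	log := slog.Default()
	fakeErr := errors.New("file does not exist")

	// Convenience form: Error takes the error positionally and
	// emits it under the standard "err" key.
	log.Error("something went wrong", fakeErr, "file", "/tmp/abc.txt")

	// Fully explicit form: everything, including the error, is an
	// alternating key/value pair.
	log.Log(slog.LevelError, "something went wrong",
		"err", fakeErr, "file", "/tmp/abc.txt")
}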

        1. 2

          Got it. I’d probably need to write more to be sure, but at first glance I prefer the explicit style.

          Currently, using go-kit/log, I have code that looks like this, and I appreciate how explicit it is:

          level.Error(logger).Log(
          	"file", link.File,
          	"URL", link.URL,
          	"status", link.Status,
          	"error", link.Err,
          )
          
          1. 2

            There’s a proof of concept adaptor so you can keep using the go-kit interface but have it send records out to slog:

            https://github.com/jba/slog/blob/main/gokit/gokit.go
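The shape of such an adaptor is small, since go-kit’s Logger interface is just Log(keyvals ...interface{}) error. A rough sketch of the idea (not the linked code verbatim; the real adaptor also maps go-kit’s msg and level keys onto slog’s message and level):

package gokitslog

import "golang.org/x/exp/slog"

// Logger adapts a *slog.Logger to go-kit's
// Log(keyvals ...interface{}) error interface.
type Logger struct{ s *slog.Logger }

func New(s *slog.Logger) Logger { return Logger{s: s} }

// Log forwards the alternating key/value pairs straight through,
// since slog's variadic form accepts the same convention.
func (l Logger) Log(keyvals ...interface{}) error {
	l.s.Info("", keyvals...)
	return nil
}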

      3. 2

        Thanks for pointing out the typo, fixed!

Regarding being explicit: I agree with you. But here, I guess for the sake of better DX, the error is a required argument only when you call .Error(): https://pkg.go.dev/golang.org/x/exp/slog#Logger.Error

I guess I prefer this because the error key is a standardized field, and the function signature expecting an error only for .Error() seems like fair design IMHO. Otherwise you could end up with an inconsistent output format in a large codebase, where some code uses err instead of error, etc.

    2. 9

      Nice article.

      I know that this is an area that could use improvement; it’s quite manual right now. I’d be interested to hear about some kind of light-weight solution for this that people have come up with.

You can look at a simple docker-compose setup, as managing 6-7 containers just with docker run can get messy. If you want to get a little more sophisticated, you can look at writing a simple ansible playbook that templates out the different docker-compose.yml files on the server, and use that to manage things like image updates, config changes, etc.

      1. 10

        To most people, Docker and Ansible are anything but simple.

        1. 6

          Of course it’s relative… to most people, running even a single web app/site is anything but simple.

          But obviously if you can read the article and understand what’s going on, then OP’s suggestion of using docker compose and/or ansible makes sense.

        2. 6

          I say this completely seriously and not intending to throw any shade at the OP. Over the past year I have tried to recognize how and where I use words like “simply” or “just”, particularly when it applies to technical instructions, and kill them off for exactly that reason.

          What I find simple someone else might not. I might find it simple because I have lived it for 10 years and not realize my bias. The person I’m speaking to or writing for might read those words (as you’ve noted) and think “That’s not simple, I can’t do this.” To some, words like those come off as arrogant (even if not intended that way).

          Instead, I now try to explain things in a simple way, if I think it is simple, or give clear examples that step the reader through it. If it really is simple, they can skip the step-by-step and if it isn’t for that reader, they have what they need to start accumulating the experience needed for it to seem simple the next time.

        3. 1

          Sure, but what are the alternatives, genuinely?

          1. 4

Pyinfra is a great replacement for Ansible for these types of tasks. Nomad would also be a great way to orchestrate some containers with little fuss. So would Docker Swarm, in my opinion.

      2. 4

I tried using just docker-compose with a similar project last year, but I found it required a fair amount of maintenance to keep up with deploys.

I’m considering redoing it using Harbormaster (a compose-file watcher) so that updates to the source replicate to the servers.

        https://gitlab.com/stavros/harbormaster

      3. 3

I’m using systemd-docker at the moment to run docker containers in systemd (at work; in my personal life I don’t like docker). I find this very nice, because then I have all services in a standard systemd environment and can fully use dependencies between systemd units and so on. If I remember correctly, systemd also has some built-in container runtime (nspawn?).

    3. 5

Learning org-mode. It seems to be really fun and I am enjoying taking things slow.

    4. 2

I remember seeing Vector a while back but forgot about it. It seems very handy to have just one utility that can combine metrics collection/generation, like what Telegraf does, and log collection, like what Logstash/Filebeat/… do.

Are there any other tools out there that can handle both cases and are worth looking at? A use case I can think of right now is running it in a sidecar container in a Kubernetes pod running Nginx, collecting logs and generating metrics from them at the same time.

      1. 3

I recently found out about https://www.benthos.dev which also seems pretty similar to what Vector achieves. It’s written in Golang (Vector is in Rust).

But for your use case, I think Vector can do the job out of the box.

        1. 1

          Thanks! I’ll check it out.

          1. 1

It turns out Vector and Benthos share a developer, who gave some background on the priorities for each of them here: https://github.com/Jeffail/benthos/issues/359#issuecomment-573438855.

    5. 1

      If you haven’t tried Joplin in a while, consider giving it a second look. I am very impressed with the progress they’ve made. Thanks for the reminder!

      1. 1

        Was that for me? Cause the whole article is on Joplin, lol.

    6. 3

Good stuff. There’s also the https://github.com/VictoriaMetrics/metrics lib, which I suggest to people who want to add metrics exposition to their Go apps but don’t want to pull in a lot of dependencies from the official Prometheus client library.
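For a taste of the API, here’s a minimal sketch of exposing a counter with it (the metric name and port are just made-up examples):

package main

import (
	"net/http"

	"github.com/VictoriaMetrics/metrics"
)

// Hypothetical example metric; name it after your app.
var requestsTotal = metrics.NewCounter(`myapp_requests_total`)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("hello"))
	})
	// Expose everything registered above in Prometheus text format.
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		metrics.WritePrometheus(w, true)
	})
	http.ListenAndServe(":8080", nil)
}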

      1. 3

        Strong -1 — everything written by that guy is sloppy and full of caveats. For example, the Histogram type in this package uses dynamic buckets, making any kind of aggregation — an average, a rate over any dimension, etc. — totally statistically invalid.

      2. 9

There are quite a few big enterprise businesses, which you and most everyone here likely rely on regularly, that are based entirely on Nomad.

9/10 times the error won’t be helpful.

In my experience that’s not true. Something that bothers me about Kubernetes is that it’s rather easy to create silent problems. In fact, that’s something I hunt down a lot in Kubernetes setups, and it’s becoming a skill, but it certainly speaks against Kubernetes. Looking at a clearly non-functional system where everything lights up green can be frustrating. Don’t get me wrong, there were clear mistakes behind those incidents, and they weren’t bugs in Kubernetes. The reality still is that this is a typical problem in Kubernetes setups and not as common in Nomad setups. There are various reasons for that.

I’ve had the chance to work with both Kubernetes and Nomad in big production setups. Both work. But due to the complexity you are far more likely to be the first to hit a given issue on Kubernetes. Well, maybe other than Amazon or Google, but that doesn’t matter, because again, due to the complexities of not just Kubernetes itself but also Operators, once things fail they tend to fail badly.

Another reason for not recommending Kubernetes for big enterprise setups is stability. Kubernetes, for better or for worse, develops rather quickly and has quite a large number of breaking changes. While in a small start-up keeping up is something you can get done with a bit of work (and don’t underestimate that), in a bigger setting I have seen more than one company fall significantly behind on Kubernetes versions.

        With the rise of operators I see that problem only growing.

Most of the mentioned problems don’t exist with Nomad. Sure, there are other things that Kubernetes does better, but based on the last few years of experience, if someone asked me for a recommendation given the current state of both Kubernetes and Nomad, I’d strongly suggest Nomad. Outside of cloud providers, I also know of bigger Nomad setups than Kubernetes setups.

This is something that changed, though; it wasn’t always so. While I have been interested in both for quite some time, Nomad used to have problems, which have since been fixed.

Also, please don’t get me wrong. I’m not arguing against the operator pattern as a whole. I think, however, that the current implementation has a severe problem: people add very custom forms of complexity, which at the very least makes onboarding and long-term maintenance a topic one should not underestimate.

Since Nomad is very thin compared to Kubernetes, I’m really curious about those errors that one would be the first to encounter. Do you have any experience here? I ran into Nomad bugs before, but I clearly wasn’t the first to encounter them, because they were usually already fixed in the upcoming release and migration was trivial.

I also ran into multiple Kubernetes bugs, which were also fixed. However, there were breaking changes along the way.

These are, however, individual day-to-day problems, so they certainly don’t say anything definitive about either piece of software.

        1. 2

Would love to hear some horror stories. Infrastructure in our company has gotten complex enough that I’m now facing the decision of either moving most of the services that can be easily containerized into a Kubernetes cluster, or using Nomad to bring some consistency to the deployment and orchestration stories. I started a migration to Kubernetes once before and the tools blew chunks, so Nomad’s simple concepts and seemingly straightforward conversion path seem very tempting.

I’d love to hear your take on running the Nomad controllers themselves, and on general pitfalls that might help me make a decision (in general, Kubernetes’ most enticing feature is that I don’t have to run the cluster).

          1. 4

            Reading that article ^^, would you have guessed gpu passthrough was a part of the nomad (host) agent configuration?

Yes, and it sounds like a pretty reasonable guess, given that the document presents both the agent configuration and the example job spec making use of it. k8s GPU support also requires a driver deployed to all GPU nodes in addition to the pod spec using it, so your assumption seems like it might be based on someone else having already configured this in your k8s experience.

Given the nature of the horror stories (as such), other comments on this post, and the labeling of apparently any Nomad deployment as “bespoke” (almost satire, given an environment without CRDs, operators, and dozens of vendor-supplied distributions), I’m struggling to see the technical side of your axe to grind (popularity/sales issues, difficulty with documentation, trouble hiring qualified ops).

            1. 6

Sorry, I don’t get how the second story is an inherent Nomad issue. Could you please elaborate? To me it seems that the customer clearly didn’t know what they wanted. The same problem would arise with any system, even if they had started with k8s and migrated to Nomad in the end.

            2. 5

Without knowing details, #2 sounds more like a dumbass customer than a Nomad horror story.

              Then again, if k8s can shield us from dumbass customers, it has much value indeed.

            3. 2

              Can I hire folks with shared experience in my tech stack? If my tech stack needs to be replicated (think Gitlab, ELK, PaaS private cloud offerings), could others do so easily?

              This one hits home. For one, if I decide to go with Nomad I will end up doing most of the DevOps myself because nobody else will be familiar with it (at least someone else in the team is familiar with Kubernetes). Then there’s the requirement of having to launch regional clusters if/when we start working with European customers or one of our customers decides to pay us a ton of money to do an on-premise installation.

              Damn, this alone might be enough to tilt the scales in favor of Kubernetes, despite my complete hatred for managing state using YAML files.

              But do keep going, I think it’ll be good for other people as well!

              1. 7

                if I decide to go with Nomad I will end up doing most of the DevOps myself because nobody else will be familiar with it

K8s is big because everyone is jumping on the hype train without ever questioning whether they need a full-blown K8s cluster or not. Nomad has a minimal learning curve and is pretty easy to get started with. To put it differently: yes, you can find engineers who know how to deploy on K8s, but can you also find engineers skilled enough to debug an obscure issue in K8s under the hood and patch it? Cause those engineers would be awesome anyway, regardless of Nomad/K8s.

It’s a red flag for me when a hiring company limits its choice of people to employ based on frameworks/tools. What’s cool today may not even exist tomorrow.

      3. 2

Did you also use k8s in production, and could you comment on that? I’m just wondering if both are horrible (everything is horrible), or just one.

      4. 4

Have you opened issues and discussed this with the maintainers? Cause I’ve tried memory limits and they do work (on the OSS edition). The others aren’t descriptive enough for me to comment on. Which version of Nomad were you running, and were any of the issues you faced reported on their issue tracker?

        1. 2

          The docs look outdated. Quoting from https://www.nomadproject.io/docs/commands/namespace:

          Namespaces are open source in Nomad 1.0. Namespaces were Enterprise-only when introduced in Nomad 0.7.

          Shall open an issue to fix the tutorial website.

Regarding Quotas on Namespaces, yes, that seems to be Enterprise-only: https://www.nomadproject.io/docs/commands/quota

          Quota commands are new in Nomad 0.7 and are only available with Nomad Enterprise.

But I believe the task will get OOM-killed if its memory usage exceeds the limit defined in https://www.nomadproject.io/docs/job-specification/resources#memory-1

          Shall try this, thanks!

          1. 4

I have the exact opposite experience: memory limits are hard, but CPU limits are the minimum you want to allocate to the task, and it can burst over that if there are spare resources.

Nomad in production since 2018 and still using it.

            1. 2

memory limits are hard, but CPU limits are the minimum you want to allocate to the task, and it can burst over that if there are spare resources

              Yep. Same from my experience.

          2. 2

            It won’t. OSS nomad will not enforce cgroup memory limits

Do you have a source for any of this? Were you maybe using the raw_exec driver rather than exec, or booting without cgroup_enable=memory swapaccount=1 (also a thing for k8s)? There’s no task-driver resource limit feature tied to Enterprise, and this is contrary to other folks’ experience, so the insistence is beginning to sound like FUD.

            1. 1

I have over two years’ worth of OOM-killed services in both staging and production. I don’t quite follow how my experience could be seen as FUD.

              1. 1

I was referring to the comment that limits don’t work. I very much agree with you and, thankfully rarely, see OOMs too, with 5y+ in production.

                1. 1

Oh, sorry. I thought it was the OP (delux), my bad. I was on my phone.

          3. 2

I think they are 2 different things. Quotas apply to an entire Namespace; the individual memory limits of a task are still set in the resources section. Shall confirm it anyway.

  • 13

Excellent write-up! I’ve given a talk on a number of occasions about why Nomad is better than Kubernetes, as I too can’t think of any situations (other than the operator one you mention) where I think Kubernetes is a better fit.

    1. 2

      Hey, yes I’ve definitely seen your talk :D Thanks for the feedback!

    2. 2

      Watched your talk and have some points of disagreement:

• YAML is criticized extensively in the talk (with reason, it’s painful) as being an inherent part of Kubernetes, when in reality it’s optional, as you can use JSON too. And, most importantly, since you can use JSON in k8s definitions, anything that outputs JSON can work as a configuration language. You’re not tied to YAML in k8s, and the results you can get with tools like Jsonnet are way superior to plain YAML here.
• I don’t think that comparing k8s to Nomad is entirely fair, as they are tools designed with different purposes. Kubernetes is oriented toward fixing all the quirks of having networked cooperative systems. Nomad is way more generic and only solves the workload-orchestration part of the equation. As you explained well in the talk, you have to provide your own missing pieces to make it work for your specific use case. In a similar (and intended) example, there are many minimalistic init systems for Linux that give you total freedom and recombination… but systemd has its specific use cases in which it makes sense and just works. The UNIX philosophy isn’t a silver bullet; sometimes having multiple functionalities tied together in a single package makes sense for solving specific, common problems efficiently.
• About the complexity of running a Kubernetes cluster: true, k8s, as it is, is a PITA to administrate and it’s WAY better to externalize it to a cloud provider, but there are projects like the one mentioned in the article, k3s.io, that simplify the management a lot.

One thing we can agree on 100% is that neither Kubernetes nor Nomad should be the default tool for solving every problem, and that we should prefer solutions that are simpler and easier to reason about.

    3. 1

I think you accidentally dropped the link to the talk.

      1. 1

        Fixed, thanks :)

  • 3

Nice writeup, thanks! I’m also thinking of taking the Nomad route, as I would like to stay as far away from k8s as possible.

One question related to the VPN: is that for connecting to internal services when traveling, or do you use the VPN even at home? If the latter, what is the benefit? I’m trying to understand whether I need one if all my machines are at home.

    1. 3

Actually, Tailscale is a mesh network, so only the traffic to IP ranges under the Tailscale CIDR flows via Tailscale.

And yes, I use it at home as well. My server is a DO droplet, so there’s practically no difference whether I’m at home or travelling. But even at home, I have 2 RPi nodes and I just prefer to host anything internal on Tailscale ranges (so that any new device gets access automatically, and I don’t have to fiddle with local IPs or local DNS resolvers). Makes the setup pretty clean :)

  • 1

There’s also https://frozen-lobster.rohanverma.net/, which a friend of mine developed. It archives the top posts for the day.

    Similar to https://www.daemonology.net/hn-daily/ which is for HN.

  • 3

I use https://joplinapp.org/ synced to my Nextcloud instance (WebDAV). Joplin is super awesome: it is quick, search is great, and writing and editing notes in Markdown is natural for me. I’ve used Notion in the past and found it slow; the different components/block system actually gets in the way of composing notes. And Joplin allows me to take a backup of my data seamlessly.

With the tool out of the way, it’s really important to figure out a workflow for yourself as well. There’s no point in finding a good tool and not using it enough (or just forgetting about it in the next few days). Here are a few ways I use Joplin:

    For Knowledge Management:

• Have 2 Notebooks: Work/Personal. (self-explanatory)
• Inside Work, I have multiple sub-notebooks which broadly cover each category of the stuff I do (e.g. Golang/Ops)
• I’ve a Scratchpad sub-notebook in both Work/Personal, and this is actually where I spend most of my time. During the day, or while doing a task, I make it a point to just log down whatever I did so as not to forget it later (if I think it’s worth saving for the future), without particularly caring about grammar, formatting, etc. The idea is to log it down quickly and go back to what you were doing. Depending on the workload during the week, I take the Scratchpad and move the entries to their correct categories and format them nicely. This usually happens twice a week (Wed and Sat) but there’s no fixed rule. The idea is inspired by “Inbox Zero”, so at the end of every week I aim to have a clean Scratchpad, and whatever I’ve learnt during the week goes into the correct categories.

    For Bookmarks: One Notebook with multiple sub notebooks with categories like:

    • Articles
    • UI Inspiration
    • Tech Talks
    • etc..

I use the Joplin Web Clipper Extension, which allows me to save the entire link (as HTML or just the URL) in these notebooks. Each new entry is a new note, which also allows me to take short notes on that particular URL later (like a few notes after watching a tech talk), etc.

I heavily use Tags in all my notebooks, which allows me to have a unified view of the different kinds of stuff I have. For example, the “golang” tag across my Work and Personal notes allows me to see all the “golang” stuff together in one place.

This system isn’t ideal/perfect, and it may not suit you. I didn’t arrive at this workflow on day 1 either; it took many iterations and experimenting with different tools until I settled on this. And now I think I’m fairly satisfied with this approach. Joplin is <3

    1. 3

      Just came here to praise Joplin!

    2. 2

      +1 for Joplin. It has honestly been a Warp Speed productivity boost for my learning and retention.

      Realized I should at least try and add some value :)

I make heavy use of both notebooks and tags. So I have Tech, Household, Gaming notebooks, and about a bazillion tags for every possible attribute, but I can at least restrict my search to the correct sphere, which is especially useful if I know I want a particular thing but can’t successfully retrieve it using a tag search.

  • 1

Can anyone share what the difference is between Netlify and Vercel?

    1. 1

They seem roughly equivalent to me. There are probably a few features that one has over the other, but that comes down to nuance, and to whether there’s something specific you need to do.

  • 2

This post hits home hard. I set up a personal K8s cluster for exactly the same reasons, to learn by doing.

However, the maintenance became a pain point when I had to deploy StatefulSet apps, and K3s on ARM didn’t really have Longhorn support, which was a deal breaker for my cluster setup.

I shifted it gradually to a single-node $10 DO instance, managing all services in Docker containers and configs/DNS via Terraform. The most beautiful part of the stack is automatic SSL from Caddy; it just works out of the box.

    I’m planning to revamp the docs and add a module to help me with deduplicating the configs, but if anyone’s interested: https://github.com/mr-karan/hydra/tree/master/floyd/terraform

  • 1

    Windows release doesn’t seem to work…

    PS C:\Users\zach>  doggo -q mrkaran.dev -t MX -n 1.1.1.1 --debug
    time="2020-12-19T13:25:08+10:00" level=debug msg="initiating UDP resolver"
    time="2020-12-19T13:25:08+10:00" level=debug msg="Starting doggo 🐶"
    time="2020-12-19T13:25:08+10:00" level=debug msg="Attmepting to resolve" domain=mrkaran.dev ndots=0
    time="2020-12-19T13:25:08+10:00" level=error msg="error looking up DNS records" error="dns: domain must be fully qualified"
    
    1. 3

      Oops. I realised the issue. My bad. Will fix it soon.

      P.S. Fixed it with https://github.com/mr-karan/doggo/commit/8d1b6ad9fa205675b86818f0affccd28d2256686.

      You can try v0.1.1 now :)
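For context: the error comes from miekg/dns, which expects fully qualified names (with a trailing dot). The fix boils down to something like this (a minimal sketch, not the exact commit):

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// dns.Fqdn appends the trailing dot if it's missing, so
	// "mrkaran.dev" becomes "mrkaran.dev." before the query.
	fmt.Println(dns.Fqdn("mrkaran.dev"))
}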

      1. 1

        🤌 perfecto!

        PS C:\Users\zach> doggo -q mrkaran.dev -t MX -n 1.1.1.1
        NAME            TYPE    CLASS   TTL     ADDRESS                         NAMESERVER
        mrkaran.dev.    MX      IN      291s    10                              1.1.1.1:53
                                                in1-smtp.messagingengine.com.
        mrkaran.dev.    MX      IN      291s    20                              1.1.1.1:53
                                                in2-smtp.messagingengine.com.
        
  • 4

The optional -- stops option parsing, i.e. it allows arguments to start with -; in the usage line it should come after the options and before the arguments.

    1. 2

      Sorry. I didn’t quite grasp that. You mean in the help text?

      1. 2

Yes, in the help screenshot: https://github.com/mr-karan/doggo/blob/main/www/static/help.png

doggo [query options] [--] [arguments...] would be more correct; the other types of arguments are also not listed.

  • 8

    Looks great!

It’s totally inspired by dog which is written in Rust. I wanted to add some features to it but since I don’t know Rust, I found it a nice opportunity to experiment with writing a DNS Client from scratch in Go myself.

On the other hand, I know Rust. What are the missing features you added? Sounds like a nice opportunity for me to exercise my Rust!

    1. 5
• I work with K8s a lot, and for me “ndots” and “search lists” are two parameters that I play around with a lot when debugging issues. I have to resort to dig or nslookup for them, but that’s something dog could support :) (see the sketch after this list for what these two knobs do)

• A few minor things, like only sending IPv4 traffic, or showing which nameserver was used (although there’s an issue opened on dog for the same, last I checked).

• Even my tool doesn’t support DNSSEC right now, but it’d be great to add that to dog as well. I’m planning to work on it for doggo as the primary feature of the next release.
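To illustrate what ndots/search lists do: they are resolv.conf semantics, where a name with fewer than ndots dots goes through the search domains first. A rough sketch (my own illustration, not dig’s or doggo’s actual code; Kubernetes famously defaults to ndots:5):

package main

import (
	"fmt"
	"strings"
)

// candidates mimics resolv.conf search-list expansion: names with
// fewer than ndots dots go through the search domains first, and
// the literal name is always tried as well.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	relative := strings.Count(name, ".") < ndots
	if !relative {
		out = append(out, name)
	}
	for _, s := range search {
		out = append(out, name+"."+s)
	}
	if relative {
		out = append(out, name)
	}
	return out
}

func main() {
	// Typical in-cluster search list for a pod in namespace "ns".
	search := []string{"ns.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	fmt.Println(candidates("kubernetes.default", search, 5))
}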

      1. 5

You successfully dared me to implement search lists for dog. Let’s see how it goes.

  • 8

Was excited about the project and then saw this. When are we going to stop abusing the term “open source”? If it’s not an OSI-approved license, it’s not OSS in its true essence. /end-rant.

    1. 6

      I agree that it was misleading of the original poster to mention “open source” in their post while not mentioning that Meli’s license is the Business Source License, which is not an open source license (as the license itself proclaims).

      However, I don’t think “OSI-approved” is the ideal criterion for open source software. To give a counter-example, I think that the non-OSI-approved Blue Oak Model License is as much open source “in its true essence” as the OSI-approved MIT License is. For more examples of “OSI-approved” falling short as a standard, see the blog post Don’t Rely on OSI Approval, written by one of the lawyers who wrote the Blue Oak Model License.

      1. 2

While I agree with you that OSI-approved isn’t a “standard” or a definitive list, I have yet to see a legal precedent of the Blue Oak Model License or similar licenses standing up in court.

    2. 6

      I don’t think the abuse is using an unapproved license. The abuse is that the license doesn’t meet the criteria for open source distribution. Approval is useful for other reasons but not necessary to meet the definition.

      I don’t think this license is bad. It certainly won’t stop me from considering usage of this tool.

      But IMO they should choose a different term (“source available”? “shared source”? “will be open source 4 years from now”?) to use when promoting the project.

Calling it “Open Source” makes people believe the distribution meets the OSD conditions, and the license text says very plainly that it doesn’t:

      The Business Source License (this document, or the “License”) is not an Open Source license.

      So it’s easy to understand your disappointment after seeing it promoted as Open Source.

  • 6

I’m curious how dockershim being removed from K8s leads to the conclusion that Docker Inc. as a company is dying. As explained by many, the Kubernetes team took that step to remove the bloat that Docker created in the codebase. But do you think people will stop using the docker CLI altogether and go back to writing 10 lines of bash script to spin up a new container, network, etc.? docker run is a UX layer on top of those containerd commands, and I don’t see why people would stop using it just because K8s decided to remove the “dockershim” module. And how any of this affects Docker Inc., I’m still unable to understand. AFAIK docker the CLI is open source and obviously doesn’t generate any revenue for Docker Inc. (which is what matters when we are talking about a company!)

    1. 5

I think the reason it points in that direction is that there are multiple k8s providers and installers that default to the docker runtime (DigitalOcean managed k8s uses docker as the runtime, and kubespray defaults to it as well but also supports CRI-O). With dockershim going away, where else will docker be used other than developers’ desktops?

Personally, I bit the bullet and was basically forced to switch to podman/buildah due to docker straight up not supporting Fedora 32+ after the kernel change to cgroups v2. Docker Desktop for Mac/Windows is a nice product for running containers on those OSes, but my guess is that that is the only place it will stay relevant. It’s easy enough to have a docker-compatible CLI aliased to docker that doesn’t require the daemon on Linux, etc.

Also, their attempts at monetizing DockerHub kind of paint a “failing” aura over the company. If they can’t make money off of DockerHub, how can they monetize a daemon that runs containers when there are many other equivalent solutions?

      1. 1

        multiple k8s providers and installers that default to the docker runtime (digitalocean managed k8s uses docker as the runtime, and kubespray defaults to it as well but also supports CRI-O). With dockershim going away where else will docker be used other than developers’ desktops?

Similarly, microk8s and k3s have been using containerd since forever.

        With dockershim going away where else will docker be used other than developers’ desktops?

Yep, exactly. It will be used by end developers just the way it is right now. I understand there are more lightweight alternatives for building images (especially ones that don’t require you to run a local daemon) that are more appealing. But not everyone runs K8s, and I think there’s a large market out there of people running standard installations of their software just with docker/docker-compose :)

        1. 2

          I think there’s a large market out there for people running standard installations of their software just with docker/docker-compose

          This is extremely true. I have many friends/colleagues who use docker-compose every day and there is no replacement for it yet without running some monster of a container orchestration system (compared to compose at least).

          I guess my main worry is that docker is a company based on products which they are having an extremely hard time monetizing (especially after they spun off their Docker EE division). I don’t see much of a future for Docker (the company) even if loads of developers use it on their desktops.

          1. 2

docker compose was based on an acquihire of the folks that made fig.sh, and then very little ever happened feature-wise. Super useful tool, and if they’d been able to make it seamless with deployment (which is very hard, it seems) the story might’ve been different.

        1. 1

Yep, I appreciate that they finally made it available for Fedora 32 (after having to tweak kernel args), but many of us had already switched to alternatives.

They still don’t ship repos for Fedora 33 (the current release). After checking the GitHub issue related to supporting Fedora 33, it appears the repo is now live, even though it only contains containerd.

  • 10

For a static website you can just use S3 + CloudFront without needing a compute service. Probably a lot cheaper too.

    1. 3

      Less effort and cheaper. Unless a Rube Goldberg award is your goal. ;)

    2. 3

Well, learning is fun, but this tip reminds me that I should get my tech blog up and running again. Using S3 & CloudFront, probably.

      1. 3

        That’s the great thing about static websites: there are so many possible options for building and hosting them.

        On the subject of containers, I was pleasantly surprised by Netlify’s approach. It will happily spawn containers for you and let you run any custom script in them. The only fixed part is a TOML file where you tell it what to run and what directory to deploy. How the rest of the build process works is up to you.

        [build]
          publish = "build/"
          command = "./netlify.sh"
        

The only annoying part is that it only offers an Ubuntu 16 image.

    3. 2

      Or GitHub Pages/Netlify. Completely free.

      1. 2

True. Both GitHub and GitLab Pages are free options that you can just slap a domain on top of.

      2. 1

        And using existing workflows like peaceiris/actions-gh-pages makes that even easier.

This is what I do for my own website, which is built off this template repository. Click the green “Use this template” button, and you’ve got a static site up and running within a minute.

    4. 2

      You probably need Route53 as well.

      The monthly cost of one of my low traffic webpages is:

      • Domain name: 1,26$
      • AWS Route53: 0.5€
      • AWS S3: 0.01$
      • AWS Cloudfront: 0.01$

      Sum: 1,78$. Most of it is domain costs.

I didn’t do much posting this year, so not many S3/CloudFront costs arose. When I was posting more often, and had to invalidate the cache multiple times (because of corrections, multiple publications in a spree, or testing robots processing the RSS feed), sometimes the combined S3+CloudFront cost reached over 0.1$!

Also, this setup scales basically infinitely (but then costs also rise) and won’t be slashdotted, unlike nginx running on a potato-tier VM.

    5. 1

      If you’d like to row against Big *aaS but prefer the pricing model, I’ve been pretty happy with hosting many of my simple (static and dynamic) sites at nearlyfreespeech.net (for over a decade now, I guess!)

      (Not being a purist, here; I use Big *aaS in my personal infra where it makes sense.)

  • 4

I disabled all notifications on my phone pretty recently (~1 month ago), and only after watching The Social Dilemma at that, but I’m glad I did. I open my phone less often, I’m less anxious, and I feel more in control of my daily life. I never thought it would have such an impact, but I guess you need to try it for yourself.