1. 9

    tldr: There’s a list at the end of the article saying which clients will not support the new CA. If those matter to you, you’re in trouble.

    Keep everything up to date and renew certificates.

    1. 2

      I’ve caught myself staring into nowhere, correlating politics and social structures with architectural problems in software, and how fixing one could translate to the other.

      Or seeing how other mundane stuff translates to a technical concept just because it has a similar name.

      I can’t come up with an example right now, but this 100% happens to me a lot.

      1. 42

        Of course it requires apt! Because not only do we all run Linux, but we all run a specific distribution of Linux with a specific package manager.

        I feel like we’re just skipping over the elephant in the makefile here. Why the hell is a Makefile running apt install in the first place?

        1. 3

          Why not? It’s ensuring the dependencies are in place.

          1. 30

            It’s rather unconventional for the build step to be doing system-wide changes. Having an additional recipe, e.g. make deps, which handled installing the dependencies and could be optionally run would be reasonable, but historically and conventionally make by itself handles the build step for the application, and just the build step.

            1. 8

              In this particular case, a lot of the “build dependencies” aren’t even that: they’re cross-build dependencies, e.g. if you want to create an ARM binary on your amd64 machine. You probably don’t want to do that when you’re compiling it for just yourself though.

              I’m not sure any of those dependencies are actually needed: it installs the SQLite3 headers, but the go-sqlite3 package already bundles those, so they shouldn’t be needed. The only thing you should need is a C compiler (which I assume is included in the build-essential package?)
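
              For illustration, here’s a minimal sketch assuming the project uses mattn/go-sqlite3 (a guess based on the comment; that package vendors the SQLite C amalgamation), showing that a plain cgo build needs no system SQLite headers:

              ```go
              package main

              import (
                  "database/sql"
                  "fmt"

                  // go-sqlite3 bundles the SQLite C sources, so no system
                  // sqlite3-dev headers are required: just cgo and a C compiler.
                  _ "github.com/mattn/go-sqlite3"
              )

              func main() {
                  db, err := sql.Open("sqlite3", ":memory:")
                  if err != nil {
                      panic(err)
                  }
                  defer db.Close()

                  var version string
                  if err := db.QueryRow("select sqlite_version()").Scan(&version); err != nil {
                      panic(err)
                  }
                  fmt.Println("sqlite", version) // builds with: CGO_ENABLED=1 go build
              }
              ```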

              None of this may be immediately obvious if you’re not very familiar with Go and/or compiling things in general; it’s adding a lot of needless friction.

              That Makefile does a number of other weird things: it runs the clean target on build, for example, which deletes far more than you might expect, such as DB, config, and log files, among other things. That, much more than the apt-get usage, seems like a big 😬 to me. It can really destroy people’s data.

              1. 2

                Having an additional recipe, e.g. make deps, which handled installing the dependencies and could be optionally run would be reasonable

                That’s what I meant: it’s a reasonable way of running apt within make. I didn’t mean as the default procedure when running make.

                EDIT: I know that historically and conventionally make doesn’t do that, but, you know, it’s two lines, it’s about getting dependencies required for building… I don’t think it’s that much of a stretch.

                1. 3

                  Oh yeah, definitely. If it’s there but just not the default, that’s great and I’d totally +1 it. It’s handy!

                  Just please no system-wide changes by running just make :(

              2. 3

                It only does that on one particular flavor of Linux. Even if we ignore the BSDs, not everyone runs a Debian derivative.

                1. 1

                  Is it typically the job of a Makefile to evaluate dependencies (maybe) and install them (probably not)?

              1. 11

                I can understand people assuming apt exists on the system, because:

                • Most of the time they’ll be correct
                • People who don’t have apt will probably know how to find the equivalent packages in their equally bad Linux package manager of choice.

                I can also understand people using a SQLite implementation for Go that doesn’t depend on CGo, because CGo has its own issues.

                Everything is hot garbage, whether you’re on Ubuntu or not. Don’t expect a gazillion scripts that install all the dependencies in every package manager imaginable; none of them is good enough to deserve that much attention, so it won’t happen. At least apt is popular.

                That’s one reason Docker is so popular: it’s easier to work on top of an image with a specific package manager that will not change between machines. The distribution or the equally bad Linux package manager of choice doesn’t matter as long as you’re on Linux and have Docker. And Dockerfiles end up being a great resource for learning all the quirks required to make the code work.

                And finally:

                First, please stop putting your binaries in /app, please, pretty-please? We have /usr/local/bin/ for that.

                Never. Linux standard paths are a tragedy and I will actively avoid them as much as possible. It’s about choices and avoiding monoculture, right?

                1. 10

                  Linux standard paths are a tragedy and I will actively avoid them as much as possible. It’s about choices and avoiding monoculture, right?

                  No, it’s about writing software that nicely integrates with the rest of the chosen deployment method/system, not sticking random files all over the user’s machine.

                  1. 22

                    In this example /app is being used inside the Docker image. It is most definitely not sticking random files all over the user’s machine.

                    1. 4

                      This hits a slightly wider point with the article - half the things the author is complaining about aren’t actually to do with Docker, despite the title.

                      The ones that are part of the Docker build… don’t necessarily matter? Because their effects are limited to the Docker image, which you can simply delete or rebuild without affecting any other part of your system.

                      I understand the author’s frustration - I’ve been through trying to compile things on systems the software wasn’t tested against, it’s a pain and it would be nice if everything was portable and cross-compatible. But I think it’s always been a reality of software maintenance that this is a difficult problem.

                      In fact, Docker could be seen as an attempt to solve the exact problem the author is complaining about, in a way which works for a reasonable number of people. It won’t work for everyone and there are reasons not to like it or use it, but I’d prefer to give people the benefit of the doubt instead of ranting.

                      Speaking of ranting, this comment’s getting long - but despite not really liking the tone of the article, credit to the author for raising issues and doing the productive work as well. That’s always appreciated.

                      1. 3

                        OP here. Aww, thank you! Yes, as noted in the disclaimer at the top, I was very frustrated! Hopefully I ported it all; now I’m trying to clean up some code so I can make patches.

                        According to commenters on the Yellow Site, it’s not wise to “just send a patch” or “rant”; they say it’s better to open an issue first. Which honestly I still don’t understand. Can someone explain that to me?

                        As an open-source maintainer, I like only two types of issues: 1) here’s a bug, 2) here’s a feature request and how to implement it. But if someone made an issue saying “Your code is not running on the latest version of QNX”, I would rather see “Here’s a patch that makes the code run on QNX”.

                        Regardless, I tried an experiment and opened a “discussion issue” in one of the tools, hoping for the best.

                        1. 3

                          According to commenters on the Yellow Site, it’s not wise to “just send a patch” or “rant”; they say it’s better to open an issue first. Which honestly I still don’t understand. Can someone explain that to me?

                          Receiving patches without prior discussion of the scope/goals can be frustrating, since basic communication easily avoids unnecessary extra work for maintainers and contributors alike. Maybe a feature is already being worked on? Maybe they’ve had prior conversations on the topic that you couldn’t have seen? Maybe they simply don’t have the time to review things at the moment? Or maybe they won’t be able to maintain a certain contribution?

                          Also, for end-users, patches without a linked issue can be a source of frustration. Usually the MR contains discussion of the code/implementation details, while the issue holds the conversation around the goals of the implementation.

                          Of course it always depends; if you’re only contributing minor improvements/changes, a discussion is often not needed.

                          Or in other words, as posted on ycombinator news:

                          Sending patches directly is a non-collaborative approach to open source. Issues are made for discussion, and PRs for resolutions; as a matter of fact, some projects state this explicitly, in order not to waste maintainers’ time with unproductive PRs.

                      2. 3

                        Exactly

                        1. 2

                          This is only a mediocre example, because with Go there should only be one binary (or a handful of them) - but yes, if you put your software in a container, I am very happy if everything is in /app, and if I want to have a look I don’t have to dissect /etc, /usr/local, and maybe /var. If the container is the sole point of packaging one app, I see no downside to ignoring the FHS and putting it all together. There’s a reason most people do that for custom-built software (as opposed to “this container just does apt-get install postgresql-server”, where I would expect the stuff to be where the distro puts it).

                    1. 2

                      Isn’t this experience nearly the same with every SaaS that allows some degree of customization? In all those, customization is an afterthought, and truly painful to work with.

                      1. 5

                        How ironic that Xamarin, originally designed for .NET applications on Linux, is not for Linux anymore. I wonder what de Icaza would think of that.

                        1. 5

                          Considering how the Linux community treated Mono, who can blame them?

                          1. 1

                            I’m not sure who would have backed Mono on Linux. A permanent second-class citizen, living in the shadow of an uncooperative giant. Icaza’s long-term vision was always unclear to me, but history shows what it was: assimilation, annihilation ;)

                            1. 4

                              I have a different view of it, informed by rms’ fatwa against it that became the closest the Linux community got to an angry mob (e.g. screaming about Mono-based applications on distro CDs, trying to cancel a Debian developer for packaging it, etc.).

                              Microsoft in the 2000s seemed fairly cool towards it (e.g. adding Unix to the OS enum), but they were too Windows-focused to promote such a thing. I understand why Mono had to wither away for MS’ own push for .NET on Linux (Windows users didn’t think it was viable, Linux users had prejudice), but I’m sad that a good project spent years in the weeds because of it.

                              1. 3

                                I understand why Mono had to wither away for MS’ own push for .NET on Linux (Windows users didn’t think it was viable, Linux users had prejudice), but I’m sad that a good project spent years in the weeds because of it.

                                I don’t think this is quite what’s happened. One of the goals of .NET 5 was to merge the Mono and .NET Core codebases. Various bits of Mono infrastructure are used in the Xamarin components.

                                This makes me somewhat sad because Mono was very portable but the .NET Core-derived bits are Linux/Windows/macOS and often lose *BSD/whatever support that Mono has had for ages.

                          2. 1

                            Too much money to care about that

                          1. 16

                            If you’re building a new app today, the kind of stuff you will need from day one for your service (excluding data persistence) is some load balancing and blue/green deployments. Shoving in a full Kubernetes cluster just for that is really overkill.

                            And you’re relying on the Cloud(tm) to have a working Kubernetes cluster within a few clicks anyway. If you’re using AWS, just get Elastic Beanstalk or any equivalent that gets a Docker container up and running, load-balanced and easily updated using some script.

                            All this while remaining cloud-agnostic, too. It’s just a Docker container; literally every big cloud provider has some sort of “deploy a container as a service”, and as a last resort you can have a VPS with Docker installed in no time.

                            1. 5

                              We migrated our ~10 services from Beanstalk to ECS in two weeks. Beanstalk deploys started to fail once we got to 1000 containers, for mysterious, AWS-internals-related reasons that required asking AWS support to un-wedge things on their end. The migration was pretty painless since everything was already containerized.

                              If your service is stateless, runs in a container, and speaks HTTP, it’s pretty easy to move between the different orchestrators. Your deploy step needs to write out a slightly different deploy JSON/YAML/… and call a slightly different orchestrator CLI, and maybe you need to do some one-time load-balancer reconfiguration in Terraform. Far easier than getting apps that were used to stateful deploys on bare boxes with tarballs into containers in the first place.

                              1. 3

                                I’ll add my own anecdote: I migrated a stack of 5 services, with secrets and other related things, from EKS to ECS in ~3 days. The biggest single obstacle I ran into is that ECS’s support for secrets isn’t as featureful as k8s’s; specifically, ECS doesn’t have a built-in way to expose secrets in the container filesystem. But I found an AWS sample showing how to achieve this using a sidecar container; here’s my fork of that code.

                              2. 1

                                AWS is really the odd one out, though; I once had one Kubernetes description instantiated on four different clouds (Google GKE, DigitalOcean, MS Azure, IBM Bluemix) with only the cluster names and keys changed.

                              1. 1

                                Using a full programming language for configuring k8s works for complex scenarios, or scenarios which involve a “systems team” providing an API for a “development team” so they can configure their services on the organization’s infrastructure while staying abstracted away from it.

                                But I wouldn’t use Go for that; I think it’s kinda overkill and I don’t think it’s a good language for the task. You’ll want rich data-manipulation mechanisms, and Go is lacking those.
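
                                To illustrate that, here’s a minimal sketch (standard library only, all names illustrative) of emitting a bare-bones Kubernetes Deployment as JSON from plain Go; the ceremony around nested maps is exactly the kind of data manipulation Go makes verbose:

                                ```go
                                package main

                                import (
                                    "encoding/json"
                                    "os"
                                )

                                // deployment builds a minimal Kubernetes Deployment as nested maps.
                                // Compare the ceremony here with a few lines of Jsonnet or YAML.
                                func deployment(name, image string, replicas int) map[string]interface{} {
                                    labels := map[string]interface{}{"app": name}
                                    return map[string]interface{}{
                                        "apiVersion": "apps/v1",
                                        "kind":       "Deployment",
                                        "metadata":   map[string]interface{}{"name": name, "labels": labels},
                                        "spec": map[string]interface{}{
                                            "replicas": replicas,
                                            "selector": map[string]interface{}{"matchLabels": labels},
                                            "template": map[string]interface{}{
                                                "metadata": map[string]interface{}{"labels": labels},
                                                "spec": map[string]interface{}{
                                                    "containers": []interface{}{
                                                        map[string]interface{}{"name": name, "image": image},
                                                    },
                                                },
                                            },
                                        },
                                    }
                                }

                                func main() {
                                    enc := json.NewEncoder(os.Stdout)
                                    enc.SetIndent("", "  ")
                                    // Pipe the output into: kubectl apply -f -
                                    if err := enc.Encode(deployment("web", "nginx:1.25", 2)); err != nil {
                                        panic(err)
                                    }
                                }
                                ```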

                                1. 13

                                  Excellent write-up! I’ve given a talk on a number of occasions about why Nomad is better than Kubernetes, as I too can’t think of any situations (other than the operator one you mention) where Kubernetes is a better fit.

                                  1. 2

                                    Hey, yes I’ve definitely seen your talk :D Thanks for the feedback!

                                    1. 2

                                      I watched your talk and have some points of disagreement:

                                      • YAML is criticized extensively in the talk (with reason, it’s painful) as being an inherent part of Kubernetes, when the reality is that it’s optional: you can use JSON too. And, most importantly, since you can use JSON in k8s definitions, anything that outputs JSON can work as a configuration language. You’re not tied to YAML in k8s, and the results you can get with stuff like Jsonnet are way superior to plain YAML here.
                                      • I don’t think comparing k8s to Nomad is entirely fair, as they are tools designed for different purposes. Kubernetes is oriented toward fixing all the quirks of having networked cooperative systems. Nomad is way more generic and only solves the workload-orchestration part of the equation. As you well explained in the talk, you have to provide the missing pieces yourself to make it work for your specific use case. In a similar (and intended) example, there are many minimalistic init systems for Linux that give you total freedom and recombination… but systemd has its specific use cases in which it makes sense and just works. The UNIX philosophy isn’t a silver bullet; sometimes having multiple functionalities tied together in a single package makes sense for solving specific, common problems efficiently.
                                      • About the complexity of running a Kubernetes cluster: true, k8s as it is is a PITA to administer and it’s WAY better to externalize it to a cloud provider, but there are projects like the one mentioned in the article, k3s.io, that simplify management a lot.

                                      One thing we can agree on 100% is that neither Kubernetes nor Nomad should be the default tool for solving any problem, and that we should prefer solutions that are simpler and easier to reason about.

                                      1. 1

                                        I think you accidentally dropped the link to the talk.

                                        1. 1

                                          Fixed, thanks :)

                                      1. 5

                                        It could be a huge marketing move for Microsoft to look cool, and it wouldn’t hurt their profits much, as it’s all about Azure now.

                                          1. 3

                                            Updated: Apr 11, 2019

                                            1. 3

                                              2020 doesn’t help the case. MS carried on selling Xbox, Windows, and Dynamics, and renting LinkedIn to recruiters. Intelligent Cloud went up, but Microsoft is far from “all about Azure now”.

                                        1. 2

                                          It will be the year of Notion once their API is publicly available and integrations start to arrive.

                                          Sounds depressing, I know, it’s the industry we live in.

                                          1. 1

                                            I wish they’d just make Android widgets already… and optimize startup time on Android more.

                                          1. 4

                                            The perfect dev environment doesn’t exist, but we should have high standards for ours and try to improve our workflows, not just accept the given tools and configurations.

                                            Also, whatever dev environment you choose, just provide scripts or instructions for getting dependencies, running tests, and building for production.

                                            1. 5

                                              I recently built my personal Kubernetes cluster, but with some “tricks” to make it easier to reason about and maintain.

                                              • Using k3s eases things up, because bringing up a new node is a single command. Memory usage is also lower, and it comes with Helm and a Traefik Ingress included.
                                              • Thanks to its simple installation process, there’s no Terraform code or similar, because I can manually run the command myself.
                                              • Configuration is written as Jsonnet scripts, with a function “App” that creates the typical setup of deployment, service, ingress, and certificate given an image, a domain, and the needed environment variables. This is the configuration.
                                              • It’s only one Hetzner machine; it’s easier and reasonable for my workloads to just use a bigger machine instead of having multiple nodes. (And you could ask: why in hell are you using kube for a single node? Well, it’s simpler than a bunch of bash scripts glued together, and it has cool stuff like zero-downtime deployments included.)
                                              • The only other thing on the machine is a PostgreSQL database that serves as the only state storage for the apps, so I limit myself to apps that require nothing other than Postgres. Maybe in the future I’ll add some disk storage in the cluster, or something like a self-hosted S3 API.
                                              • Managing the cluster (kubectl and so on) is done by running kubectl proxy on the server and opening an SSH tunnel to a local port. There’s no need to deal with certificates, and it’s done with a simple script.

                                              It’s honestly the simplest approach I could find to having something like my “personal cloud”.

                                              Edit: Added a link

                                              1. 1

                                                Thanks for showing your config and including all shell scripts in the repo!

                                              1. 3

                                                I did something pretty similar during the interview process for the company I’m working at. The exercise was to make a chat-like TUI with some specific rules.

                                                Being chat-like, it didn’t use any ncurses-like craziness and was limited to writing chars and newlines to stdout and accepting commands, so it was really easy to test the behavior in an end-to-end fashion, because the test could run the final binary in a child process and pipe stdin/stdout. I built some test utilities to write to stdin and read from stdout, splitting on newlines, and called it a day.
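
                                                Something like this minimal Go sketch (the binary name and the commands are hypothetical) is all the harness such a test needs:

                                                ```go
                                                package chat_test

                                                import (
                                                    "bufio"
                                                    "fmt"
                                                    "io"
                                                    "os/exec"
                                                    "testing"
                                                )

                                                // harness wraps the compiled binary with line-based stdin/stdout helpers.
                                                type harness struct {
                                                    cmd   *exec.Cmd
                                                    stdin io.WriteCloser
                                                    out   *bufio.Scanner
                                                }

                                                func start(t *testing.T) *harness {
                                                    t.Helper()
                                                    cmd := exec.Command("./chat") // hypothetical binary under test
                                                    stdin, err := cmd.StdinPipe()
                                                    if err != nil {
                                                        t.Fatal(err)
                                                    }
                                                    stdout, err := cmd.StdoutPipe()
                                                    if err != nil {
                                                        t.Fatal(err)
                                                    }
                                                    if err := cmd.Start(); err != nil {
                                                        t.Fatal(err)
                                                    }
                                                    return &harness{cmd: cmd, stdin: stdin, out: bufio.NewScanner(stdout)}
                                                }

                                                func (h *harness) send(t *testing.T, line string) {
                                                    t.Helper()
                                                    if _, err := fmt.Fprintln(h.stdin, line); err != nil {
                                                        t.Fatal(err)
                                                    }
                                                }

                                                func (h *harness) expect(t *testing.T, want string) {
                                                    t.Helper()
                                                    if !h.out.Scan() {
                                                        t.Fatalf("expected %q, got EOF", want)
                                                    }
                                                    if got := h.out.Text(); got != want {
                                                        t.Fatalf("got %q, want %q", got, want)
                                                    }
                                                }

                                                func TestJoin(t *testing.T) {
                                                    h := start(t)
                                                    defer h.cmd.Process.Kill()
                                                    h.send(t, "/join #general")    // hypothetical command
                                                    h.expect(t, "joined #general") // hypothetical reply
                                                }
                                                ```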

                                                With a static-site generator of mine, I did some tests that compared the file hierarchy and contents of an “expected” folder with the actual generated folder, and now I can confidently update any npm dependency without fear.
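
                                                The technique is language-agnostic; sketched here in Go for illustration (the paths are hypothetical), it’s little more than a directory walk plus a byte comparison:

                                                ```go
                                                package site_test

                                                import (
                                                    "bytes"
                                                    "io/fs"
                                                    "os"
                                                    "path/filepath"
                                                    "testing"
                                                )

                                                // listFiles returns the relative paths of all regular files under root.
                                                func listFiles(t *testing.T, root string) []string {
                                                    t.Helper()
                                                    var files []string
                                                    err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
                                                        if err != nil {
                                                            return err
                                                        }
                                                        if !d.IsDir() {
                                                            rel, _ := filepath.Rel(root, path)
                                                            files = append(files, rel)
                                                        }
                                                        return nil
                                                    })
                                                    if err != nil {
                                                        t.Fatal(err)
                                                    }
                                                    return files
                                                }

                                                func TestGeneratedSiteMatchesGolden(t *testing.T) {
                                                    expected, actual := "testdata/expected", "public" // hypothetical paths
                                                    want, got := listFiles(t, expected), listFiles(t, actual)
                                                    if len(want) != len(got) {
                                                        t.Errorf("file count: want %d, got %d", len(want), len(got))
                                                    }
                                                    for _, rel := range want {
                                                        wantBody, err := os.ReadFile(filepath.Join(expected, rel))
                                                        if err != nil {
                                                            t.Fatal(err)
                                                        }
                                                        gotBody, err := os.ReadFile(filepath.Join(actual, rel))
                                                        if err != nil {
                                                            t.Errorf("%s: missing from generated output", rel)
                                                            continue
                                                        }
                                                        if !bytes.Equal(wantBody, gotBody) {
                                                            t.Errorf("%s: contents differ", rel)
                                                        }
                                                    }
                                                }
                                                ```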

                                                Writing e2e tests for relatively simple tools like these is really easy and should be more common!

                                                1. 18

                                                  I’ve been reading the Gemini specification, as well as this post, and my conclusion is that it’s just a worse version of HTTP 1.1 and a worse version of Markdown.

                                                  1. 6

                                                    worse version of HTTP 1.1

                                                    Strong disagree. TLS + HTTP 1.1 requires performing an upgrade dance involving quite a few extra steps. The specification is also pretty empty regarding SNI management. Properly implementing that RFC is pretty demanding. There are also a lot of blind spots left to the implementer’s better judgement.

                                                    In comparison, the Gemini TLS connection-establishment flow is more direct and simpler to implement.
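
                                                    For a sense of scale, here’s a minimal sketch of a complete Gemini request in Go (certificate handling simplified; real clients do trust-on-first-use instead of skipping verification):

                                                    ```go
                                                    package main

                                                    import (
                                                        "crypto/tls"
                                                        "fmt"
                                                        "io"
                                                        "os"
                                                    )

                                                    func main() {
                                                        // One TLS connection, straight to port 1965; no upgrade dance.
                                                        conn, err := tls.Dial("tcp", "gemini.circumlunar.space:1965", &tls.Config{
                                                            InsecureSkipVerify: true, // trust-on-first-use goes here instead
                                                            MinVersion:         tls.VersionTLS12,
                                                        })
                                                        if err != nil {
                                                            fmt.Fprintln(os.Stderr, err)
                                                            os.Exit(1)
                                                        }
                                                        defer conn.Close()

                                                        // The entire request is one line: the absolute URL, then CRLF.
                                                        fmt.Fprintf(conn, "gemini://gemini.circumlunar.space/\r\n")

                                                        // The response is a status line ("20 text/gemini\r\n") then the
                                                        // body, terminated by the server closing the connection.
                                                        io.Copy(os.Stdout, conn)
                                                    }
                                                    ```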

                                                    TLS aside, you mentioning

                                                    I’ve been reading the Gemini specification

                                                    sounds like a clear win to me. The baseline HTTP 1.1 RFC is already massive, let alone all the extensions required to work in a modern environment.

                                                    1. 7

                                                      I agree that the specification is simple to read, but the specification itself is too limited, and I don’t find it suitable for the real world.

                                                      For example, I prefer HTTP’s optional end-to-end encryption, because when working with internal routers within an infrastructure, dealing with certificates is a PITA and a completely unnecessary bump in complexity and performance overhead when you’re inside an already-secured network.

                                                      I also disagree that “extensibility is generally a bad idea”, as the article says. Extensibility can work if you do it properly, like anything else in software engineering.

                                                      EDIT: Or the requirement of closing the connection and re-opening it with every request, and all the handshakes that implies.

                                                      For clarity about what I think could be an actual improvement: I would prefer an alternative evolution of HTTP 1.0, with proper readable specs, test suites, clearer HTTPS upgrade paths, etc., instead of an evolution of Gopher.

                                                      1. 4

                                                        TLS + HTTP just requires connecting to port 443 with TLS. I’ve worked on lots of things using HTTP for >20 years and I don’t think I’ve ever seen the upgrade protocol used in real life. Is it commonly used by corporate proxies or something like that?

                                                      2. 6

                                                        When I looked at it (months ago), I got the same impression. I find this article irresponsible, as Gemini does not merit the support.

                                                        Gemini’s intentions are good. The specification isn’t. For instance, not knowing the size of an object before receiving it makes it a non-starter for many of its intended purposes.

                                                        This is an idea that should be developed properly and openly, allowing for input from several technically capable individuals. Not one person’s pet project.

                                                        I’ll stick to Gopher until we can actually do this right. Gemini doesn’t have the technical merit to be seen as a possible replacement to Gopher.

                                                        1. 3

                                                          It does accept input from individuals. I was able to convince the author to expand the number of status codes, to use client certificates (instead of username/password crap) and to use the full URL as a request.

                                                        2. 4

                                                          I prefer to think of Gemini as a better version of Gopher with a saner index-page format.

                                                        1. 7

                                                          In scenarios which require finer control over which tools are available, sure, let’s use containerd (for example) instead of Docker on our production machines for running Kubernetes.

                                                          But, sometimes, “monolithic” tools make sense. I want to use containers in my development workflow, which has a lot of requirements (running, building, inspecting…). What do I need? Just Docker. It’s a no-brainer. And thanks to the OCI specification, that setup can generate images that run in production with a different set of tools.

                                                          People tend to call stuff “monolithic” as if it were an obviously bad thing, but such tools exist because, sometimes, it just makes sense to have a set of functionalities tied together in a single package that is easier to reason about than a multitude of different packages with their different versions.

                                                          1. 4

                                                            I would be more sympathetic to this argument if Docker wasn’t a gigantic pain in the ass to use for local development.

                                                            1. 3

                                                              I agree. Docker belongs on the CI server, not on your laptop. (Or in prod, for that matter.)

                                                              1. 1

                                                                how’s that?

                                                                1. 3
                                                                  1. It’s slow
                                                                  2. It’s a memory hog
                                                                  3. It makes every single thing you want to do at least twice as complicated
                                                              2. 2

                                                                But, sometimes, “monolithic” tools make sense

                                                                I would even say that it’s the correct approach for innovation, right after R&D and through product differentiation. They went through that quite well. Docker’s problem is no longer an architecture or implementation problem; it’s more that their innovation has evolved into a commodity.

                                                              1. 3

                                                                 How does this compare to Paseto? Branca just seems more limited and opinionated.

                                                                1. 2

                                                                   I think this is meant to be more like itsdangerous than Paseto; similar to Paseto, it is meant to be algorithmically simple (XChaCha20-Poly1305 for authenticated encryption with associated data, whereas Paseto supports XChaCha20-Poly1305 + BLAKE2 (for nonce-misuse resistance) and Edwards curves via Ed25519), but unlike Paseto, it doesn’t have a tie-in to a JSON-like structure.
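
                                                                   For reference, the shared primitive is available in Go’s x/crypto; here’s a minimal sketch of just the AEAD step (this is not the actual Branca token format, which additionally binds a version byte and timestamp and base62-encodes the result):

                                                                   ```go
                                                                   package main

                                                                   import (
                                                                       "crypto/rand"
                                                                       "fmt"

                                                                       "golang.org/x/crypto/chacha20poly1305"
                                                                   )

                                                                   func main() {
                                                                       // 32-byte symmetric key; load from secure storage in practice.
                                                                       key := make([]byte, chacha20poly1305.KeySize)
                                                                       if _, err := rand.Read(key); err != nil {
                                                                           panic(err)
                                                                       }

                                                                       aead, err := chacha20poly1305.NewX(key) // XChaCha20-Poly1305
                                                                       if err != nil {
                                                                           panic(err)
                                                                       }

                                                                       // 24-byte nonce: large enough that random generation is safe.
                                                                       nonce := make([]byte, aead.NonceSize())
                                                                       if _, err := rand.Read(nonce); err != nil {
                                                                           panic(err)
                                                                       }

                                                                       ciphertext := aead.Seal(nil, nonce, []byte("payload"), nil)
                                                                       fmt.Printf("%x\n", ciphertext)

                                                                       plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
                                                                       if err != nil {
                                                                           panic(err)
                                                                       }
                                                                       fmt.Println(string(plaintext))
                                                                   }
                                                                   ```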

                                                                1. 3

                                                                  None at all. Let your work speak for you.

                                                                  1. 6

                                                                    The loading animations in that webpage should be considered straight up terrorism.

                                                                    1. 7

                                                                      I’m pretty sure they are intentionally bad. Ted has been a long-standing advocate of no-JS.

                                                                      1. 3

                                                                        Then that’s an excellent and awful job at the same time

                                                                      2. 1

                                                                         Turn JS off and you’re golden.

                                                                      1. 1

                                                                        Defining antipatterns considered harmful