1. 60
  1.  

    1. 46

      At the risk of adding to the “Nix is the solution to all problems” chorus here: using Nix for CI isn’t just a solution to the two problems mentioned (open source, and you can run the CI build locally); it also bundles in the obvious next step, which is eliding the distinction of where a build is run altogether. So a CI build isn’t just the same in terms of logs and result as a local one - i.e. reproducible - but can be substituted for a local one (via caches). That is, it runs only once.
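
      For a concrete sketch of what that looks like (package and check names here are purely illustrative, not a specific garnix API): with a flake along these lines, “nix flake check” is the whole CI definition, and with a shared binary cache whoever builds it first - laptop or CI - is the only one who builds it at all.

          {
            inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
            outputs = { self, nixpkgs }:
              let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
                # the package itself
                packages.x86_64-linux.default = pkgs.hello;
                # a check that runs identically locally and in CI
                checks.x86_64-linux.smoke-test = pkgs.runCommand "smoke-test" { } ''
                  ${pkgs.hello}/bin/hello > $out
                '';
              };
          }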

      (I’m biased though - I run https://garnix.io, which is a nix-based CI)

      1. 32

        Does it solve the problem of being forced to use nix? :)

        1. 31

          Some people, when confronted with a problem, think “I know, I’ll use ~~regular expressions~~ Nix.” Now they have two problems.

        2. 30

          After one has been forced to use setuptools, CFEngine, Puppet, Ansible, CloudFormation, Elastic Beanstalk, npm, and Bazel; is Nix really so awful?

          1. 9

            Nix is awesome. It’s learning Nix that’s awful.

            1. 3

              It’s getting better. Keep pushing.

              1. 1

                Good to hear! I have tried it twice so far (I’m not sure when, but the most recent time was at least 3 years ago) because I really like the idea of reproducible builds and the approach to configuration, but didn’t last very long.

                So maybe (hopefully!) my perspective is getting outdated. I’m also the type of person who sees the OS as something that should (at least mostly) just work, and that attitude is probably not the right one when starting out with Nix. It just didn’t click for me.

                1. 2

                  Oh, the OS? I won’t touch NixOS.

                  That’s one of the major issues with this: it’s both an OS, a language, a build system, and a package repository, but it’s not an issue that people seem to be willing to solve.

        3. 4

          The only solution to being forced to use Nix is to quit programming. You either use Nix because you have to or you don’t use Nix even though you have to. Anything else is a coping mechanism :)

        4. 3

          Well, we did also release https://garn.io/ recently, which is a Typescript frontend to Nix with a nicer CLI. So kind of? ;)

      2. 12

        Are you by chance German-speaking? Because „gar nix“ (abbreviated from „gar nichts“) translates to „nothing at all“.

        1. 10

          Sort of German-speaking (I do live in Germany). And yes, that’s intentional ;)

      3. 3

        Bazel and Buck both operate in a similar fashion: hermetic builds that only need to run once and are cached. They take it one step further than Nix: distributed build execution and multi-platform support.

        1. 4

          Is Nix not capable of distributed build execution and multi-platform support? I frequently use distributed builds with Nix, and I use it on multiple platforms (incl. cross compiling) just fine. In my experience, Nix’s multi-platform support makes significantly more sense than Bazel’s. How do you even set up a cross compilation environment with Bazel?

          1. 1

            Nix did not support Windows last time I checked. The remote/distributed build support is also relatively immature (SSH-based?) compared to both Buck and Bazel, which both use a well-established API.

            Bazel in particular has distinct definitions for the “execution” platform, which is where your build actions get executed, and the “target” platform, which is the intended platform for the final artifacts. A distributed build could involve one “host” platform (your laptop), multiple execution platforms, and multiple target platforms. A platform could be a different OS, CPU architecture, C/C++ compiler, Android/iOS SDK, etc. Plenty of companies leverage the technology to build a big range of products.
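
            To illustrate the Bazel side: a platform is roughly a named bundle of constraints, and you pick target vs. execution platforms on the command line. A sketch (labels are illustrative):

                # BUILD file sketch: a target platform described by constraints
                platform(
                    name = "linux_arm64",
                    constraint_values = [
                        "@platforms//os:linux",
                        "@platforms//cpu:arm64",
                    ],
                )

            Something like “bazel build --platforms=//:linux_arm64 //...” then targets that platform, while --extra_execution_platforms controls where the build actions themselves may run.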

        2. 3

          Note that Bazel does not offer full hermetic-build support by default. Their Python integration has to be supplemented, for example; search “hermetic bazel python” for several tutorials and explanations.

          1. 1

            I think you’re going to need to go down to the hardware level to achieve decent hermeticity. Everything else is a compromise somewhere in the middle, a set of tradeoffs based on your risk tolerance.

            Bazel and Buck offer decent configurations for a wide range of tradeoffs, which is both a curse and a blessing.

            1. 1

              You were comparing Bazel and Buck to Nix, and claiming that they both do better. I gave a counterexample where Bazel doesn’t do as well as Nix. Going “down to hardware level” is a red herring and non sequitur.

      4. 1

        How can I deploy an Elixir/Phoenix app with Nix, to, say, fly.io? And how can I do CI with it on, say, github actions? I already have a nix file working that defines its build environment.

        1. 2

          Regarding the first question, probably using something like buildImage to generate a docker image.

          Regarding the second: there are Nix actions, but in my opinion it’s quite hard to set up caching and logs right. Hence my having built garnix.io
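
          Something in this direction, assuming myApp is the derivation your existing nix file already builds (attribute and binary names here are illustrative, not Phoenix-specific):

              # sketch only: wrap an existing derivation in an OCI image
              pkgs.dockerTools.buildImage {
                name = "my-phoenix-app";
                config = {
                  Cmd = [ "${myApp}/bin/my_app" "start" ];
                  Env = [ "PHX_SERVER=true" ];
                };
              }

          The resulting image can then be pushed to a registry and, as far as I know, deployed with fly deploy --image.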

          1. 1

            Nice, just signed up for garnix.io !

    2. 21

      Make your whole CI pipeline a single script and call it from the CI configuration of your vendor.

      You’ll just miss the per-stage output separation and you’ll have to manage parallelism yourself, but you’ll be independent and the script will work locally.
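
      For example (paths and names illustrative), the vendor config shrinks to a thin wrapper:

          # .github/workflows/ci.yml - all the logic lives in ./ci.sh
          name: ci
          on: [push, pull_request]
          jobs:
            ci:
              runs-on: ubuntu-latest
              steps:
                - uses: actions/checkout@v4
                - run: ./ci.sh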

      1. 12

        Since the inception of CI servers, I have yet to find the answer to the question: so it’s just running a script on commit?

        I have used all the major CI/CDs out there and they are virtually all unnecessary fluff around automating a script, which is what I always do anyway. Point-and-click UIs to create fragments of functionality, forms, YAML: I find all of those a bad idea and a poor excuse for being shell-script illiterate.

        1. 23

          A defined pipeline gives you things you can’t do, or shouldn’t do, in scripts. You can see the pipeline steps before they happen, you can see the progress, the steps can be split between different machines or automatically parallelised, it can handle retries of steps, you can process them programmatically, you can define the semantic action once and let the implementation update automatically, etc.

          You could handle some of it explicitly, but if you do it more than twice, you’d extract it into a library and effectively implement a very custom pipeline runner yourself.

          Also, you could just use one script in a trivial case. But it’s very common to go past that. My two most commonly contributed-to repos have: unlockable steps, ~30 parallel jobs, and 4 different architectures for testing at different stages.

          1. 7

            So, it’s running a makefile?

            1. 19

              Does make have a native story for the things I mentioned? No.

              1. 10

                FWIW my go-to the last few years has been relatively rich Makefiles, with CI then triggering the various major steps.

                So I might have make targets for “run pytest”, “build Dockerfile”, etc., and larger “build component X” recipes in Make.

                Then in CI I have “X step: run make recipe X”.

                This makes the whole CI pipeline runnable locally, useful e.g. for debugging larger integration tests and deploying when CI is down - and it also makes the loop for developing CI steps fast, while maintaining the nice bits you’re highlighting here.
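
                As a rough sketch (recipe names and commands are just illustrative):

                    # Makefile - recipe lines are tab-indented
                    .PHONY: test image
                    test:
                    	pytest -q
                    image:
                    	docker build -t myapp:latest .

                The CI step is then just “- run: make test”, so local runs and CI share the same entry points.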

            2. 1

              It could, if that’s what your project requires.

              The versatility is the point. You don’t have to write a lot of YAML - just enough.

        2. 1

          What you seem to be stating is that the YAML method is a batch-job processing system, and that is true - but this is a feature, not an obsolescence. If you need to run Bash for your project, you can.

          But you can also do other things with these YAML properties. The structure is useful when integrating with, for example, documentation generators, compliance and testing logs, and predictable reproducibility.

          YAML is a good up-front data-description language, but you can glom it into many things.

          My favourite ultra-simple github_action fu goes like this:

          yq -o=props '.' project_workflow.yml | vim - 
          

          y’know, just to get a good overview of the structure of things, aesthetically, followed by .. in my case quite handy ..

          yq -o=lua '.' project_workflow.yml > project_workflow.lua
          

          .. which is a) extremely, extremely useful if your project is Lua (see also: JavaScript, etc.), and b) not exactly easy with a one-Bash-to-rule-them-all mentality, unless of course you .. code for such cohesion.

          Anyway… Using YAML to breadcrumb the build steps - particularly for uniquely configured projects - means flexibility. It’s not as redundant as you might think. I of course have lots of projects with Makefiles and their own generators, but to wrap things into a reproducible build in YAML is, these days, just an hour or so of work .. and from that point on, anyone can build the project.

          That helps developers get disconnected from the marketing people very, very easily, mmkay ..

        3. 1

          You might like https://laminar.ohwg.net/

        4. 1

          It runs a script on a bunch of machines (so, a Windows+Linux+Mac distributed system). That’s the real thing; everything else is fluff.

        5. [Comment removed by author]

      2. 3

        I think there is a sweet spot between the two extremes (coding all your logic into the GitHub workflow YAML or coding everything into a single script).

        You can have separate workflows for separate tasks, and separate jobs within a workflow, but make sure that what is actually run is simple scripts or commands that you can run locally or as part of your pre-commit hooks.

      3. 2

        This is what we do, just using Earthly instead of a script.

    3. 14

      Small nit, the runner is open source (https://github.com/actions/runner), but the dispatcher/orchestration layer is not.

    4. 12

      > closed source runner, can’t run locally

      A good reason to switch to GitLab? You can run your own instance locally (including on the free version), and the runner is open source: https://gitlab.com/gitlab-org/gitlab-runner Or use the GitLab.com version so you don’t need to set anything up yourself (but with limited runner time, and then you need to pay for more?).

      Doesn’t help with CI pipelines being in YAML, which is always horrible, but does have templating to reduce duplication without writing your own DSL or preprocessor.
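
      For example, a hidden template job reused via extends (image and commands illustrative):

          # .gitlab-ci.yml
          .test-template:
            image: python:3.12
            before_script:
              - pip install -r requirements.txt

          unit-tests:
            extends: .test-template
            script: pytest tests/unit

          integration-tests:
            extends: .test-template
            script: pytest tests/integration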

      1. 5

        Yes, GitLab YAML is much more reduced compared to GitHub Actions - fundamental things like git clone don’t need an additional step - and there is an interesting project for running some of the CI aspects locally, which has been kept alive amazingly long for an individual’s project.

        https://gitlab.com/AdrianDC/gitlabci-local

        Nevertheless you’ll still wait countless hours for pipelines to succeed; by adding variables to trigger only certain jobs or stages, this can be reduced significantly.
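
        For instance, gating a slow job behind a variable looks roughly like this (names illustrative):

            slow-integration-tests:
              rules:
                - if: '$RUN_SLOW_TESTS == "true"'
              script:
                - pytest tests/integration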

      2. [Comment removed by author]

    5. 11

      I see folks are suggesting alternative open source solutions - have you looked at Sourcehut? Fully open source and features (IMO) quite a nice CI system.

      1. 10

        The main selling point is being able to SSH into CI nodes so you can muck around until the command succeeds, which I think would solve most of this post’s complaints. I agree the iteration time of developing a CI config by pushing commits and then waiting for it to run is brutal and makes it all take 10x longer than it should.

        1. 6

          Aye, this is my favourite feature of CircleCI: that it’ll just drop me into a shell on a failed build step is gold, and the SSH auth is magic.

          Combined with putting the “meat” of the build definitions in Make or similar, so you can do most work locally before pushing, and then any final bits of debugging in the CI shell, it’s not bad.

          I’m very intrigued by Nix tho, all these people here are giving me FOMO

        2. 4

          I’m flabbergasted that anyone would use a system that lacks this feature. It must make debugging so frustrating.

          1. 3

            It is. And frankly it feels embarrassing. You sit there crafting commits to fix the issue and if anyone is getting notifications on the PR you are peppering them with your failures. Would not recommend.

      2. 1

        I’m a customer, and it’s been on my list to figure it out for a while. The way it works feels just different enough from other stuff in the space that I haven’t gotten ‘round to it yet. Do you know if there’s a write-up of something like running a bunch of tests on a linux image, then pushing a container to a remote VPS after they pass?

        The docs seem good, but more reference-style, and I’d really be curious to just see how people use it for something like that before I put in the labor to make my way through the reference.

        1. 2

          Indeed, there is no tutorial in the documentation, but starting from their synapse-bt example and evolving it has been sufficient in my experience.

          The cool thing about SourceHut is that you don’t need a Git (or Mercurial) project to run a CI pipeline. You can directly feed a YAML manifest to the web interface and have it executed. That, plus the SSH access to a failed pipeline, makes it quite easy to debug.
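
          A minimal build manifest is roughly this shape (image, packages, and repository are illustrative):

              image: alpine/edge
              packages:
                - cargo
              sources:
                - https://git.sr.ht/~someone/someproject
              tasks:
                - test: |
                    cd someproject
                    cargo test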

    6. 5

      I’ve only read a little about it and never used it, but Dagger might be a solution. Dagger Engine, a “programmable CI/CD engine that runs your pipelines in containers”, allows all CI actions to be written in any programming language Dagger has an SDK for and runs them locally the same as in the CI environment.

      Dagger Engine is open source and self-hostable. There is also a proprietary Dagger Cloud paid service that “complements the Dagger Engine with a production-grade control plane” that provides “pipeline visualization, operational insights, and distributed caching”.

      A similar pair of products is Earthly (open source) and Earthly Cloud (paid service). Where Dagger has you define pipelines in a general-purpose language using an SDK, Earthly has you define pipelines in Earthfiles, which combine elements of Dockerfiles and Makefiles.

    7. 4

      How is this a problem specific to GHA, though? Every CI thing I’ve worked with has had the exact same problems (takes forever to run, can’t test locally).

    8. 4

      I agree, and for another reason as well: GitHub is constantly having outages (anecdotally problematic … they don’t all impact Actions. :) ).

      I see two solutions:

      1. Use a single step that calls a driver that you fully control
      2. Build a CI framework that abstracts away the features of Actions/Generic CI, and write your builds against that.

      Then, when GH Actions goes out of favor, you retarget and move on with your life.

      Edit: caching and that sort of thing is really helpful in CI to reduce build times, among other features like Workload Identity, etc. A script doesn’t get you super far as things get complicated.

    9. 4

      The state of Actions configuration is tremendously sad though. I’m sure other ecosystems have something similar, but the Python ecosystem has had a great tool for running tasks since time immemorial in tox. It lets you define “test envs” and jobs to run in those envs; a big advantage is that those envs can be generative, so n-dimensional matrix testing is built into the system (usually you generate over Python versions, but you can also generate over dependencies or arbitrary tags). You can run tox in a local project and it’ll run all the jobs; tox -p will distribute them over any number of workers.
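
      For reference, a generative envlist looks something like this (versions and factors are illustrative):

          # tox.ini
          [tox]
          envlist = py{310,311,312}-{sqlite,postgres}

          [testenv]
          deps =
              pytest
              postgres: psycopg
          commands = pytest {posargs}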

      And obviously you can invoke tox from your GHA, but all the matrix & concurrency stuff, already encoded into tox, has to be duplicated into bespoke CI workflows, with additional CI-specific configuration for the steps tox basically implies. And a 30-line tox config maps to 120 lines of shitty YAML, and you now need to keep the two in sync.

      Plus (and IME that’s where the hurt really shows up) keeping the branch rules in sync if you have churn in your test matrix.

      > Hosting a git repo is hardly more than providing a file system and SSH access. The actual mechanism they use to keep you on their platform is the CI-pipelines (and maybe the issue system and wiki, but less so).

      Meh. GitHub was dominant long before they released Actions. The same issues listed exist in basically every CI runner I’ve seen since Travis; the only way to avoid them is to set up your own CI pipeline (which GitHub actually lets you do fairly easily, if only by necessity due to originally not bundling one).

      If like TFAA you’ve been migrating CI hosts for 6 months and are still not done, you’d probably have been better off asking your sysadmins for a machine and plugging into the GitHub webhooks / API.

    10. 3

      I need to write a larger post about this at some point, but:

      • Use GitHub Actions to define the matrix of machines to run your CI on. This is a heterogeneous distributed system, so you want to keep it as simple as possible. But if your software is cross-platform, this is part of the essential complexity.
      • For everything else, write a “script”:
        • If you target only POSIX, you might use sh, but be mindful of the myriad of annoying differences between Linux, BSD and Mac worlds.
        • Ideally, write this script using the primary language of your project. The right tool for the job is often the tool you are already using. Rust and even Zig are fine scripting languages, anything higher level certainly works as well.
        • If you don’t have a primary language, or just want to bring on another dependency, take a look at dax or zx.
      • If you want to see nice grouping into sections in GitHub’s UI, use group/endgroup markers: docs. (A sketch combining these points is below.)
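
      A sketch of that thin YAML layer (commands illustrative - the “script” here is just cargo for concreteness):

          # .github/workflows/ci.yml - only the machine matrix lives here
          name: ci
          on: [push, pull_request]
          jobs:
            ci:
              strategy:
                matrix:
                  os: [ubuntu-latest, macos-latest, windows-latest]
              runs-on: ${{ matrix.os }}
              steps:
                - uses: actions/checkout@v4
                - run: |
                    echo "::group::build"
                    cargo build --locked
                    echo "::endgroup::"
                    echo "::group::test"
                    cargo test
                    echo "::endgroup::"
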
    11. 2

      To make it easier to run things locally you can consider making the GH Actions/Gitlab CI a thin wrapper around something like https://taskfile.dev

      Obviously this doesn’t solve the problem of installing the right dependencies for your build, but at least you can quickly run the CI in your local environment, without having to develop inside of a Docker container…
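
      E.g. a Taskfile like this (task names and commands illustrative), with the CI step reduced to running “task ci”:

          # Taskfile.yml
          version: '3'
          tasks:
            lint:
              cmds:
                - ruff check .
            test:
              cmds:
                - pytest -q
            ci:
              deps: [lint, test]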

      1. 3

        I do this with GitLab and justfiles; there are still some pluralities, but it’ll scratch the itch until Nix is fully off the ground for us.

    12. 1

      I spent some time prototyping a CI workflow in temporal.io. It was a vastly better experience. You still got the visibility into stages and dispatch and reliability of a workflow system, but you were writing a program in a real programming language, no YAML or bash scripts. You could develop it entirely locally in a real temporal instance, and write tests for its pieces.

      The things you start considering when you’re back in a real programming language are interesting. For example, can I pick the tests to run dynamically? Given the lines touched by this change, what are the most informative tests to run given past behavior? A lot of tools like pytest assume they’re going to only be called from the shell so I had to write against their undocumented internal interface, but, even with that, it was such a better experience.
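
      To give a flavour of it, a stage looked roughly like this with the Python SDK (workflow and activity names are illustrative, not what I actually shipped):

          # rough sketch using the temporalio Python SDK
          from datetime import timedelta
          import subprocess

          from temporalio import activity, workflow

          @activity.defn
          async def run_tests(selected: list[str]) -> int:
              # activities can do arbitrary I/O, e.g. shell out to the test runner
              return subprocess.run(["pytest", *selected]).returncode

          @workflow.defn
          class CiPipeline:
              @workflow.run
              async def run(self, changed_files: list[str]) -> int:
                  # pick tests dynamically from the change, then run them as an activity
                  selected = [f for f in changed_files if f.startswith("tests/")]
                  return await workflow.execute_activity(
                      run_tests,
                      selected,
                      start_to_close_timeout=timedelta(minutes=30),
                  )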

      Plus the place I was prototyping this at already had serious production workloads on temporal and some actual chops at running it, whereas the mix of GitHub Actions and Jenkins that I was trying to replace were used only by one understaffed team that was responsible for them. No one else wanted to touch them, so they were always kind of half assed.

      I think it’s likely that dedicated CI systems are an idea whose time has come and gone.

    13. 1

      Sounds like maybe the answer is to make your CI script build off an open source Lua library? Heck, maybe even Tcl?

      Sure you’d basically write a CI runner in that language, but it’d be a library that anyone can use, it’d be open source, the CI runner on the server just has to be able to load the library and run the script, and you could run it locally. And for simple uses you could make it mostly-declarative.

    14. 1

      A few weeks ago, I was working on a pipeline to build Docker images in GitHub Actions and I can relate to how slow the iteration process is. Changing even a single character required minutes of waiting to see if my change worked. Lots of friction, very frustrating.

      And yet, it seems that we can’t not use those tools. When we don’t, we get called out for not following modern best practices. There’s got to be hope that at some point developer tooling won’t require expensive proprietary cloud services, dubious AI, or thousands of lines of spaghetti YAML, right?

    15. 1

      > and then wait for a runner delays everything indefinitely

      I’m not sure if the author meant “wait for a runner to pick up the job” or “wait for the runner to process the job”. In the first case you can add self-hosted runners that guarantee some minimum throughput.

      I wonder if you could interact with the self hosted runner to manually trigger jobs / parts of the jobs and solve the second issue as well.
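
      For the first case, routing jobs to your own machines is just a matter of labels in the workflow (labels illustrative):

          jobs:
            build:
              runs-on: [self-hosted, linux, x64]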

      1. 3

        I mean both. We have self-hosted runners, but there is not always an available one. When a runner picks up a job, there is quite the overhead for starting the job, downloading Docker containers, etc. And even if you could control a runner in detail, it’s just way more work than rerunning the last command you typed into your console to run it locally.

        1. 1

          For self-hosted runners, you don’t have to fully reset between jobs. If you’re using containers for your build, you can cache them on the node and get near-instant start.

          There are three things that you get from a CI system that justify the cost relative to running locally:

          • A known-good build environment. Every GitHub job runs in a pristine VM and the only path for an attacker to inject malware is via the source repo. This makes auditing a lot easier.
          • Fast machines. It’s cheaper to buy a handful of 32-core machines (or VMs) than it is to buy one for every developer. GitHub’s free runners are incredibly slow though (my laptop runs builds about five times faster than the GitHub machines and about twice the speed of an 8-core Cirrus-CI machine).
          • Heterogeneous environments. I can push code to a PR and test it on Windows, macOS, Linux, FreeBSD, NetBSD and OpenBSD, and on multiple architectures (some emulated). I don’t have all of those environments locally and I generally don’t want to maintain them all locally. For some projects, we’ve added $3000+ FPGAs to the CI machines and I don’t want to have to set those up locally.

          A lot of the delay problems can be solved by spending more money. The problems of reproducing all of this locally can be solved only with a lot of engineering investment, which adds up to a lot more money.

        2. 1

          If you’re developing a pipeline, you could ensure it’s a separate repo going to one dedicated runner on your machine, so there’s no wait time and you always have the containers from the previous run already cached. (I get it doesn’t solve everything, but still…)