1. 41
  1.  

    1. 34

      Because GHA workflows are not trivially runnable locally, I’ve resorted to never relying on the environments and runtimes they provide. All my actions either use nix or docker to run a command that is just as easily launched locally from a justfile. At this point the GHA yaml files only contain a trigger condition and an entrypoint. Sometimes you also need some glue actions to publish lints to the PR or to upload an artifact, but those are not of interest to run locally anyway.
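
      A sketch of such a thin workflow, assuming a hypothetical `just ci` recipe as the single entrypoint:

      ```yaml
      # .github/workflows/ci.yml: trigger condition plus entrypoint, nothing else.
      name: ci
      on:
        pull_request:
        push:
          branches: [main]
      jobs:
        test:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            # All the real logic lives in the justfile, so the same
            # command runs identically on a laptop.
            - run: just ci
      ```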

      1. 3

        Hey great idea! I’m going to float using Nix in our builds with the team. That would assist with local running too.

        1. 3
          1. 2

            Yeah, it is unfortunate. Maybe some day we will be able to bring it back (I’d like to.) To be frank, though, I’m not totally surprised: building out a weird extension to another platform’s primitive isn’t totally “within the envelope” :’).

        2. 2

          Same here. I “use” GitHub Actions in the sense that I do the bare minimum to get a Bazel binary and just execute a hermetic build graph. No more dealing with bundled crap in their image. I have no idea why they stuff it with tools for every single programming language ever (especially since that stuff is usually outdated, why??).

          1. 2

            I’ve found Bazel to be an absolute nightmare for hermetic builds.

            Both the C++ and shell rules just go rummaging through your environment variables and hardcoded paths looking for toolchain dependencies. Is it better with other languages, or are you working around that somehow?

            1. 1

              I’m primarily working with Python and C#, however for the cases when I had to build C++, using the hermetic Zig toolchain helped tremendously with making things “just work”: https://github.com/uber/hermetic_cc_toolchain

              I used it to build a version of a very large codebase that works on Glibc 2.27+ (Ubuntu 18.04 and higher, don’t ask why) from any computer, no sysroot required.
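
              One cheap way to sanity-check a compatibility target like that is to list the versioned glibc symbols the produced binary references; the highest one must not exceed the oldest glibc you support (`/bin/sh` below is just a stand-in for the real artifact):

              ```shell
              # Print the highest GLIBC_* symbol version a dynamically linked binary
              # depends on; for a Glibc 2.27+ target it must not exceed GLIBC_2.27.
              objdump -T /bin/sh | grep -o 'GLIBC_[0-9][0-9.]*' | sort -Vu | tail -n 1
              ```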

          2. 2

            For my latest project I bit the bullet with nix, instead of it just being an afterthought or dev shell. Only one nix GitHub action (plus the ones you mention, which should be built into GH, not actions, IMO). It runs all tests and other checks in the flake, and does caching for cargo with crane and cachix, which is super fast. Works like a charm, both locally and on the CI!
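
            For reference, the CI side of a setup like that can stay tiny; a sketch, with the cache name and action versions as placeholders:

            ```yaml
            name: ci
            on: [push, pull_request]
            jobs:
              checks:
                runs-on: ubuntu-latest
                steps:
                  - uses: actions/checkout@v4
                  - uses: cachix/install-nix-action@v27
                  - uses: cachix/cachix-action@v15
                    with:
                      name: my-cache    # placeholder binary cache name
                      authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
                  # Same command as locally: run every check defined in the flake.
                  - run: nix flake check
            ```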

          3. 9

            Our code sits in a monorepo which is further divided into folders. Every folder is independent of each other and can be tested, built, and deployed separately.

            So what’s the point of using a single repository then? The code clearly is independent of each other, and they want it to be tested independently of each other.

            1. 20

              Why would you split it? Maintaining separate repositories just seems like extra bookkeeping and toil.

              1. 4

                It is. So much overhead.

                Multirepo in the same repo is the way to go.

                1. 3

                  Multirepo in the same repo is the way to go.

                  My understanding of the term “multi-repo” is that it refers to an architecture wherein the codebase is split out into separate repositories. Your use seems to mean something different. Are you referring to Git submodules?

                  1. 2

                    Many people consider a monorepo a situation where all the things in the repo have a coherence when it comes to dependencies or build process. For me a monorepo is also worth it if you just put fully independent things in separate subfolders in the same repository.

                    Are you referring to Git submodules

                    I would never. git submodules are bad.

                2. 3

                  Access control, reducing the amount of data developers have to clone, sharing specific repositories with outside organisations, avoiding problems exactly like the one this blog post outlines, etc.

                  Now I know you’re going to say “well, we’ve got tooling that reads a code owners file for the first, some tooling on top of git to achieve the second, and an automated sync job with a separate history for the third” but all of that sounds like additional tooling and complexity you wouldn’t need if you didn’t do this. I think the monorepo here is the extra bookkeeping and toil.

                3. 7

                  Consistent versioning

                  1. 8

                    We tried this too, releasing loosely coupled software in a monorepo all with the same version numbers. In that case semantic versioning doesn’t make sense, since a breaking change in one package would cause a major version bump while another package might not have any changes at all between those major versions. The only versioning scheme that would make sense here is date(time)-based versioning, but that can be achieved without using a monorepo. I agree with ~fratti, the benefit of a monorepo is not obvious.
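
                    Date-based versions like that are trivial to generate; a minimal sketch (the `YYYY.MM.DD` scheme is just illustrative):

                    ```python
                    from datetime import datetime, timezone

                    def calver(now: datetime | None = None) -> str:
                        """Return a CalVer-style version string such as '2024.05.17'."""
                        now = now or datetime.now(timezone.utc)
                        return now.strftime("%Y.%m.%d")

                    print(calver(datetime(2024, 5, 17, tzinfo=timezone.utc)))  # → 2024.05.17
                    ```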

                    1. 4

                      Why do you care about the version number? It’s all at the same commit, you don’t have to care about the version.

                      1. 2

                        In the mono-repos I’ve worked in, there have often been a mixture of apps, APIs, and libraries. If I release a new version of the app, I don’t want to release a new version of the libraries or API because it implies a change to downstream users that doesn’t exist. The fact that mono-repo tools and the people who use them encourage throwing away semver is evidence to me that the modularity pendulum has swung from micro-everything to mono-everything in far too extreme a way.

                        1. 2

                          In the mono-repos I’ve worked in, there have often been a mixture of apps, APIs, and libraries. If I release a new version of the app, I don’t want to release a new version of the libraries or API because it implies a change to downstream users that doesn’t exist.

                          Why do you care? The entire point of a monorepo is saying “Everything in the repo at this point works together, so we release it at that commit ID”. In every monorepo I’ve used, the only identifier we ever used for a version was the commit hash when the release of the software and all its in-repo dependencies was cut.

                          It seems very strange to talk about versions in a monorepo – the entire point of a monorepo is to step away from that.

                          1. 1

                            I think there are some folks who are missing what you describe as the point of monorepos. It sounds like the context(s) in which you use them are basically atomic applications. The parts of the application may be deployed in multiple contexts, but they are not intended to be used separately. I can see the appeal of monorepos there.

                            Unfortunately, my experience has been considerably messier. Where the line gets crossed is where the pieces of such applications become public. Libraries get published as packages to registries. Web services get public docs. Now I don’t just have application users, I have users of the pieces of the application. This is where I start to care about versioning, because the users of these pieces care.

                            Mileage clearly varies, but the tendency of people to treat monorepos as the default choice has, for me, resulted in inheriting monorepos that might have started as atomic applications but are no longer so. The benefit has been a few saved git clone commands and some deployment coordination/ceremony. The loss in time to tooling issues has been considerably more than that.

                        2. 2

                          Why do you care about the version number? It’s all at the same commit, you don’t have to care about the version.

                          Are you asking me or about the original article?

                          We release several of the loosely coupled pieces of software within the company. In that sense not everything is in the same commit (or even the same repo), and downstream/outside users aren’t either, so we need to use version numbers. So in my mind a monorepo really only makes sense if you’re okay with datetime-based versioning or if you’re working on tightly coupled pieces of software that you test and release together.

                          About the original article I don’t know why they care or if they even do.

                    2. 4

                      The code clearly is independent of each other

                      I’ve never used a monorepo, nor do I have any strong feelings for or against them. But I have seen them, and this is kind of just how they usually end up. I don’t think it defeats the purpose of a monorepo, though.

                      1. 1

                        It can be tested, built, and deployed separately - it can also be done together and without juggling versions, repos, dependencies, rollouts…

                      2. 8

                        The mistake here is trying to use GitHub actions as anything more than a way to launch ‘make test’, or your favorite locally runnable equivalent command.

                        1. 2

                          “15 engineers and constantly pushing” is the new “8 megabytes and constantly swapping”.

                          joke aside, the local runners are indeed a bit frustrating, basically you need one VM per runner which doesn’t really scale. Like others in this thread my solution to this is to put most of the deployment in bash scripts (e.g. https://github.com/ossia/score/tree/master/ci ); sadly you then lose the ability to use many of the cool pre-made actions for many use cases. Just building on nix is definitely not enough; very often I’ll have something that builds fine on Nix but fails on another distro with a slightly different GCC or Clang setup.

                          1. 1

                            joke aside, the local runners are indeed a bit frustrating, basically you need one VM per runner which doesn’t really scale.

                            I was looking into this for CI; the official recommendation is you have some other service that can spawn runners, via whatever mechanism. They provide a sample using k8s, but I suspect you could do something that requires less care and feeding. Maybe someone else has already done it; it’d save me the effort of having to write it if so.

                          2. [Comment removed by author]

                            1. -1

                              Nowadays I use AI to write GHA yaml and never care about maintainability or reusability. Just use it as one-shot trash code. Ask AI to rewrite everything when I want to add anything that isn’t too obvious.

                              1. 1

                                Doing that securely while supporting SLSA sounds impossible though…