1. 22
  1.  

  2. 5

    They rolled their own dependency management!? And at such a high cost (roughly 1000 engineer-hours, assuming a team of 4)? Why not just use Nix or Bazel? Do people not realize that these tools exist, or is it just NIH syndrome?

    1. 19

      I think saying “just” use Nix understates the difficulty there significantly. I say this as someone who uses Nix, but if someone’s looking for an off-the-shelf dependency management system and the first step to using it is “hey, learn this functional language,” they could be forgiven for assuming it will create more problems than it could possibly solve. Also, packaging new things really is a pain: the build environment is under-documented, it’s hard to pull up an interactive equivalent of the build env, and failed builds are, as far as I know, thrown out instead of saved for debugging.

      I haven’t used Bazel before, but its GitHub blurb claims it’s designed for “a massive, shared code repository,” which isn’t the problem these folks were trying to solve. Maybe it works fine for multiple repos as well; I don’t know.

      1. 5

        I’m a big fan of Bazel (I’ve helped two companies transition to it now), but it’s not a polyglot panacea: if you’re using the languages Google uses (Java, C/C++, Python, Go) and building for a platform Google releases on (Android/iOS), then it’s wonderful; for anything else, you’d have to do a massive amount of work to integrate it. The article mentions using C# and Perl; you’d have a tough time using Bazel for those.

        I think this gets at a serious difficulty of the very-in-vogue polyglot codebase: it’s all well and good to let programmers choose whatever tool they want, but it comes with a serious devops cost. I don’t know of any tools for building, dependency management or continuous integration that truly work well with a whole bunch of implementation languages. Bazel is the closest I’ve seen, but it doesn’t even support building on Windows.

        1. 2

          I agree with many of your assessments, and I hope we can improve in all of these areas :) For what it is worth, you can keep failed builds by passing --keep-failed to nix-build.

          1. 1

            It’s the external version of Blaze, Google’s build system.

            Some changes can rebuild the world :P, but Blaze/Bazel makes even that possible and usable.

        2. 3

          One thing I’ve been wondering is: how do people make CI builds work in a monorepo? Do you rebuild everything on every checkin, or do you detect that a subproject hasn’t changed and skip rebuilding it? In the latter case, how do you still get (what I assume is) the main benefit of monorepos: simplified dependency management, because you just use every dependency at the same revision?

          1. 2

            What I’ve done in the past is build “everything,” with build caches and aggressive dependency markings. If you’re at the scale where having a monorepo is genuinely a pain, then it’s time to get a sane build system (out-of-the-box Gradle literally works fine if you just have Java/Kotlin, for example, but something like Bazel/Pants/etc. works great for more generic workloads), at which point the builds that don’t need to happen become no-ops (rough sketch at the end of this comment), and you never need to worry about forgetting to rebuild/upgrade something.

            For me, this really comes down to trade-offs. Configuring something like Bazel or Pants (or even potentially Gradle at the scale we’re talking about) does take some effort. The advantage, to me, is that you’ll either end up doing this anyway with multirepos (at which point you might as well have a monorepo where you can monitor all the changes), or you handle all the upgrades piecemeal and bespoke, at which point it’s gonna be really tricky to ensure you’ve actually upgraded everything. In other words, we’re trading human steps for automation: if you’re at a size where you “need” multirepos, you’re also at a scale where automating all of this will be a huge win, which in turn, IMHO, negates needing multirepos.
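
            To make that “no-op build” idea concrete, here’s a rough sketch (this is not how any particular tool implements it; the function names and cache format are made up for illustration): fingerprint a target’s inputs and skip the build whenever the fingerprint hasn’t changed.

            ```python
            import hashlib
            import json
            import os

            def fingerprint(paths):
                """Hash a target's inputs (sources plus declared deps) into one digest."""
                h = hashlib.sha256()
                for path in sorted(paths):
                    h.update(path.encode())
                    with open(path, "rb") as f:
                        h.update(f.read())
                return h.hexdigest()

            def run_build(target):
                # Placeholder: a real setup would invoke Gradle/Bazel/etc. here.
                print(f"building {target}")

            def build_if_needed(target, input_paths, cache_file=".build-cache.json"):
                """Rebuild `target` only when its inputs changed since the last build."""
                cache = {}
                if os.path.exists(cache_file):
                    with open(cache_file) as f:
                        cache = json.load(f)

                digest = fingerprint(input_paths)
                if cache.get(target) == digest:
                    return False  # no-op: nothing this target depends on has changed

                run_build(target)
                cache[target] = digest
                with open(cache_file, "w") as f:
                    json.dump(cache, f)
                return True
            ```

            Real build systems do this per target across the whole dependency graph (and usually with a shared or remote cache), but the effect is the same: untouched subprojects cost essentially nothing to “rebuild.”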

            1. 2

              The Chromium project uses the analyze functionality of gn (initially implemented in gyp) to determine which test binaries are affected by a change, and only builds/runs those tests. Since this relies on the gyp/gn dependency graph, various checks and whitelists are added to cover edge cases. This, combined with the task-deduplication abilities of Swarming, heavily reduces the load of “building everything” in a monorepo. For Swarming, test binaries are compiled in a deterministic way, so the same code produces the same SHA; if a SHA has already run and succeeded, you don’t need to run it again.
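
              Very roughly, those two ideas combine like this (a toy sketch, not gn’s or Swarming’s actual code; the dependency graph and target names below are made up):

              ```python
              import hashlib

              # Toy dependency graph: test target -> source files it (transitively) depends on.
              # In Chromium this graph comes from gn; here it is hand-written for illustration.
              DEPS = {
                  "//net:net_unittests": {"net/socket.cc", "net/socket.h"},
                  "//base:base_unittests": {"base/strings.cc"},
              }

              def affected_tests(changed_files, deps=DEPS):
                  """Step 1 (the analyze idea): select only tests whose inputs changed."""
                  changed = set(changed_files)
                  return [target for target, srcs in deps.items() if srcs & changed]

              def should_run(test_binary_bytes, already_passed_shas):
                  """Step 2 (the dedup idea): deterministic builds mean identical code yields
                  an identical SHA, so a binary that already ran and passed is skipped."""
                  sha = hashlib.sha256(test_binary_bytes).hexdigest()
                  return sha not in already_passed_shas

              # e.g. a change touching only net/socket.cc selects just //net:net_unittests
              print(affected_tests(["net/socket.cc"]))
              ```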

              1. 1

                We use TeamCity, and it has a way to make triggers fire only for paths matching a regex of directories. So, only commits to subproject X trigger builds of X.

              2. 2

                Does this lack of agreement come down to some things being hard to measure, technical friction being one of them?

                Monorepo proponents would say that managing multiple repos and their dependencies causes friction and takes a bunch of effort.

                Splitrepo proponents would say (I’m not sure I’ll do this argument justice, since I don’t really see it :-) that the monorepo won’t scale, or will cause friction because a sub-project’s history fills up with merges and similar noise from unrelated parts of the code, plus issues relating to differing release cycles (stability etc.).

                I think in both cases the major argument for/against is something like “we will work more efficiently this way”. That could in principle be measured, but only about as well as estimation/velocity measurement, I guess.