1.

My personal summary of this submission from two days ago and of the discussions here and on the orange site.

  1.

    Many companies should value team independence over the ability to make cross-cutting changes.

    I strongly prefer the ability to make cross-cutting changes, and the ability to ensure everyone builds against the latest version of any shared code. Though I admit I have no robust, data-driven argument for my preference.

    That said, I feel like public endorsements of the monorepo approach imply that these massive, cross-cutting, backwards-incompatible changes are made in one atomic changelist. I can’t speak for Facebook or Microsoft, but at Google that never happens. For one thing, such CLs would be impossible to submit: some file would inevitably become outdated before the submit completed.
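
    The incremental alternative usually looks something like the following. This is a minimal Python sketch (all names invented, not any company's actual tooling) of the shim pattern: introduce the new API, forward the old one to it, migrate callers one CL at a time, then delete the shim in a final small CL.

    ```python
    # Hypothetical sketch of an incremental migration instead of one
    # atomic cross-cutting change. fetch_rows() is the new API;
    # fetch() is the old entry point, kept as a forwarding shim while
    # callers migrate. Once no callers remain, fetch() is deleted.
    import warnings

    def fetch_rows(query, limit=100):
        """New API: takes an explicit result limit."""
        return [f"row for {query!r}"][:limit]

    def fetch(query):
        """Old API, kept temporarily as a deprecated shim."""
        warnings.warn("fetch() is deprecated; call fetch_rows()",
                      DeprecationWarning)
        return fetch_rows(query)
    ```

    No single CL ever touches every caller, so nothing has to submit atomically across the whole tree.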

    This problem occurs even when submitting changes to individual, heavily edited files like the global pager list, often requiring several submit attempts and resyncs during US working hours. But the pager file is extremely old; newer configs use fragments broken down by team or project (not unlike .conf.d directories).
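
    A minimal sketch of the fragment idea, assuming a conf.d-style layout where each team owns its own file and consumers merge the fragments in filename order (directory layout and file names are invented for illustration):

    ```python
    # conf.d-style fragments: each team edits only its own file, so
    # concurrent submits rarely collide on one shared config file.
    # Consumers concatenate *.conf fragments in sorted filename order.
    import os

    def merge_fragments(conf_dir):
        """Merge *.conf fragments from conf_dir in filename order."""
        merged = []
        for name in sorted(os.listdir(conf_dir)):
            if name.endswith(".conf"):
                with open(os.path.join(conf_dir, name)) as f:
                    merged.append(f.read().rstrip("\n"))
        return "\n".join(merged)
    ```

    Filename prefixes like `10-` and `20-` give teams a cheap way to control merge order without touching each other's files.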

    As the author points out, central databases in general matter more than a monorepo specifically. For mass refactoring, a universal code-search database would suffice; for security issues, a central database of insecure commit hashes.
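
    A hedged sketch of the second idea: given a central database of insecure commit hashes, any repo can check its dependency pins against it, with no monorepo required. The database contents, repo names, and pinned hashes below are all invented for illustration.

    ```python
    # Hypothetical central database of known-insecure commit hashes.
    INSECURE_COMMITS = {"deadbeef", "cafebabe"}

    def affected_repos(pins):
        """Given {repo_name: pinned_commit}, return the repos whose
        pinned dependency commit appears in the insecure-hash set."""
        return sorted(repo for repo, commit in pins.items()
                      if commit in INSECURE_COMMITS)
    ```

    Each repo only needs to publish its pins to be auditable; the security team queries the central set instead of crawling every checkout.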

    1.

      I think a lot of this is also about the surrounding tooling. Obviously Google does its own thing, but CI tools and the like have no level of granularity larger than “the repo”, so even making a small change that touches two linked projects in separate repos is a massive pain. Even in really small teams it quickly becomes frustrating, and I have no idea how microservices plus many repos became the default when everything about the tooling pushes against it.

      I would love to multi-repo, but it’s a major uphill battle.

      1.

        You need to write extra tooling for a monorepo anyway. Why not write that tooling so it doesn’t have to live inside a single repo?