Threads for SlightlyCuban

  1. 1

    It’s an interesting list. I see similar issues personally, but would ask for different solutions:

    1. Unlimited Repository Size: I want the power of a DVCS, that is, local branching and local commits, with the option of a centralized model (a hybrid). For open source, the decentralized model is best; for companies, trading centralized infrastructure for the ability to support significantly larger repositories (e.g. monorepos) is worthwhile. You can implement on-demand fetching of data, back up commits automatically, and so on. For pure source control, repository size is usually not a concern; however, developing a full 3D game with artwork etc. requires interconnecting the art and the current codebase in a way that is best solved by putting both in the same repository, easily exceeding hundreds of gigabytes.
    2. Permissions on dir level: Yes, agreed, permissions are needed. I’d add: a way to remove arbitrary commits from anyone’s checkout is also required. For open source this is barely an issue, but at large enough companies the ability to remove commits is essential, as someone will eventually commit data that is legally not allowed in the repo (e.g. license issues, personally identifiable data, etc.).
    3. Sparse Clones / Sparse Checkouts: I disagree here. This is a fine mechanism, but I want a more user-friendly solution: a file system that virtualizes my checkout no matter its size and fetches files on demand when I need them (see 1., the hybrid VCS approach). Git FS is a step in the right direction.
    4. Direct update / put: Yes, very useful, particularly for artists. Interestingly, I’d extend this to direct checkouts. Artist workflows also sometimes need to check out specific subfolders without touching other parts of the tree, as that can be very expensive: I don’t want to update the 5 GB 3D model in this directory, but I need an up-to-date version of the texture over here.
    5. I disagree. Having the repository state bound to a branch makes reasoning about source code significantly easier. Directories as branches were a terrible idea: overly complicated, hard to follow, and they intermix history lines within a repository checkout in a way that is difficult to reason about.
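
    Points 1 and 3 already have rough approximations in today’s Git: partial clone skips blob downloads until they’re needed, and cone-mode sparse checkout materializes only chosen directories. A minimal sketch, assuming a modern Git; the repo layout and paths are made up for illustration:

    ```shell
    # Throwaway "server" repo with a code/ and an art/ directory (demo only).
    tmp=$(mktemp -d)
    git init -q -b main "$tmp/server"
    mkdir -p "$tmp/server/code" "$tmp/server/art"
    echo 'int main(void){return 0;}' > "$tmp/server/code/main.c"
    echo 'pretend this is a huge blob' > "$tmp/server/art/model.bin"
    git -C "$tmp/server" add .
    git -C "$tmp/server" -c user.email=a@b -c user.name=a commit -qm init
    git -C "$tmp/server" config uploadpack.allowFilter true

    # Partial clone: commits and trees now, blobs fetched on demand.
    git clone -q --filter=blob:none --no-checkout "file://$tmp/server" "$tmp/client"

    # Sparse checkout: materialize only the code/ directory.
    git -C "$tmp/client" sparse-checkout set code
    git -C "$tmp/client" checkout -q main

    ls "$tmp/client"    # code/ is present; art/ was never downloaded
    ```

    This is still far from a transparent virtual file system, which is exactly the gap the comment points at.
    
    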

    The rest I don’t have much of an opinion on. I do want a few things myself, though:

    1. Meta-History Tracking: Git tracks your history as the engineer wants you to see it, but it does not track changes to the history itself beyond the reflog. I’d like the ability to track changes of history in a graph-like structure. Mercurial’s obsolescence markers do this, and they allow extensions such as evolve to automatically find the correct rebase targets, share rebases in a more meaningful way, and generally let both the human and the system reason about how a history came to be.
    2. Better GUIs: Every Mercurial and Git GUI I’ve seen is centered around a programmer’s view of source control, and they are horrible to use for non-technical people like artists, who must sometimes use these tools too. A GUI built around the primitives such people are used to (e.g. file trees) is missing; everything is too centered on the history view at the moment.
    3. Automatic commit backups: I want sharing to be much simpler, if I opt in to it. Have my local commits automatically pushed to a central server, so that anyone who knows the hash and the repo can get the commit. I want to post in Slack: “hey, can you take a quick look at ad42ddb212” and be done with it.
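
    The backup half of point 3 can be approximated in stock Git with a post-commit hook. A minimal sketch; the `refs/backups/` namespace and the repo names are my own invention, not an established convention:

    ```shell
    # Local bare repo standing in for the central server.
    tmp=$(mktemp -d)
    git init -q --bare "$tmp/central.git"
    git init -q -b main "$tmp/work"
    git -C "$tmp/work" remote add origin "$tmp/central.git"

    # Hook: after every commit, push it under a per-commit backup ref.
    printf '%s\n' '#!/bin/sh' \
        'git push -q origin "HEAD:refs/backups/$(git rev-parse HEAD)"' \
        > "$tmp/work/.git/hooks/post-commit"
    chmod +x "$tmp/work/.git/hooks/post-commit"

    echo hello > "$tmp/work/file.txt"
    git -C "$tmp/work" add file.txt
    git -C "$tmp/work" -c user.email=a@b -c user.name=a commit -qm "first"

    # The commit is now on the server, addressable by its hash.
    sha=$(git -C "$tmp/work" rev-parse HEAD)
    git -C "$tmp/central.git" show-ref | grep "refs/backups/$sha"
    ```

    What this doesn’t give you is the discoverability side: the server still has to be configured to let others fetch by hash.
    
    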
    1. 1

      Automatic commit backups

      Just a thought, but don’t you have something close to that in Git/Mercurial with push? While not fully automatic, both have the concept of pushing all refs to a remote, from which someone else can see/pull/fetch a commit by its SHA. Is that what you’re thinking of here?

      (Both also support hosting directly from your local clone, but I’m assuming you’re not talking to people on your own network.)
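
      For the fetch-by-SHA part specifically, Git can do this today if the server opts in via `uploadpack.allowAnySHA1InWant` (or the narrower `allowReachableSHA1InWant`). A sketch with throwaway local repos standing in for server and colleague:

      ```shell
      tmp=$(mktemp -d)
      git init -q -b main "$tmp/server"
      git -C "$tmp/server" -c user.email=a@b -c user.name=a commit -q --allow-empty -m one
      git clone -q "file://$tmp/server" "$tmp/other"

      # A commit lands on the server that the clone doesn't have yet.
      git -C "$tmp/server" -c user.email=a@b -c user.name=a commit -q --allow-empty -m two
      git -C "$tmp/server" config uploadpack.allowAnySHA1InWant true
      sha=$(git -C "$tmp/server" rev-parse HEAD)   # the hash pasted into chat

      # Fetch exactly that commit by hash, no ref name needed.
      git -C "$tmp/other" fetch -q origin "$sha"
      git -C "$tmp/other" log -1 --format=%s FETCH_HEAD
      ```

      So the mechanism exists; what’s missing from the grandparent’s wish is the automatic push and a hosted default for the config.
      
      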

    1. 1

      Maybe I’m misunderstanding the author, but something feels off about the “Push/pull bottleneck”. If you have conflicts with what is upstream, you must resolve them regardless of which VCS you’re on. Comparing my experience with Git and SVN, I much prefer Git: it has git fetch, so I’m able to see incoming conflicts without immediately incorporating them into my working directory.

      As far as I know, SVN gives me checkout, which forces me to resolve conflicts right then and there. My experience is that this encourages the team to create larger, not smaller, patches, which are inevitably harder to integrate. Between Git and SVN, I’d say SVN is the one with the bottleneck.
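
      The fetch-then-inspect workflow I mean looks roughly like this; the repo names are a throwaway example:

      ```shell
      tmp=$(mktemp -d)
      git init -q -b main "$tmp/upstream"
      git -C "$tmp/upstream" -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
      git clone -q "$tmp/upstream" "$tmp/me"

      # Upstream moves on while I work.
      git -C "$tmp/upstream" -c user.email=a@b -c user.name=a commit -q --allow-empty -m "their change"

      # fetch updates origin/main only; my working directory is untouched.
      git -C "$tmp/me" fetch -q origin
      git -C "$tmp/me" log --oneline HEAD..origin/main   # incoming commits
      git -C "$tmp/me" diff HEAD...origin/main           # incoming changes
      ```

      Only once I’ve looked at what’s incoming do I decide when to merge or rebase.
      
      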

      1. 9
        1. git rebase -i -x "make test" <target> reruns the tests on each commit while you rebase. True, git bisect should never report a false positive, but I like to actually be sure (side note: I’m working on getting my CI to verify all commits in a branch).
        2. Rebase merge conflicts are often smaller, just more numerous. Personally, I find several small merge conflicts easier to deal with than one big one.
        3. I rebase so I can guarantee the merge result is identical to my branch HEAD. This means I can cut builds from a branch and, if the merge is approved, promote that build. It also means I can make a speculative deploy of the branch and report any issues from that before I merge it in.

        Finally, I rebase because I’m telling a story. I don’t want the commit history to exactly represent what I did; I want history to show what I intended to do. When I do git blame, I want to see “make this change for reason X”, not “address review comments”.

        P.S.: There are other ways to achieve #3, but then I’d argue for merge --squash.
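
        The test-every-commit rebase from point 1 can be sketched as follows. I drop `-i` so it runs unattended (`--exec` uses the interactive machinery without opening an editor), and stand in for “make test” with a trivial command; the repo is a throwaway example:

        ```shell
        tmp=$(mktemp -d)
        git init -q -b main "$tmp/repo"
        cd "$tmp/repo"
        g() { git -c user.email=a@b -c user.name=a "$@"; }

        echo 1 > f;  g add f;  g commit -qm base
        g checkout -q -b feature
        echo 2 > f;  g commit -qam feat-1
        echo 3 > f;  g commit -qam feat-2
        g checkout -q main
        echo up > other;  g add other;  g commit -qm upstream
        g checkout -q feature

        # --exec runs the command after each replayed commit; the rebase stops
        # at the first commit where it fails, so every commit gets verified.
        g rebase -q -x "echo tested >> $tmp/runs" main
        wc -l < "$tmp/runs"    # one line per replayed commit
        ```

        Swap the echo for your real test suite and a red commit halts the rebase right where it broke.
        
        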

        1. 3

          I was about to look up The Three Rules of TDD when I hit this gem: https://blog.cleancoder.com/uncle-bob/2016/03/19/GivingUpOnTDD.html . Specifically:

          His tests are tightly coupled to his production code.

          These days, I don’t make much of a distinction between unit and integration tests. Staying focused on the minimal amount of code to test a feature, and keeping an eye on the design, usually leads me to just the right amount of code.

          1. 2

            Wow, thanks for sharing that Uncle Bob post. I wish I had read this sooner in my career.