1. 7
  1. 2

    Something I’m not clear on after reading that: does this work because http-fetch resumes at the start of each pack file, or is http-fetch actually able to resume in the middle of a pack file? In other words, does this work pretty much any time a repo is available via HTTP, or does it require the remote side to do some extra work to break the pack files up into chunks small enough that resuming works?

    1. 2

      I’m sorry, but I didn’t test that case. I assumed that the “resume an incomplete packfile” case would vary between git server implementations, and because of that I worked on the assumption that the packfile download would restart from the beginning, since I wanted a universal solution that should work with any server implementation.

      1. 2

        Ah, thanks. I wasn’t quite sure whether splitting the packfiles into 1 MB blocks was something you did to make resumption easier, or whether it was only done to make it easier to test that the method works.

        FWIW, all the commonly used httpds that I know of (e.g. Apache, Microsoft IIS, nginx; I’m almost sure lighttpd does too) support HTTP range requests for static files out of the box, with no configuration. I wouldn’t be surprised if resuming individual files over HTTP turned out to work on every single implementation you find in the wild.
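
        For illustration, here’s a minimal sketch of resuming a download of a packfile served as a plain static file, using an HTTP Range request; the URL, pack name, and chunk size are made up, and a server that ignores Range simply resends the whole file with a 200:

        ```python
        import os
        import urllib.request

        # Hypothetical URL of a packfile exposed as a static file over HTTP.
        url = "https://example.com/repo.git/objects/pack/pack-1234.pack"
        dest = "pack-1234.pack.part"

        # Resume from however many bytes are already on disk.
        offset = os.path.getsize(dest) if os.path.exists(dest) else 0

        req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
        with urllib.request.urlopen(req) as resp:
            # 206 Partial Content: the server honoured the Range header and is
            # sending only the missing tail, so append it to the local copy.
            # 200 OK: the server ignored Range and is resending the whole file,
            # so start the local copy over from scratch.
            mode = "ab" if resp.status == 206 else "wb"
            with open(dest, mode) as out:
                while chunk := resp.read(64 * 1024):
                    out.write(chunk)
        ```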

    2. 1

      I’m impressed by how good a job this article does of establishing its motivation.

      My first thought upon reading the title was “that sounds like it could be useful, but something is probably direly wrong if you need that”. I wasn’t thinking about bad network connectivity; I was thinking about repos that are large fractions of a terabyte. The first paragraph then immediately explains why this can actually be a problem for ordinary-sized repos, too.