I’m really excited about Pijul (or whatever its latest name is), but my historical codebase is the last place I want something experimental or unstable.
I asked this question before, but the answer was no at the time: has Pijul matured sufficiently that it can import something like the Linux git repo (or at least a random date range of it) and successfully convert the revisions into its patch format?
Pijul is becoming really stable. Formats haven’t changed in months, and the last non-backwards-compatible change was 8 months ago.
We’re importing large repos quickly and without problems now, unlike a year ago, when it often failed, or two years ago, when it saturated memory within a minute and had no theoretical chance of importing even the first 1000 commits. Huge projects like Ruby or Python pass our conversion tests in reasonable time. Linux can probably be imported too, but since it is rather large, it isn’t part of our standard test suite.
One remaining issue, which will be solved in the next few months, is that really large repos can end up consuming too much disk space. Nothing unreasonable, but we have plans to make these repos take very little space on disk.
Do you mean large as in “containing individual files that are of large size” (git’s general problem) or does that include large repo history (like the FreeBSD repo, for example)?
Is it merely a problem of an exploded textual representation, such that filesystem-level compression would help?
I just want to say thanks for clarifying and I wish you the best of luck. I have been cheering for Pijul from the start because I think it’s ridiculous that “store a copy of every version of every file and throw away the context of the actual changes” is the status quo.
Is anything wrong with Nest lately? It seems very slow.
Any particular page that feels slow? It’s proxied by Cloudflare, so there could be caching issues.