1. 21

Used “ask” to show I’m just looking for conversation around this. I’m familiar with other git forges and how they operate; please keep this in mind while commenting :)

Edit: I realize too this could be for any underlying VCS. “SourceShed” is probably a better name :)


    1. 9

      I’m not sure I understand how your criticism of GitHub, and of how interacting with repositories via a web interface is somehow bad, is followed by:

      The second way would be to copy the GitHub UX mostly.

      It’s not clear to me why you put SourceHut in the same category as GitHub; its workflows and capabilities depend very little on its web interface, and most of the collaborative work is done via email. If you host it, then it’s even “in your shed”… Actually, it’s not clear to me at all what you expect from a source forge in the first place.

      1. 3

        Thank you for the comment :) For a little context, I wrote this just this morning, so there are naturally some gaps in my thinking. I wanted to get it out of my head quickly.

        The purpose of offering the second way is to continue to allow everyone else to feel comfortable at least browsing projects and developer profiles. I still want to work with people… So it’s tough satisfying everything at once with one solution. The solution is two solutions essentially :) You give the illusion they are on a GitHub-like website (it should look exactly like GitHub, as I mentioned).

        SourceHut is relatively complicated when compared to what I’m describing. It doesn’t copy GitHub’s UI/UX. I mention in the writing you can self-host which is nice, but doesn’t satisfy the rest of my “problems”.

        I expect a minimal source forge to 1. allow people to discover the software 2. allow people to download the software 3. permit people to push changes back.

        In no way could GitShed provide the services SourceHut does, or any other advanced source forge. GitShed is the bare minimum to provide a familiar source forge browsing experience.

        It’s not clear to me why you put SourceHut in the same category with Github, the workflows and capabilities depend very little of its web interface,

        The same can be said of any source forge if you start with “just give people the git URL”. Most of the time, this isn’t the case: they start you off with the web URL to browse the project.

        How do you browse a developer’s other projects? How do you discover where to email? Most discovery happens on the SourceHut website.

        NOTE: I am NOT saying SourceHut couldn’t be used to build a minimal system, I’m well aware it was started to be exactly that. The thing is SourceHut wanted to be a complete solution. In my case this is a problem. I don’t need a complete solution. SourceHut is f’ing awesome for its reasons, as are most of these source forges :)

        Again thanks for the comment, I appreciate needing to elaborate on these thoughts.

        1. 3

          Not really related, but I find it interesting to mention ForgeFed, an ActivityPub extension for decentralized project discovery.


          I agree that this does not solve the problem right now, which you describe as “faking GitHub so people are not afraid”. It’s mostly: “invent something new to make GitHub look not cool anymore”.

          Also, having read your txt blog for quite a long time, I’m curious about your thoughts on Gemini. You seem to share a lot of values with Geminauts (like myself). A few have also started to tackle the problem of having a forge on Gemini. And your blog would be completely at home on Gemini (or maybe best on Gopher, as you seem to like to force your own line lengths ;-) )

          1. 2

            Yes, I find the idea behind ForgeFed awesome. In general I find all of ActivityPub very awesome. The weakness currently seems to be the software around it, but it’s just a matter of time. I can seriously see ActivityPub even replacing email and messaging simultaneously in the future. Being able to discover everyone and their projects (not just software but really anything) will be flipping awesome.

            Gemini, my thoughts very briefly: “come on guys why can’t we restrict HTTP”

            Gemini, the long story: http://len.falken.directory/perceived-relations-between-gopher-gemini-and-http.txt

            Forcing line lengths lets you do typesetting otherwise not possible. In a sense I’m “baking in” all my typesetting and stylistic choices with plain text. I wouldn’t have been able to do this, for example: http://len.falken.directory/tool-sh.txt

            Thanks for mentioning being a long-time reader :‘D - In my mind, only those who comment are the people who read anything I write :) (Since I don’t use statistics any more. It’s why once in a while I’ll google “len.falken.directory” to see if there are any convos going on elsewhere.)

            1. 6

              Forcing line lengths let you do typesetting otherwise not possible

              At the cost of making it look really bad on phones, as the user agent can’t really infer what you mean.

              1. 1

                Yep. It’s a “cost” I just gotta eat. :p I’ve tried a few things to resolve this, like switching to HTML when I really want people to read something. I even tried to write something to help with reading: http://len.falken.directory/improving-the-plain-text-experience-on-mobile-and-browsers.txt

                It’d be nice if some person at Google was assigned to make 80-column text look nice on mobile Chrome :).

                1. 1

                  You could fetch the Content-Type header when the extension isn’t specified. Browsers could then choose which type they actually want to get. Not saying it’s the worst experience, especially if you’re going for a specific kind of expressiveness/aesthetic, but there are certainly trade-offs. I myself go for basic markup that must render well in TUI browsers, or I find it unacceptable. Folks that prefer the txt-like layout will have a good experience more often if they default to using a TUI browser; it’s just that most web designers don’t consider the TUI experience. :(

                  1. 1

                    I have had crazy thoughts about creating a txt -> html converter that takes cues from my writing style to auto-convert it into HTML. For example, the headers are usually in ALL CAPS on their own line. Lists are usually your typical single-character-then-space lines. Text that is within a margin of being centered could be surrounded by center tags, etc. There is definitely a solution here.
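                    A rough sketch of that heuristic (the `txt2html` name and the rules are made up on the spot): ALL-CAPS lines become headings, other non-blank lines become paragraphs.

```shell
# Hypothetical txt -> html converter sketch: an ALL-CAPS line on its
# own becomes a heading, any other non-blank line becomes a paragraph.
txt2html() {
  awk '
    /^[A-Z0-9 .:!-]+$/ && /[A-Z]/ { print "<h1>" $0 "</h1>"; next }
    NF                            { print "<p>" $0 "</p>" }
  '
}

printf 'MY PROJECT\nIt does a thing.\n' | txt2html
# <h1>MY PROJECT</h1>
# <p>It does a thing.</p>
```

Detecting lists and centered text would just be more patterns in the same spirit.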

                2. 1

                  There’s always the “format=flowed” for text/plain (RFC-3676), but I’m not sure if you want to go to that trouble.
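                  To illustrate how it would help here: in format=flowed, a line ending with a trailing space is a “soft” break the receiver may reflow, while a line without one is a hard break, so 80-column text can still wrap nicely on narrow screens. (Note the trailing spaces on the first two lines below.)

```
This sentence is hard-wrapped at the source, but each line ends with a 
trailing space, so a flowed-aware reader can rejoin it to fit the screen. 
This line has no trailing space, so it stays a hard break.
```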

            2. 2

              If I remember correctly, I’ve even shared some of your posts about plain text on my own blog. (Yep, it was… in 2020: https://ploum.net/gemini-le-protocole-du-slow-web/index.html )

              Thanks for your post on Gemini. I think I understand where we disagree.

              1. I believe that everything is political, hence it is important to have strong moral and political values in a project. I like the fact that Gemini clearly announces this without trying to advertise itself as a “technological” solution. I’m also really happy about how Gemini has turned into a place where people feel safer writing without thinking about SEO, discoverability, or being read by their whole family. People start to write random streams of consciousness thinking that nobody reads them, and they are happy when they receive an email. Nowadays, most people leaving Gemini complain about the fact that they don’t have statistics, comments, or likes. They feel nobody reads them. IMHO, this is a feature. This is exactly how it should be (from someone who removed every single statistic from his blog 10 years ago).

              2. I don’t like thinking about ends of lines. I like that it is automatic (and lament the lack of flowed support in mail clients ;-) ). So I like the Gemtext format, which has become the default for everything I do. If fixed width is needed, I put everything between ``` lines (the Gemtext spec’s way to open a non-interpreted section).

              We have differences and a lot in common. Really happy to randomly chat with you on this subject. Continue writing ;-)

              1. 1

                I don’t like thinking about end of line.

                I don’t think about it much since my editor just auto hard-wraps at column 80 :P

                The leaving Gemini because no statistics thing is pretty funny. You gotta write mostly for yourself, to get your ideas out of your head and to explore them. People reading it and commenting is a luxury tech has given us…!

                Really happy to randomly chat with you on this subject

                Same :) And I don’t think I mentioned it yet, but I’ve read your blog too a few times, even the French, since I’m trying to practice more these days x) À bientôt!

              2. 1

                Ironically, the only place where the contents of my defunct gemsite are readable is in its GitHub repo.

                1. 1

                  That’s not ironic, that’s a cool feature, isn’t it?

                  My own blog is available through http, gemini and git: https://git.sr.ht/~lioploum/ploum.net

                  That way, I hope that it will survive me :-)

    2. 7

      Random train of consciousness responses:

      Websites are documents

      This feels like a conceptual mistake. Websites are documents, AND applications, AND containers for distributing applications. Contrast lobste.rs or a news website, vs Github, vs Sandspiel. It’s a spectrum, and the lines are fuzzy. And this is ok.

      The middle “application” part of the spectrum is honestly the most complicated; that’s where you have a lot of the complexity of web browsers, where they are a language runtime, rendering framework, event framework, RPC mechanism, etc. Replacing that functionality would get you… more or less a GUI toolkit, and one person alone isn’t going to rewrite Qt or GTK any more than Firefox or Chrome.

      Why do I need a whole portable OS to access what’s essentially an archive? Because of everyone else if I want to share my creations.

      That, on the other hand, is very perceptive. You can just stick a link to your public repo somewhere. Or you can use git daemon to let people browse your archive from a smol web app. But accepting pull requests this way is a pain in the ass, let alone discussing them. Mailing lists are the second best option for that afaict, and they suck. Having this stuff in the browser significantly lowers the boundary between “discover code” and “explore code” and “talk about code”.

      For me the strength of the github model isn’t hosting the git repos, it’s that if I’m reading a project’s docs then I’m one small step away from submitting a bug report or searching for existing ones.

      Why do I need what’s essentially a storage unit for my projects? I don’t, I can share projects with people “out of my shed”

      I think part of this is that sharing stuff out of your shed is significantly more work than using a storage unit. Not just for you, but for everyone else too. It’s the equivalent of giving everyone their own key to your shed, telling them how to get there, and training them to play nice with your guard dog, vs just renting a location that does all this for you.

      …That said, I quite like the RSS/atom feed idea. Maybe you could use something like that to do issues and discussions and PR’s as well? That solves part of the mailing list problem of “looking up existing state involves finding an archive and crawling through it somehow”, though federated Atom feeds containing people having conversations with each other… starts sounding like Mastodon tbh. I don’t know much about it, is there a way for Atom feeds to link to each other? Having a feed reader that can combine and coalesce multiple feeds into a conversation sounds fun.
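      (For reference: the Atom Threading Extensions, RFC 4685, already let entries reference each other via a thr:in-reply-to element, which is roughly what a feed-based conversation would need. A reply entry could look like this, with made-up URLs:)

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:thr="http://purl.org/syndication/thread/1.0">
  <title>Re: bug in parser</title>
  <id>http://example.org/replies/2</id>
  <updated>2023-07-02T00:00:00Z</updated>
  <thr:in-reply-to ref="http://example.org/issues/1"
                   href="http://example.org/issues.xml"/>
</entry>
```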

      So… yeah. I guess my take-away is that the hard part of a Github-ish thing isn’t putting the git repos somewhere and letting people look at them, it’s letting people talk about them and contribute to them. The first part is covered by your code.xml file, albeit with a higher barrier to entry than there would be if I could click on one of the links and browse the contents. The second part is covered by the “Send patchs and bug reports to their email” statement at the top of that file, which frankly as a project maintainer I would never want to heckin’ deal with. A real issue tracker and some kind of PR-ish structure is way, way nicer than mongling a mailing list by hand.

      1. 2

        Excellent stuff here :) I find random direct thoughts pretty effective B)

        This feels like a conceptual mistake

        It’s a technical mistake we all decided to move forward with. At the end of the day a web browser is still a document renderer. Everything else is and has been ad-hoc - nothing new there.

        Pull requests … mailing lists … pain in my butt

        Yeah, so, I’ve found for the ~50 personal things I’ve written over 15 years, maybe 2 projects have resulted in some contribution.

        Accepting files or patches is pretty easy when you’re not dealing with 10+ PRs. If you’re hitting that many PRs frequently 100% what I’m suggesting is not suitable for the project in question. 100%.

        The RSS/Atom feed works fantastically for discovery. The last “step” is making it standard for people to link their domain.com/code.xml. Don’t .well-known or similar files at the web root do this? Because then we’re all set. People can just share their domains and we can pull down what we want to see.
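        For illustration, a code.xml along these lines could just be an ordinary Atom feed, one entry per repo (everything below is a made-up sketch, not a real file of mine):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>len's code</title>
  <id>http://example.org/code.xml</id>
  <updated>2023-07-01T00:00:00Z</updated>
  <entry>
    <title>tool-sh</title>
    <id>git://example.org/tool-sh</id>
    <link rel="alternate" href="git://example.org/tool-sh"/>
    <summary>Send patches and bug reports to me@example.org.</summary>
    <updated>2023-07-01T00:00:00Z</updated>
  </entry>
</feed>
```

Any feed reader could then subscribe to someone’s domain and see new projects appear.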

        talk about and contribute

        I’ve found most useful commentary is from a few moments: 1. when you show your friends / online colleagues (via irc, microblogging, etc) 2. when you post to feeds like lobsters or ycombinator 3. when people email you about it. Again, that’s my experience for my projects.

        I really like the idea of a ticket system feed or “bug feed”. I think better though would just be a TICKETS.txt that people can add to or search easily. Again I’ve had at most ~10 issues opened for any one project I used to have on GitHub.
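        As a sketch, such a TICKETS.txt could be one dated entry per issue, append-only and greppable (this format is invented on the spot):

```
#3 OPEN   2023-07-01 someone@example.org
  Crash when input file is empty.

#2 CLOSED 2023-06-12 other@example.org
  Typo in README. Fixed in a1b2c3d.
```

At ~10 issues per project, grep is the whole search engine you need.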

        My current setup uses git daemon but this alone isn’t enough. It’d be really nice if there were a git command that could do all the administrative work to put your local git repo to a remote host and “invert” the source of truth to the remote end.

        1. 6

          My current setup uses git daemon but this alone isn’t enough. It’d be really nice if there were a git command that could do all the administrative work to put your local git repo to a remote host and “invert” the source of truth to the remote end.

          That is actually my favorite feature of sr.ht. If you run:

          git remote add origin git@git.sr.ht:~username/my-new-project
          git push --set-upstream origin main

          the server output will tell you that you just pushed to a repository that doesn’t yet exist, your changes will be saved for the next 20 minutes, and a URL is printed that you can visit to make it permanent.

          Having that command has removed almost all the friction in sharing repositories (for me).

    3. 5

      I’m curious about your thoughts on Radicle: https://radicle.xyz

      1. 2

        I’ve seen radicle before - it’s great, but not really aligning with what I’m mentioning here. I think radicle solves other problems at another level :) Thanks for mentioning it though!

      2. 1

        Seems like a nice project, though I don’t really understand it. Would it make sense to implement it at the “forge” level (for example by SourceHut or GitLab), or is it designed for individual repos?

    4. 5

      Escaping from GitHub and landing in GitLab, Codeberg, SourceHut, is no better to me ideologically.

      UX aside, that’s neglecting the fact that the business models and scale are dramatically different. The first are billion-dollar for-profit enterprises, one is a single-person company, and one is a pro-bono association.

      Depending on your priorities one or the other may be better. Different they are for sure.

      1. 1

        I mentioned they are all great services and still a large step in the right direction, didn’t I? I don’t think I neglected anything?…

    5. 4

      The second way would be to copy the GitHub UX mostly. This would exclude essentially everything minus the main code view. I’m not even sure I would allow people to see code; instead just rendering the README.md or .txt and showing the top level of files and directories, giving the illusion of the GitHub feel.

      Not sure how this is different from Gitea


      Also cgit if you want something even more minimal - https://git.zx2c4.com/cgit/

      1. 1

        Gitea is very close in terms of look for sure! Thanks for sharing the screenshots.

        Yep, I think cgit is the closest thing, as I mentioned in another comment.

        1. 1

          I’d keep an eye on Forgejo; it looks like there was a falling out between the maintainers and the business folks, or something like that.

    6. 4

      Have you heard of Shithub? GitShed reminded me of it. The Plan 9 folk mainly use it and although you can browse it on the web, the interface to it is implemented by writing files on a remote server.

      1. 3

        Sounds a bit like gitolite.

        At my previous job, just over 10 years ago, I set up a proof-of-concept git hosting service using gitolite, with gitweb for browsing. It was a bit unusual because it had a separate Unix account running gitolite for each department, college, or research group that wanted one. This meant that the gitolite administrivia was distributed, so my admin work was negligible.

        Part of the point of it was to demonstrate that there was a strong desire for git hosting in the university, even if the web UI was negligible and the admin UX was abominable. And it succeeded, even if it took 5 years for management to notice 🤓

      2. 1

        I have! The 9front guys always seem ahead of the game.

    7. 3

      I don’t know if I understand your post correctly, but I made something kinda similar recently - gituwa - it generates a static “main code view” + one screenshot (I believe it’s super important) + public git clone via https. Of course it lacks the whole aspect of allowing other users to host repos :D

      I assume it will be FOSS - if you need some help message me.

      1. 2

        Yo… that is almost precisely what I’m looking for. It’s just missing GitHub styling! At least this is half the solution I was talking about. It’d be nice if the tool took care of making the repo bare and whatnot too :)

        FWIW I love the current styling. The GitHub styling requirement is purely to satisfy everyone else.

        You’ve definitely gone “further” by allowing project navigation!

    8. 2

      (Unfortunately this means all those open-source “git forges” too. They are all web-based. Escaping from GitHub and landing in GitLab, Codeberg, SourceHut, is no better to me ideologically. At least the alternatives you can self-host which is large step.)

      No, they’re not? They do provide a web UI, yes. But they also provide very comprehensive APIs, with which you can integrate them with your IDE of choice (where by IDE, I mean anything that’s extensible and/or programmable, Vim and Emacs included, not just VSCode and its browser/Java-based ilk).

      A lot of developers do not interact with the GitHub/GitLab/Gitea/etc web ui. They use the integrations within their IDEs, and those are often quite fantastic. Partly because they’re well integrated into the IDE.

      The second way would be to copy the GitHub UX mostly.

      Don’t. Copy the API instead (or roll your own, just have an API). This lets people use it from anywhere, and it will look nice, because they’ll remain within their familiar environment: their IDE. No need to install anything, if you provide an API compatible with any of the existing forges: people can just point their IDE to yours, job done. And there’s no need for a web browser, either!

      All in all, I don’t see the appeal of Yet Another ForgeShed. I don’t see anything compelling in the article, sorry. Tell people to do things manually, and they’ll walk away.

      1. 1

        I checked out your profile just now, to see if you somehow link to your code in a way that doesn’t involve a web browser: you don’t. https://git.madhouse-project.org/algernon/

        How am I expected to have discovered this without a web browser? How am I expected to look at these projects without a web browser? I have to click on each project then look for the clone link.

        A lot of devs only use the GitHub / GitLab / etc. web UI too. I would say half the time is spent rebasing and committing in their IDEs/terms, and the other half PR commenting.

        There are currently no “source sheds” at all. I’m more than glad if you’re aware of not just one but many :) Please share with us!

        The closest may be cgit. It would explain why a lot of people use cgit.

        1. 4

          I checked out your profile just now, to see if you somehow link to your code in a way that doesn’t involve a web browser: you don’t.

          But I do! Every repository URL on my forge is a cloneable repo. Whenever I link to any of them, they’re both cloneable directly, and when viewed from a browser, they show my Forge’s web view. Judging by my logs, most visitors hit my forge directly at the various project repos, very rarely anywhere else.

          But even if they land anywhere else, my forge has an API. Every single page on my forge has a link to the API docs in the footer, too.

          How am I expected to have discovered this without a web browser?

          You can search all public users with the API:

          ❯ curl -s 'https://git.madhouse-project.org/api/v1/users/search?q=algernon' \
            | jq -r '.data[].username'

          How am I expected to look at these projects without a web browser?

          You can list my (public) repos with the API! And the response even includes a clone URL:

          ❯ curl -s 'https://git.madhouse-project.org/api/v1/users/algernon/repos?limit=1' \
            | jq -r '.[].clone_url'

          And you can do a whole lot more:

          You can look at the files in a repo:

          ❯ curl -s 'https://git.madhouse-project.org/api/v1/repos/algernon/telchar.org/contents' \
            | jq -r '.[].path'

          You can even look at files!

          ❯ curl -s 'https://git.madhouse-project.org/api/v1/repos/algernon/telchar.org/contents/README.org' \
            | jq -r '.content' | base64 -d
          # the actual contents of README.org

          …and you don’t need to check out the repository! You can discover a whole lot about it before you clone it. You can read the README, see if it’s something interesting, and then clone it. Can save you a lot of time and bandwidth! All without a browser, without needing to clone the repo. Just curl, jq, and coreutils.

          There’s a whole lot more the API lets you do, without any browser involvement at all. You can open issues, PRs, you can comment on either. You can check the status of CI runs, look at branches, releases, release notes, tags, and what have you. Some of those require an account - but even that is optional. You can opt to not use my forge to report an issue or send a PR. You can email me, too. My email is in the commit messages, and probably in the docs too, not very hard to find. This way, you don’t need an extra account on a random forge (though, Forge Federation will help a lot in that regard in the - hopefully - not too distant future!), and after discovering the API, no browser either.

          Can your Shed do the same? Can I browse contents of your repos, without cloning them, to discover more about them? If I can’t discover anything but the name of a repo, I’m not going to bother cloning it. If your Shed provides that on a web UI, then you need a browser to discover more about any given repo, while my forge provides an API, to do it browserless.

          There are some great tooling built on top of these APIs, very good IDE integrations. No browser involved.

          You don’t even need a browser to discover what the API can do, by the way! The entire API - including its documentation - is available in a neatly packed Swagger JSON! It’s not very human-friendly, but it’s readable. And even better: there is tooling to work with swagger/openapi-based APIs! Tools that can generate API clients based on this JSON, so you can work with it from your favourite programming language, and create the integrations you want. Tools that can generate documentation that you can read, in a format you want.

          There may even be existing API clients for your language! For example, the Gitea Go SDK is an official library for the API, for the Go language! And there’s plenty more, if code generation isn’t your cup of tea.

          So, basically, the only thing you need to know about my forge to be able to use it without a browser is that it’s a Forgejo instance, which is a Gitea fork, so you can use any kind of tooling available for Gitea. I suppose I could make that more obvious, like including it in an HTTP header or something.

          With that said, every single page on my forge has both the “Powered by Forgejo” link, and a link to the API in its footer. I think that’s fairly discoverable, so you only ever need 1 visit to my site with a browser.

          Unfortunately, to browse the online API, you do need a JavaScript-enabled browser. But once you discover it’s Forgejo/Gitea, you don’t even need the online API browser. So, lynx, or even curl works fine, if you know what you’re looking for:

          ❯ lynx -dump https://git.madhouse-project.org/ | grep -E '/api|forgejo.org'
             3. https://forgejo.org/docs/latest/
           104. https://forgejo.org/
           106. https://git.madhouse-project.org/api/swagger

          So, pretty much the same requirements as discovering your code.xml, and your Shed, except my forge provides a whole lot more, and shares the API with any other Gitea and Forgejo instance, and has API libraries in a lot of languages, a lot of IDE integrations, CLI clients, and so on.

          A lot of devs only use GitHub / GitLab / etc web UI too. I would say half the time is spent rebasing, commiting in their IDEs/Terms, and the other half PR commenting.

          …all of that can be done from the comfort of one’s IDE. Modern IDEs have very good integrations. No need to use the GitHub web UI to comment on PRs. I can do that from within my IDE. Both with GitHub, and with my Forgejo/Gitea instance. No browser involved, my Emacs works fine without one.

          GitHub, GitLab, Gitea, Forgejo, and SourceHut all have powerful APIs, with which you can do pretty much everything, without a browser. The browser is only a requirement to discover the API, a requirement shared with your Shed idea.

          There are currently no “source sheds” at all. I’m more than glad if you’re aware of not just one but many :) Please share with us!

          Let’s see… if you want something that doesn’t require a web browser: bare git repos on a webserver work quite well. They can be easily cloned, they don’t need anything server-side, and they don’t need any additional tooling on the client side, either. For discovery, you can create a git repo with all the others as submodules, or even multiple repos with different selections of other repos as submodules.

          Clone the top one (without recursing into submodules), and you get a list of all available repositories. You can then clone whichever you wish, by updating that particular submodule, or cloning it directly.

          Put a README in it, and you’re done.
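          As a sketch of that layout (all names and paths here are made up, and everything happens locally in a temp directory):

```shell
# Sketch of the "index repo of submodules" discovery idea, run locally.
set -e
cd "$(mktemp -d)"

# One project repo with a single commit (stand-in for a published repo).
git init -q hello
git -C hello -c user.email=me@example.org -c user.name=me \
    commit -q --allow-empty -m 'initial'

# The index repo: each project is registered as a submodule, so cloning
# the index (without --recursive) yields just the list of projects.
git init -q index
cd index
git -c protocol.file.allow=always submodule add ../hello hello
git -c user.email=me@example.org -c user.name=me commit -q -m 'add hello'

# Discovery: every registered project and its clone URL.
git config -f .gitmodules --get-regexp 'submodule\..*\.url'
# prints: submodule.hello.url ../hello
```

On a real server the submodule URLs would point at the public clone addresses, so fetching the index alone is enough to discover everything.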

          If you want RSS/Atom feeds, then you will need something a little more dynamic on the server side (or you can pre-generate everything, but then you overcomplicate your local development setup, imo), so something like gitweb, gitlist, or cgit would make sense.

          Still, none of those provide the kind of integrations forges do.

          If you’re not expecting many contributions, and don’t need or want the convenience of the forges, then a bare git repo or cgit/gitweb/gitlist works fine. I still don’t see the point of something between bare git repos and cgit/etc. If you want something more than cgit/gitweb/gitlist, then there are the existing forges. Gitea/Forgejo works great for small installations; they run fine on a Raspberry Pi. GitLab… I guess GitLab has advantages here and there, too? I found it too heavy for my needs, so never looked deeper into it. The few times I had to interact with it, I used its API.

          But circling back to Sheds! You want something that is:

          1. Discoverable
          2. Doesn’t require a web browser
          3. Provides some kind of access to some files, without cloning, for discovery purposes (preferably without a browser)
          4. Provides enough information to clone repos
          5. Provides enough information to let potential contributors know how to contribute back
          6. Provides RSS/ATOM feeds

          Did I summarize it well?

          How about Gitea/Forgejo, then? Don’t expose the web ui, just the API, and build a minimalist UI on top of it yourself. The API lets people do almost anything (and you can disable the unnecessary parts of it too, simply by not exposing those routes), it provides programmatic discoverability, RSS/ATOM feeds, takes care of cloning too.

          Your own UI on top could provide a landing page, the README renders, and whatever else you may wish to expose to browser-users.
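          Not exposing the web UI could be as small as a reverse-proxy rule in front of the forge; a hypothetical nginx sketch (the hostname and the 3000 upstream port are assumptions), keeping the API and the smart-HTTP clone paths while hiding everything else:

```nginx
server {
    listen 80;
    server_name git.example.org;

    # The Gitea/Forgejo API lives under /api/.
    location /api/ { proxy_pass http://127.0.0.1:3000; }

    # git clone/push over smart HTTP uses /<owner>/<repo>.git/... paths.
    location ~ ^/[^/]+/[^/]+\.git(/|$) { proxy_pass http://127.0.0.1:3000; }

    # Everything else (the web UI) stays hidden.
    location / { return 404; }
}
```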

          Or, you could skip that step, and just expose the web UI. You can disable issues and pull requests, and document in your README how to contribute or report issues.

          This way, those who want to use a browser can, those who’d prefer an API can do that too, and those who’d like a combination of the two can do so as well. Those who’d rather email you than use issues or PRs (assuming you have issues / PRs enabled) can still do that. You get to satisfy a lot of people, without inventing anything new and unique that has no existing tooling or integrations.

        2. 3

          How am I expected to have discovered this without a web browser?

          I think it’s helpful if you break this question down. In your post, you say:

          • Chromium, a web browser, is ~37.7 million lines of expressions
          • Linux, the kernel, is ~27 million lines of expressions
          • There is no way in hell I could write my own web browser

          This is true of a full-featured web browser capable of implementing all the requirements of Github, but that’s not at all required to view https://git.madhouse-project.org/algernon/. I’ve tried loading this URL in netsurf and eww, two browsers written with non-megacorp-scale resources, and they look fine!

          Hell, you could even find the data you need using curl in a pinch.

          The problem isn’t “web sites”; the problem is JS applications.

          1. 3

            The problem isn’t “web sites”; the problem is JS applications.

            Yeah totally, Github’s new source viewer is dog slow IMO.

            This partly motivated me to write my own source viewer :-P

            http://travis-ci.oilshell.org/github-jobs/5169/cpp-small.wwz/_tmp/src-tree/www/index.html (this link may disappear in a while, but every future release will have permalinks)

            I actually wrote syntax highlighters, or “language segmenters”, for 5 languages in re2c. That was kinda fun, and I think the results actually look good (not too noisy).

            http://travis-ci.oilshell.org/github-jobs/5169/cpp-small.wwz/_tmp/src-tree/www/doctools/micro_syntax.re2c.h.html (e.g. it somewhat understands its own re2c blocks!)

            The next step is to do code outlining, probably with parser combinators over low-level lexers

            This was an experiment along the lines of semgrep and uchex, which I posted here


            Basically I think you can do a surprising amount of “polyglot” lexing/parsing, for Python / C++ / custom grammars and ASDL

            Github’s source viewer won’t be able to understand our custom languages like ASDL, and its navigation will always be approximate unless it actually builds your software, which it doesn’t. That is, “jump to definition” and “find all uses” are really equivalent to the halting problem, because software build systems are arbitrary code

            So if Github is going to be approximate and slow, I’m writing something approximate and fast. (And it’s not the only motivation; writing separate tools also feeds into the language design for YSH)

            We also use sourcehut, but it probably won’t get code outlining, and the syntax highlighting is only OK IMO


          2. 1

            Hell, you could even find the data you need using curl in a pinch

            This is not really ideal for anyone but the truly desperate.

            A point to be made here is that on a web forge it’s necessary to navigate a set of documents to find things. With what I propose, there isn’t: you just look at code.xml and check out the git repos you want. All of which you can do with curl and git.
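            For illustration, assuming a hypothetical code.xml that just lists clone URLs (this format is invented for the sketch, not part of any spec):

            ```shell
            # A made-up code.xml index: one <repo url="..."/> element per project.
            cat > code.xml <<'EOF'
            <repos>
              <repo url="https://example.com/gitshed-demo.git"/>
              <repo url="https://example.com/another-project.git"/>
            </repos>
            EOF

            # In real use you'd fetch it first: curl -sO https://example.com/code.xml
            # Listing the clone URLs then needs nothing beyond grep and sed:
            grep -o 'url="[^"]*"' code.xml | sed 's/^url="//; s/"$//'

            # ...and checking a project out is just: git clone <one of those URLs>
            ```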

            netsurf and eww together are still vastly more complicated than curl.

            The problem isn’t “web sites”; the problem is JS applications.

            I guess you mean websites with JS on them - definitely doesn’t help the problem!

    9. 2

      your proposed set of features sounds good to me. also I think things like issue trackers, pull requests, discussion threads etc are better done as separate tools rather than as part of the main tool.

      1. 1

        And to be fair that’s what SourceHut does from what I understand.

        1. 1

          ah, certainly

    10. 1

      One often overlooked feature from the distributed crowd is how useful it is for GitHub download URLs to be stable. This is especially true from a package manager’s standpoint.

      Whenever an upstream URL breaks, the vast majority of the time it’s because the project was hosted on a personal domain that disappeared, or a sysadmin at a university did some cleanup (not scientific, but based on my personal experience). It’s hard to beat a well-maintained centralized system for that.

      Of course one day GitHub is going to break all download URLs and that’s going to be a big problem :)

    11. 1

      any underlying VCS

      I’ve said it before, but I’m pretty sure folks expect there to be a UI atop Git, else it’s not useful to them. …And to be fair, I find rummaging thru files in these UIs for one-offs to be a lot simpler than checking out. That said, I feel as tho the best bet for trying to reshape the situation would be removing the Git part, given that expectation, even if there should not be any reliance on the GUI part. The patch-theory-based options, Darcs & Pijul, both allow users to work in a more decentralized manner, since differing patch order still gets you to the same pristine hash at the end. A lot of the need for centralization, it would seem, is related to merge conflicts in revision history: where amends & order matter, folks like to point to the central server as the source of truth.

      1. 2

        And to be fair, I find rummaging thru files in these UIs for one-offs to be a lot simpler than checking out.

        Yeah, I’ve found the same, but I think it’s a system setup issue more than anything. Would it be simpler if you clicked “git://…” and your file explorer opened (so it downloaded in the background and loaded up)? Because I know I would. And then files would open in your local text editor or image viewer.

        I should probably devise a nice little script that people can use to enable this flow.
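        Something like the following might work as that script — a rough sketch, assuming a Linux desktop where xdg-open launches the file manager; the cache layout and the `open_git_url` name are mine, not an established convention:

        ```shell
        # Hypothetical handler for clicked git URLs: clone (or refresh) the repo
        # into a cache directory, then hand the working tree to the file manager.
        open_git_url() {
            url="$1"
            # Derive a directory name from the last path component of the URL.
            name=$(printf '%s' "$url" | sed 's|.*/||; s|\.git$||')
            cache="${XDG_CACHE_HOME:-$HOME/.cache}/gitshed/$name"

            if [ -d "$cache/.git" ]; then
                git -C "$cache" pull --ff-only    # refresh an existing copy
            else
                git clone --depth 1 "$url" "$cache"
            fi
            xdg-open "$cache"                     # open in the default file manager
        }
        ```

        Wiring it up to clicked links would be desktop-specific — on freedesktop systems, roughly an `x-scheme-handler` entry in a .desktop file.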

        1. 5

          Yeah, I’ve found the same, but I think it’s a system setup issue more than anything. Would it be simpler if you clicked “git://…” and your file explorer opened (so it downloaded in the background and loaded up)?

          I’d prefer not to. Cloning a repo is potentially far more resource intensive than viewing a file or two. I don’t want to clone an entire repo to look at its README (or LICENSE, or COPYING, or whatever), or its language-specific package manager file to see if it pulls in crazy dependencies I’d rather not deal with.

          Sometimes I just want to look at the file list alone, and decide which one I want to have a quick look at, without having a local copy of the entire thing.

          Not to mention that at some point, that local download better be garbage collected. Or auto-updated: if I look at it a week later, I don’t want to see an obsolete version. But then what happens if the repo got force pushed? What happens if I want to look at the same file in different branches? Does it clone it twice? Does it clone it once, but checks out a different branch? What if I want both open? What if the repository is huge? Do I seriously need to wait for a ~500MB repository to clone just to view its README?

          It’s not as simple as “clone, display, done!”, I’m afraid. Not unless you have infinite disk space, bandwidth, and time. I have none of those.

          1. 1

            Ah. I was assuming that git:// would just do a shallow clone or whatever was minimal for viewing in a quick manner.

            1. 2

              You can do a shallow clone, yes, but that is still a lot more expensive than looking at one file, it still needs to be garbage collected at some point, etc.
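              For scale, the gap can be sketched like this — a local throwaway repo stands in for the remote, and note that `git archive --remote` only works where the server permits upload-archive (GitHub, notably, does not):

              ```shell
              # Stand-in local repo for the "remote" (path and contents illustrative).
              tmp=$(mktemp -d)
              git init -q "$tmp/big-repo"
              echo 'hello' > "$tmp/big-repo/README.md"
              git -C "$tmp/big-repo" add README.md
              git -C "$tmp/big-repo" -c user.name=t -c user.email=t@t commit -q -m one
              git -C "$tmp/big-repo" -c user.name=t -c user.email=t@t commit -q --allow-empty -m two

              # A shallow clone fetches only the latest commit, but it is still a
              # clone that sits on disk afterwards and eventually needs cleaning up.
              git clone -q --depth 1 "file://$tmp/big-repo" "$tmp/shallow" 2>/dev/null
              git -C "$tmp/shallow" rev-list --count HEAD   # prints 1: truncated history

              # One file, no clone at all -- but only where the server allows it:
              git archive --remote="$tmp/big-repo" HEAD README.md | tar -xO
              ```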

        2. 1

          That’s… a pretty good idea. I like it.