1. 3

    Note that it is replacing the old URL https://jlk.fjfi.cvut.cz/arch/manpages/

    1. 1

      Is it normal if it’s just a blank page? 🤔

      1. 1

        Once you have a reasonable volume of pull requests, the branches of those pull requests get outdated quickly, meaning they have to be rebased before merging, or every change will have an extra merge commit in the history.

        And why are merge commits bad again?

        1. 9

          For feature branches, a merge commit that updates against the default branch is just noise in the commit history. This is especially bad if the branch needed to be updated multiple times. In my opinion it is always better to rebase against the default branch to keep the commit history clean. Rebasing before a merge is often good practice anyway, e.g. to squash commits or rewrite commit messages.
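
          For example, a minimal sketch of that kind of cleanup, with placeholder branch names:

            git fetch origin
            git rebase origin/main      # replay the feature branch on top of the default branch
            git rebase -i origin/main   # optionally squash fixups and reword commit messages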

          1. 2

            Indeed, I consider a feature branch being merged into a main branch without a merge commit to be an antipattern that makes the history less useful.

            1. 5

              This is not about the merge commit of the feature into main, it’s about merge commits in the feature branch when updating against main.

              1. 1

                Oh, I see. Yeah, I usually treat feature branches as roughly a patch series, so I keep merges out of that particular kind of flow, personally.

              2. 3

                The history should capture as much of the human process as possible without also encoding the details of how git creates that history.

                Thus, rebases and not merge commits.

                1. 1

                  If you really want to keep track of merges, you can use git rebase and then git merge --no-ff.

                  If a single feature is developed and integrated progressively, having merge commits will add a lot of useless commits to the history. It’s an aesthetic choice, that’s all.
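
                  A rough sketch of what that looks like, assuming main is the default branch:

                    git checkout feature
                    git rebase main              # linearize the feature branch on top of main
                    git checkout main
                    git merge --no-ff feature    # still record an explicit merge commit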

              1. 3

                I’ve been taking a look at email-based flows but I somehow find it quite limiting when compared to something like GitLab. Let me just list a few points in no particular order:

                • How to marry email-based flows and CI? For instance, how to show CI status and how to prevent merges if CI fails?
                • How do I approve a merge (after reviewing it) but have it merged automatically upon CI completion?
                • How to associate project issue numbers in email messages? My email client has no idea what #123 means.
                • How to lock and moderate discussions on particular patches? Wouldn’t I need to be the mailing list admin for that?
                • How to make sure that old patches (which in merge request terms are outdated because someone force-pushed the feature branch) are marked as such? I imagine I’d need to send emails to all threads that currently have old patches and somehow invalidate those. Is this an automatic thing that I’m unaware of? I imagine it might get tiresome fast.

                 Maybe I’m just super ignorant about the current state of email-based flow automation, which is quite possible. Though from my current naive standpoint, it would appear that email-based flows remove a lot of the existing automation and mechanisms that are in place in GitHub/GitLab and put all of that burden on the maintainer or contributors.

                1. 2

                  In theory, few of these are particularly hard:

                  How to marry email-based flows and CI? For instance, how to show CI status and how to prevent merges if CI fails?

                  If the email workflow goes to a mailing list, it’s easy to subscribe a bot to that list and have it apply the patch and reply to the message with the CI results.
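
                   As a very rough sketch of the idea, assuming the list subscription saves each patch email to patch.eml and the tests run with make check (both placeholders; threading and cleanup omitted):

                     git checkout -B ci-run origin/main
                     if git am patch.eml && make check; then
                         result=PASS
                     else
                         result=FAIL
                     fi
                     # reply with the outcome; a real bot would also set In-Reply-To so the
                     # result lands in the original thread
                     echo "Automated CI result for this patch: $result" \
                         | mail -s "[CI] $result" list@example.org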

                  How do I approve a merge (after reviewing it) but have it merged automatically upon CI completion?

                  A bot subscribed to the list can read a signed (S/MIME / GPG / Whatever you prefer) email from one of the designated maintainers and handle the merge.

                  How to associate project issue numbers in email messages? My email client has no idea what #123 means.

                  There’s nothing (other than DKIM and friends) stopping your mailing list from rewriting the messages to include hyperlinks to your issue tracker.

                  How to lock and moderate discussions on particular patches? Wouldn’t I need to be the mailing list admin for that?

                   Yes, but presumably you would be list admin on mailing lists for the project for which you are maintainer. That said, preventing someone resurrecting a thread on a mailing list is basically impossible: you can auto-bounce messages with the same title and thread ID, but there’s nothing stopping someone starting a new thread. On the other hand, the same applies on GitHub: just because you close and lock an issue or PR doesn’t prevent someone from filing an identical one.

                  How to make sure that old patches (which in merge request terms are outdated because someone force-pushed the feature branch) are marked as such? I imagine I’d need to send emails to all threads that currently have old patches and somehow invalidate those. Is this an automatic thing that I’m unaware of? I imagine it might get tiresome fast.

                  I’m not quite sure what you mean here, but it’s easy to make a bot automatically post a follow-up email.

                   By the time that you’ve done all of this, however, you’re depending heavily on a mail client with a good threading display and you’ve implemented a large chunk of a GitHub / GitLab / GOGS / whatever system. The question that you’d have to ask yourself is what value you get from this that you wouldn’t get from the web-based version.

                  One potentially interesting intermediate step might be to write an IMAP proxy to something like GitHub, which would expose all of the PRs / Issues as threads.

                  1. 1

                    Thanks for the answers. I suppose I should have phrased my questions better. In general I was not so much interested in whether it was technologically feasible to achieve what I asked but rather which off-the-shelf tools one would use in order to get those. SourceHut probably comes close but it seems that one is then also tied to their platform.

                  2. 1

                    I’d like to have an answer on this as well.

                  1. 10

                    Nice view on how the email flow works. Though, I don’t agree with some things.

                    The only reason that merge button was used on Github was because Github can’t seem to mark the pull request as merged if it isn’t done by the button.

                    No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push. In fact, I primarily host repos at GitLab and keep a mirror on GitHub. I accept PRs on GitHub (the mirror) as well to keep things easy for contributors. I manually merge these locally, and push the updated branch to GitLab. GitLab in turn syncs the GitHub mirror, and the PR on GitHub is marked as merged in a matter of seconds.

                    …we have to mess with the commits before merging so we’re always force-pushing to the git fork of the author of the pull request (which they have to enable on the merge request, if they don’t we first have to tell them to enable that checkbox)

                     Yes, of course you have to mess with them. But after doing that, don’t even bother to push to the contributor’s branch. Just merge it into the target branch yourself and push. Both GitLab and GitHub will instantly mark the PR as merged. It is the contributor’s job to keep his branch up to date, and he doesn’t even have to for you to be able to do your job.

                    I understand that you like the email workflow, which is great. But I don’t agree with some arguments for it that are made here.

                    Thanks for sharing though!

                    1. 7

                      No, it does. I merge locally all the time, and GitHub instantly marks a PR as merged when I push.

                      In the article they talk about wanting to rebase first. If you do that locally, GitHub has no way to know that the rebased commits you pushed originally came from the PR, so it can’t close them automatically. It does work when you push outside GitHub without rebasing tho.

                      1. 2

                        IIRC, can’t you rebase, (force) push to the PR branch, then merge and push and it’ll close? More work in that case but not impossible. Just if you rebase locally then push to ma(ster|in) then github has no easy way to know the pr is merged without doing fuzzy matching of commit/pr contents which would be a crazy thing to implement in my opinion.
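
                         Roughly, assuming the contributor’s fork is already set up as a remote and maintainer edits are allowed (remote and branch names are placeholders):

                           git fetch contributor
                           git checkout -B feature contributor/feature
                           git rebase main
                           git push --force-with-lease contributor feature   # update the PR with the rebased commits
                           git checkout main
                           git merge --ff-only feature
                           git push origin main                              # GitHub can now mark the PR as merged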

                        1. 3

                          Typically the branch is on someone else’s fork, not yours.

                          1. 2

                             In GitHub, you can push to another’s branch if they have made it a PR in your project. Not sure if force push works, never tried. But I still feel it’s a hassle, you need to set up a new remote in git.

                            In Gitlab, apparently, you have to ask the branch owner to set some checkbox in Gitlab so that you can push to the branch.
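
                             For reference, that remote setup is a one-time step per fork, roughly like this (fork URL and branch name are placeholders):

                               git remote add contributor https://github.com/contributor/project.git
                               git fetch contributor
                               git push contributor HEAD:feature   # only allowed when the PR permits maintainer edits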

                            1. 3

                              In Gitlab, apparently, you have to ask the branch owner to set some checkbox in Gitlab so that you can push to the branch.

                               That is the case in GitHub as well (“Allow edits from maintainers”). It is enabled by default so I’ve never had to ask someone to enable it. Maybe it is not enabled by default on GitLab?

                              1. 1

                                I can confirm that it is disabled by default on GitLab.

                    1. 2

                       Too bad I didn’t answer this in time, because I have a similar story to share. At work we moved from Mercurial + Phabricator to Git + GitHub for a project (client choice).

                      We use trunk based development, meaning very short-lived feature branches. This is close to what you are looking for (fast-forward merge only).

                      With Phabricator, patches are sent, which is very similar to the email workflow.

                       However, the GitLab/GitHub interface is easier to understand for beginners, and even Phabricator, which is a web interface, is complicated.

                       For the transition between Mercurial and Git (only the trunk got migrated first), I had to import patches with git am and I can say that it is not so easy: even with -3 it may refuse to do a 3-way merge on patches that should apply. In comparison, git rebase --onto works perfectly for rebasing a short-lived branch (you can use remotes to get the patches locally if needed).
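
                       For illustration, the two approaches look roughly like this (file and branch names are placeholders):

                         git am -3 0001-some-change.patch               # -3 falls back to a 3-way merge when context is missing
                         git rebase --onto new-trunk old-base feature   # replay the commits after old-base onto new-trunk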

                       I think what is important is to have a standardized workflow, and not a proprietary one pushed by GitHub. Having a client in a terminal for patch reviews, and some kind of tool to automate rebases and complicated tasks (Git is difficult to use well, after all), would be a good alternative to using emails (if email is just the transport, you don’t need to know whether it is or isn’t used). Then this could be applied to GitLab, GitHub, or anything.

                      1. 1

                        I recently started writing code in Go and I don’t understand what is changing.

                         I have packaged a Go application where I needed to extract sources into a specific directory in $GOPATH/src, and it has a big Makefile that takes care of downloading dependencies and building the application, so everything else was hidden from me.

                        Now, when I write an application, I don’t have my code in $GOPATH, and the directory structure there just seems to be a big dependency cache, which is versioned (much better than with Python where you have to use virtualenvs to avoid conflicts, or with native dependencies in e.g. C where it’s even harder to work with).

                         So what is changing for me, or for others? What was the advantage of having the code sitting in $GOPATH? Are you using separate $GOPATH directories for each application you develop on?

                         I have to use a go.mod file for managing dependencies, and it’s nice to have packaging managed directly by the language distribution, but the thing that troubles me is how a package is named after where its sources are located, which may not be right for an application having different source channels (e.g. private/public mirrors). I see two tricky solutions to this problem: patching go.mod files with replace directives, or maintaining a directory structure with relative paths in replace directives (but you don’t always work on every dependency in a project). Then I understand why one would have their code sitting in $GOPATH/src.
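
                         For example, a hypothetical go.mod that redirects a public import path to a private mirror or a local checkout:

                           module example.com/myapp

                           go 1.16

                           require github.com/some/dependency v1.2.3

                           // point the public path at an internal mirror...
                           replace github.com/some/dependency => git.internal.example.com/mirror/dependency v1.2.3

                           // ...or at a local checkout while developing (filesystem paths take no version)
                           // replace github.com/some/dependency => ../dependency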

                        1. 1

                          I currently use different software and services to get news in different topics:

                          • Tiny Tiny RSS, an RSS/Atom aggregator, with many subscriptions (single developer blogs, company blogs, news aggregators, comics)
                          • social networks (Twitter, Reddit, Mastodon)
                          • services to share links (this website, a private Discord server with friends…)

                          This is too much, I often miss important news or read things I could avoid reading at all.

                          There are also sources I follow that could be used to trigger actions automatically:

                          • release notes of software could be used to trigger rebuild and deploy to my servers
                          • new comic releases could be republished to some places

                          I realize that all this stuff could be assembled to build a framework that I could use to retrieve information from different sources, normalize it, sort it by importance, trigger actions. Some things I’d like to do:

                          • be able to programmatically add a source which produces certain types of content (this could be important daily news, an article on a topic, a tutorial, a comic, a software release, a message sent by someone, etc…)
                           • detect that multiple pieces of content refer to the same piece of information, which happens frequently when following multiple news aggregators
                          • be notified of some important things like an uncommon topic becoming hot on different social platforms (e.g. related to world tensions, etc…)
                          • have a single front-end to follow news (Tiny Tiny RSS suffers from a few issues)
                          • centralize everything I’ve read in a single place (history on different browsers, social media links and likes, etc…)

                           I’ve looked at different software that could provide some of the needed features (e.g. Weboob), but my conclusion is that I need to write specifications for a type system for applications to be based on, so that I could adapt existing libraries and front-ends to build a larger project (since I will never be able to build everything from scratch).


                           Recently, I’ve been thinking that for implementing content aggregators and converters, the FAAS “paradigm” seems promising, so I’m looking at knative although this project isn’t stable yet. Coupled with GitLab CI, I feel like I could easily deploy working code and finally start making something (I’ve been writing down ideas and talking about it for at least 4 years).

                          1. 18

                            A trimmed down soft fork of firefox. Removing everything not necessary. Including everything marketing/saas related like sync, pocket, studies, random advertising, and so on. All the unnecessary web api stuff like “payment requests”, “battery”, “basic authentication”. Removing APIs that don’t need support anymore like ftp.

                            My goal here is to move towards something that doesn’t have behavior I don’t like (nagware and privacy leaks), and has a smaller surface area for security vulnerabilities.

                             The hardest problem with this is keeping it easy to sync up with Firefox for the code that you haven’t deleted (using an old version isn’t an option because of security). My initial work on this (which I don’t anticipate having time to pick back up) was a system that stored “patches” in a version control system, where a patch was a directory that contained a list of files to delete/create, diffs to apply, and scripts to run. This meant dealing with the inevitable “merge conflicts” was substantially less of a problem, since the system could be smarter than a diff tool (e.g. instead of applying a regex once and making a diff, I could make a script that said “apply this regex everywhere”).
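
                             One possible shape for such a patch stack, purely as a sketch (layout and names are hypothetical, not the actual system):

                               # patches/010-remove-pocket/
                               #   delete.list    paths to remove
                               #   changes.diff   a plain diff to apply
                               #   run.sh         a script, e.g. “apply this regex everywhere”
                               for p in patches/*/; do
                                 [ -f "$p/delete.list" ]  && xargs rm -rf < "$p/delete.list"
                                 [ -f "$p/changes.diff" ] && patch -p1 < "$p/changes.diff"
                                 [ -f "$p/run.sh" ]       && sh "$p/run.sh"
                               done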

                            1. 5

                              In terms of reducing surface area, I’d love to see a build without WebRTC, or even websockets. Things like the recent report about ebay doing local machine portscans seemed to be viewed as poor behavior on their part rather than a fundamental firewall breach allowed by overzealous incorporation of poorly thought out standards. The web is rapidly becoming insecure by design, but the whole reason to use the web is to provide a sandbox for untrusted remote content.

                              1. 2

                                If I was developing it, I would be pretty worried that a change like that would break too much of the web to be useful. A “better” browser is only useful if it actually has users.

                                 You’d have to actually measure how many sites that would break to be sure. One option might be (if it was reasonably easy to maintain, not sure) to throw it all behind a webcam-like permission that needs to be granted to each site and is requested upon use.

                              2. 1

                                There is a fork called Pale Moon. The issue is that the code base is too large for the maintainers and it probably suffers from very old memory hazard bugs that don’t apply to Firefox anymore (some recent CVE still apply to Pale Moon). Also it doesn’t implement many new APIs (WebExtensions, WebRTC, etc…), which is a design choice you suggest.

                              1. 1

                                <html lang="en" style="display: none" > seriously?

                                1. 1

                                  Is that relevant to the article? Or is that just source for the site?

                                  1. 3

                                    Had to remove the attribute to actually read the article.

                                1. 1

                                  You might be tempted to reinvent the wheel rather than contributing to an existing project. When does it become interesting to contribute?

                                  1. 7

                                     I’ve heard several people say that they won’t start a business if the idea is taken. But if one person earns money with a hotdog stand, that doesn’t make it a bad idea to start another, does it?

                                    Similarly, there is ample room for multiple open source projects. Having several types of one application can be a strength, since over time, one of them may become the most useful or popular one.

                                    I believe there is always room for not only innovation, but also twists on existing ideas.

                                    If there ever is to be a better wheel, people should allow themselves to experiment with wheel reinventions.

                                    1. 2

                                      I think it boils down to 2 things:

                                      1. There’s no way competition isn’t accepted. Even gmail has competition. As long as you can take a slice of the market that is enough for you, you’re good to go.

                                      2. There’s actually very little chance your value proposition is 100% the same as your said competition. You might be offering a very similar service, but not targeting the same population, or having a slightly different set of features, or even different Terms and Conditions.

                                       This, as you point out, works even better when applied to open source, because most people aren’t reaching for a market; they do what they think is best, and the barrier to entry is very low.

                                       In open source, I like to believe it’s not competition, it’s challenging peer projects :)

                                    2. 5

                                      When reinventing the wheel is too difficult and would take ages.

                                      I was excited to reinvent my own Wayland compositor, spent some time writing Rust bindings to libweston, got a little demo working, realized that I would probably never get to something full-featured with all the bells and whistles, and went on to discover Wayfire and join its development :)

                                    1. 2

                                       Another idea that I’ve seen is highlighting the current context differently, for example coloring the hovered function and graying out the others.

                                       I guess you could write plugins to have all these options available, but it would become messy to maintain everyone’s needs.


                                      PS: Yet another static blog requiring JS to display static pages. 🙁 Rendered HTML is included in a <noscript> element but the CSS makes it broken.

                                      1. 1

                                           PS: Yet another static blog requiring JS to display static pages. 🙁 Rendered HTML is included in a <noscript> element but the CSS makes it broken.

                                        Works for me, with uMatrix and all JS blocked by default. The page doesn’t have margins, which makes it look a bit ugly, but the text is readable.

                                      1. 6

                                         I don’t understand why a lot of websites, such as Medium, now require JavaScript to load images. This is a basic feature of web browsers!

                                         I heard about “lazy loading”, but I never had any issue with pages with a lot of images (except maybe very long pages? that should be the browser’s issue to handle). Recent browsers support the loading=lazy attribute.
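
                                         For example, with no JavaScript involved:

                                           <img src="photo.jpg" alt="A photo" loading="lazy">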

                                        1. 3

                                           It’s mainly for tracking and collecting data. See those videos that pop up on news sites that have no relation to what you are reading there? A lot of times that’s what it is. Analytics. It’s harder to block those on videos.

                                          1. 2

                                             Trying to improve, we make things worse…

                                          1. 3

                                            Haha, it’s funny, I build Firefox with export MOZ_REQUIRE_SIGNING=0 so I wasn’t affected.

                                            1. 2

                                               This is amazing because I was looking for this project a few months ago. One of the members of the project is from my school and I met them, but we didn’t keep in touch.

                                              1. 3

                                                I removed Disqus because I might have gotten 3 comments in total during the time I had it.

                                                If someone wants to discuss something on my blog, they can DM me on Twitter ;)

                                                1. 1

                                                  I wonder if it’s possible to leverage Twitter as a sort of comments section. Once you publish an article you post a link on Twitter, then embed that tweet into the page along with its replies.

                                                  Of course embedding a tweet will come with its own bloat which doesn’t solve OPs issues (including privacy), but it could be an interesting alternative to using Disqus.

                                                  It’s probably better to just link to the tweet and have people go to Twitter to respond.

                                                  1. 2

                                                    You don’t control twitter, and you shouldn’t trust twitter not to censor your comments about your blog, or other people’s comments about your blog, on their platform. Twitter does this all the time for all sorts of reasons. This policy also forces any potential commenters to have a twitter account, which means they have to either give twitter their phone number, or go to some effort to spoof one. There was one time when I wanted to talk to someone I knew from university years ago after seeing a blog post of theirs. Their only public contact information was twitter, and trying to sign up for twitter in order to send them a brief hello was how I discovered the phone number thing.

                                                    1. 1

                                                      The requirement for having a Twitter account is a valid concern.

                                                      On the other hand, anyone who allows completely anonymous comments is asking for trouble. When I used Disqus, commenters had to have a Disqus identity (which is separate from an account, you could auth with other services - part of Disqus’ value-add).

                                                      Twitter is a bad example for this, for many reasons, but I personally would not have any problems enforcing that commenters to my blog had to have a Github account, for example. This is similar to my personal informal requirements for extending a Lobsters invite.

                                                      1. 2

                                                        Just remember that any external service that you outsource your commenting authorization to is an external service that could screw with your potential commenters in any way they like. If you believe that Github will always make exactly the same decisions about who they will allow on their service as you will about who you will allow to comment on your blog, then you have more faith in Github than I do.

                                                    2. 2

                                                       I thought about this, and I think the only suitable way of doing it would be to add a script to each blog page that loads data from a server-side program.

                                                      The deploy script would publish on Twitter/Mastodon/others and save the social network links to a database (like in a text file in the blog repository itself, for easy backup?) so that the server-side program is able to check and load comments on the blog page.
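
                                                       A rough sketch of the publish-and-record step, assuming a Mastodon access token in $TOKEN (instance URL, error handling and the Twitter side omitted):

                                                         url=$(curl -s -H "Authorization: Bearer $TOKEN" \
                                                           --data-urlencode "status=New post: https://blog.example.com/my-article/" \
                                                           https://mastodon.example.com/api/v1/statuses | jq -r .url)
                                                         # record the thread next to the article so the server-side part knows where to look
                                                         echo "my-article $url" >> comments-threads.txt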

                                                       But you’ll have to deal with moderation. That’s why I didn’t code it, although I had the idea. If you want to accept or refuse comments before they are shown on your blog, you can also use that time to copy and paste comments from these social networks manually, or use a script just for that task.

                                                      1. 1

                                                        Maybe… with some hacky plumbing that would be removed the next time Twitter decides to change their API…

                                                        I guess an IFTTT to publish a tweet, then a link to that “discuss this article on Twitter!” would work, for some values of work.

                                                    1. 3

                                                      Respectfully, is that something an org can brag about?

                                                      The time-to-patch metric heavily depends on the nature of the bug to patch.

                                                       I don’t know the complexity of fixing these two vulns; surely fixing things fast is something to be proud of, but if they don’t want people pointing fingers at Mozilla when a bug stays in the backlog for more than a week, they shouldn’t brag about it when one doesn’t in the first place.

                                                      1. 18

                                                        Assuming that the title refers to fixing and successfully releasing a bugfix, a turnaround of less than 24 hours is a huge accomplishment for something like a browser. Don’t forget that a single CI run can take several hours, careful release management/canarying is required, and it takes time to measure crash rates to make sure you haven’t broken anything. The 24 hours is more a measure of the Firefox release pipeline than the developer fix time; it’s also a measure of its availability and reliability.

                                                        1. 10

                                                          This. I remember a time when getting a release like this out took longer than a week. I think we’ve been able to do it this fast for a few years now, so still not that impressive.

                                                        2. 6

                                                          As far as I can tell, the org isn’t bragging; the “less than 24h” boast is not present on the security advisory.

                                                          1. 1

                                                            To be fair, you’re right.

                                                          2. 2

                                                            Also, the bugs are not viewable, even when logged in,

                                                            so it’s hard to get any context on this.

                                                            1. 2

                                                              It is possible to check the revisions between both versions, and they do not seem so trivial.

                                                              These are the revisions (without the one that blocks some extensions):
                                                              https://hg.mozilla.org/mozilla-unified/rev/e8e770918af7
                                                              https://hg.mozilla.org/mozilla-unified/rev/eebf74de1376
                                                              https://hg.mozilla.org/mozilla-unified/rev/662e97c69103

                                                              1. 1

                                                                Well, sorta the same, but with the context being them fixing pwn2own security vulnerabilities in less than 24 hours, 12 months ago:

                                                                https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/

                                                              2. 2

                                                                Respectfully, is that something an org can brag about?

                                                                I always assume it’s a P.R. stunt. Double true if the product is in a memory-unsafe language without lots of automated tooling to catch vulnerabilities before they ship. Stepping back from that default, Mozilla is also branding themselves on privacy. This fits into that, too.

                                                                EDIT: Other comments indicate the 24 hrs part might be editorializing. If so, I stand by the claim as a general case for “we patched fast after unsafe practices = good for PR.” The efforts that led to it might have been sincere.

                                                              1. 2

                                                                I’m @Exagone313@share.elouworld.org, you can add me directly from this link.

                                                                I run my own (private) instance of Mastodon since around when it became really known. I don’t publish much and I prefer to republish toots or sometimes links. Obviously I see less activity than on Twitter (where I follow more accounts), but with the recent addition of relays, it seems to be better. And there are saner communities than on Twitter; that’s why I keep both (Twitter for daily information, Mastodon for a “safe” space to talk).

                                                                1. 3

                                                                  Since I needed a space to write some code snippets, configuration settings, etc., I made a page for that on my (recently made) blog (built with Jekyll). It’s just a beginning, not really made for others: things need to be improved, and it’s missing a menu, for example.

                                                                  1. 2

                                                                    Since I use Gandi as the registrar for my domain names, one of the oldest registrars in France (and one that supports free software and associative projects), and since it offers mail services (for people, not bulk email sending) at no additional cost, I use Gandi Mail for my new accounts (I haven’t completely migrated from GMail).

                                                                    They offer 2 mailboxes per domain name (5 for older customers with existing domains), with an unlimited number of aliases per mailbox, and the aliases support wildcards (e.g. *@example.com for easily using one email address per account in a single mailbox).

                                                                    They support Sieve rules/filters (e.g. when the built-in anti-spam is not enough, or if you want to automatically send responses).
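
                                                                    For example, a small Sieve filter (folder and subject tag are just illustrative):

                                                                      require ["fileinto", "vacation"];

                                                                      # file anything the upstream filter tagged into Junk
                                                                      if header :contains "subject" "[SPAM]" {
                                                                          fileinto "Junk";
                                                                      }

                                                                      # automatic response
                                                                      vacation :days 7 "I am away until next week.";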

                                                                    They also have a paid plan if you want more storage.

                                                                    1. 1

                                                                      I use Fastmail for my primary inbox because gandi doesn’t push messages, but all my secondary accounts go through gandi - fantastic service given it’s free with the domain registration.

                                                                      1. 1

                                                                        What do you mean by “doesn’t push messages”? It feels like they do, actually.

                                                                        1. 1

                                                                          On iOS, at least, I’m using mail.gandi.net with IMAP.

                                                                          Mail for that account never arrives in the background (only after I open mail.app) whereas my other accounts deliver mail immediately regardless of what else I’m doing.

                                                                          1. 1

                                                                            I don’t have this issue on Thunderbird and K-9 Mail.