Threads for asjo

  1. 2

    What would a blog post about programming languages be without a stab at Perl? sigh

    1. 3

      It would be fun to hear how the author assumed time zone information was managed - by a committee at the United Nations?

      1. 9

        News is trickling out on https://micronews.debian.org/ as well

        1. 6

          Fun timing - I was just telling myself that I should stop not recommending/advocating Emacs to my colleagues.

          Usually I shy away from doing so, explaining that I like Emacs a lot, but not going further than that. I think primarily because I don’t want the experience of them being disappointed with Emacs and abandoning it after trying it on my recommendation.

          But I need to remind myself that I will be seeing them using Microsoft Visual Studio Code instead, because that certainly has a lot of advocacy… shudder.

          1. 1

            While I don’t actively advocate that anyone switch from any other editor, I do generally share some of the Emacs findings/tricks I discover… it often generates enough interest over time. Folks eventually try it out on their own time and come to me with questions. Having someone accessible to clear initial bumps seems to be pretty handy. It may get folks enjoying the experience sooner and, perhaps, make it all that much more sticky.

            1. 2

              This is the same for everything in programming, though. It’s largely something you can learn yourself, but having a network of people to prod with questions is extremely useful.

          1. 1

            Cute idea - it would be fun to be able to run it from a pre-push hook, or something, stopping people from pushing bad commit messages >:-)
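
            A minimal sketch of such a hook, in case anyone wants to try it - the 50-character subject limit is just an example rule, not anything from the original idea:

            ```shell
            #!/bin/sh
            # .git/hooks/pre-push (make it executable) - rejects pushes that
            # contain a commit whose subject line is longer than 50 characters.
            # git feeds "<local ref> <local sha> <remote ref> <remote sha>" on stdin.

            check_subject() {
                # example rule: subject must be at most 50 characters
                [ "${#1}" -le 50 ]
            }

            zero=0000000000000000000000000000000000000000
            while read -r local_ref local_sha remote_ref remote_sha; do
                [ "$local_sha" = "$zero" ] && continue  # branch deletion, nothing to check
                git log --format='%s' "$remote_sha..$local_sha" 2>/dev/null |
                while read -r subject; do
                    if ! check_subject "$subject"; then
                        echo "pre-push: commit subject too long: $subject" >&2
                        exit 1
                    fi
                done || exit 1
            done
            ```

            (For brand-new remote branches the range lookup fails quietly, so those would pass unchecked - a real hook would want to handle that case too.)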

            1. 1

              I use hep, followed by yay. For random text I also use ABBA followed by FLAPPA.

              When people are debugging by printing out stuff, I tell them to include some easily recognizable text. There is nothing worse than 10 lines of values and you don’t know which is which… there are enough things to keep in your head when you’re debugging! (And it’s nice to be able to search for it as well.)

              I have started telling people to come up with their own random word - that’s an easy way to tell who left some debug somewhere by accident. Examples that spring to mind are “hatt”, “strawberry” and “popcorn” :-)
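
              The search part is then trivial - with a personal marker word (the word and file below are made up), a leftover debug line is one grep away:

              ```shell
              # Simulate a leftover debug print, then find it via the marker word:
              mkdir -p /tmp/debug-demo
              echo 'print("hatt: total =", total)' > /tmp/debug-demo/calc.py

              # grep -rn prints file:line:match, so the stray debug is easy to spot:
              grep -rn "hatt" /tmp/debug-demo
              ```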

              1. 1

                I tried this with my old fav slrn, but I do need something that handles HTML formatting

                https://imgur.com/a/OQXE42V

                1. 2

                  You can configure slrn to run articles through html2text and a little s-lang, description at the bottom of the page here: https://feedbase.org/documentation/#slrn

                  It isn’t super fast, but it looks pretty good: https://koldfront.dk/misc/lobstersslrn.png

                  1. 1

                    From the README, there are two groups:

                    • lobsters - “Multipart HTML/Plain UTF-8 QP”, and
                    • lobsters.plain - that is “Plain UTF-8 QP”

                    Try out lobsters.plain I guess?

                    1. 2

                      I had recently added that. I’m tempted to turn .plain into ISO-8859-1 for the nasty legacy clients.

                  1. 3

                    Looks pretty good in Gnus: https://koldfront.dk/misc/gnus/lobstersnntp.png

                    Nice job!

                    Would be even better if it was read/write :-) But since the API doesn’t allow that, it would be nice to have a link to the comments on the website, and perhaps also make the story link clickable?

                    How often does it update? I’ve posted this comment and another one on the website, but they don’t seem to have shown up in the nntp-gateway yet…

                    1. 4

                      The link to the object on the site itself is in an X-Lobsters header. (There are some additional X-Lobsters-* headers. Working on adding some more, but it does require schema changes, which would probably cause a reset of article numbering if I mess it up.)

                      It updates every hour… or it should, anyways. (Oops, there’s a bug in that. Let me fix it.)

                    1. 10

                      I refer you to The Only M1 Benchmark That Matters - how long does it take to compile Emacs! :-) (Spoiler: the M1/clang does well.)

                      1. 4

                        But how long does it take to compile Vim and save the kids in Uganda?

                      1. 1
                        • DNS (bind9)
                        • Mail (Postfix, sqlgrey, opendkim, opendmarc, Dovecot)
                        • IM - XMPP (ejabberd)
                        • Web
                        • Calendar/Contacts (CalDav)
                        • Atom/RSS to nntp gateway (homemade)
                        • Video-conferencing (Jitsi)

                        All of it on my home server (except Jitsi), with two tiny VPS’s for DNS and mail-server redundancy. Everything on Debian stable.

                        1. 1

                          There is a generic postfix-sasl.conf in filters.d which will catch these, as well as other failed login attempts, if enabled - in case you want to trigger on more than just “Password:”
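
                          Enabling it might look something like this in jail.local - a sketch only; the port list and log path are assumptions to check against your own setup:

                          ```ini
                          # /etc/fail2ban/jail.local - sketch; verify the filter name matches
                          # the postfix-sasl.conf in your filter.d directory
                          [postfix-sasl]
                          enabled = true
                          port    = smtp,submission
                          logpath = /var/log/mail.log
                          ```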

                          1. 10

                            code review often results in me having to amend or squash my commit(s).

                            Why? What is wrong with fixing whatever needs to be fixed in new commits?

                            Sure, amend, squash, modify before you push, but after that, don’t, and you avoid a whole class of problems.

                            You might argue that the history will look “messy”, yes, perhaps, but it also reflects what actually happened, which can be a good thing.

                            1. 19

                              git history should tell a story, i don’t want to see your typos, unless it’s in the main branch, then it’s written in stone

                              1. 3

                                I don’t see why. VC history can be an almost arbitrary mess!

                                The thing which really matters is that you get your job done.

                                As long as you have a decent way to

                                1. find semantically connected commits (e.g. look at the merge of a PR, or a ticket ID in the commit messages) and
                                2. find out who to ask when you have questions about some code (some version of blame)

                                you should be good. At least, that is all I ever needed from a VCS. I would be interested in hearing about other use-cases, though.
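
                                For what it’s worth, both lookups are cheap with stock git. A throwaway repository to illustrate (the names and the ticket ID are invented):

                                ```shell
                                # Set up a scratch repository with a ticket ID in a commit message:
                                dir=$(mktemp -d); cd "$dir"; git init -q
                                g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
                                echo "hello" > README
                                g add README
                                g commit -qm "docs: add README (ticket ABC-123)"

                                # 1. Find semantically connected commits via the ticket ID:
                                git log --oneline --grep="ABC-123"

                                # 2. Find out whom to ask about a line of code:
                                git blame README
                                ```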

                                In general, people are wasting too much time these days cleaning up their commit history.

                                1. 5

                                  as somebody regularly doing code archeology in a project that is now 16 years old and has gone through migrations from CVS to SVN to git, to git with people knowing how to rebase for readable histories, I can tell you that doing archeology in nice single-purpose commits is much nicer than doing archeology within messy commits.

                                  So I guess it depends. If the project you’re working on is a one-off, possibly rewritten or sunset within one or two years, sure, history doesn’t matter.

                                  But if your project sticks around for the long haul, you will thank yourself for not committing typo fixes and other cleanup commits.

                                  1. 4

                                    It CAN be, but that’s what we’re trying to avoid.

                                    You can get your job done either way, and cleaning up git history doesn’t take a lot of time if you think properly from the beginning. Any additional time I do spend can easily be justified by arguing for better documented changes.

                                    1. sure
                                    2. you should not have to ask anyone, some aggregation of context, commit messages, and comments should answer any questions you have

                                    Having a mistake you introduced, as well as the fix for that mistake, in the same branch before merging into a main branch is just clutter… unnecessary cognitive load. When you use git blame, it’s yet another hoop you have to jump through to find the real reason behind a change.

                                    Now, there are exceptions. Sometimes I do introduce a new feature with a problem in a branch, and happen to discover it and fix it in the same branch (usually it’s because the branch is too long-lived, which is a bad thing). I do, sometimes, decide that this conclusion is important to the narrative, and decide to leave it in.

                                    1. 2

                                      I mean…I would agree in principle except for “cleaning up git history doesn’t take a lot of time”. I think that is only true if you have already invested a lot of time into coming up with your branching and merging and squashing model and another lot of time figuring out how to implement it with your tools.

                                      I have probably more cognitive overhead from reading blog posts on to-squash-or-not-squash et al. than I could get from ignoring “fix typo” commits in a lifetime. ;)

                                      1. 4

                                        “Cleaning up source history” is such an ingrained part of my work flow that seeing you dismiss it because it’s too costly reads similarly to me as, “I don’t have time to make my code comprehensible by using good naming, structure and writing good docs.” Which you absolutely could justify by simply saying, “all that matters is that you get the job done.” Maybe. But I’d push back and say: what if a cleaner history and cleaner code makes it easier to continue doing your job? Or easier for others to help you do the job?

                                        FWIW, GitHub made this a lot easier with their PR workflow by adding the “squash and merge” option with the ability to edit the commit message. Otherwise, yes, I’ll checkout their branch locally, do fixups and clean up commit history if necessary.

                                        1. 1

                                          I could make that argument. But I didn’t because it is not the same thing.

                                          This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history. And - also answering @pilif here - this includes a medium sized (couple million lines of code), 30 year old project that had been rewritten in different languages twice and the code base at that time consisted of 5 different languages.

                                          (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                          1. 5

                                            But I didn’t because it is not the same thing.

                                            Sure, that’s why I didn’t say they weren’t the same. I said they were similar. And I said they were similar precisely because I figured that if I said they were the same, someone would harp on that word choice and point out some way in which they aren’t the same that I didn’t think of. So I hedged and just said “similar.” Because ultimately, both things are done in the service of making interaction with the code in the future easier.

                                            This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history.

                                            I guess it seems obvious to me. And it’s especially surprising that you literally haven’t found any use for it, despite people listing some of its advantages in this very thread! So I wonder whether you’ll even see my examples as valid. But I’ll give a try:

                                            • I frequently make use of my clean commit history to write changelogs during releases. I try to keep the changelog up to date, but it’s never in sync 100%, so I end up needing to go through commit history to write the release notes. If the commit history has a bunch of fixup commits, then this process is much more annoying.
                                            • Commit messages often serve as an excellent place to explain why a change was made. This is good not only for me, but to be able to point others to it as well. Reading commit messages is a fairly routine part of my workflow: 1) look at code, 2) wonder why it’s written that way, 3) do git blame, 4) look at commit that introduced it. Projects that don’t treat code history well often result in disappointing conclusion to this process.
                                            • A culture of fixup commits means that git bisect is less likely to work well. If there are a lot of fixup commits, it’s more likely that any given commit won’t build or pass tests. This means that commit likely needs to be skipped while running git bisect. One or two of these isn’t the end of the world, but if there are a lot of them, it gets annoying and makes using git bisect harder because it can’t narrow down where the problem is as precisely.
                                            • It helps with code review enormously, especially in a team environment. At $work, we have guidelines for commit history. Things like, “separate refactoring and new functionality into distinct commits” make it much easier to review pull requests. You could make the argument that such things should be in distinct PRs, but that creates a lot more overhead than just getting the commit history into a clean state. Especially if you orient your workflow with that in mind. (If you did all of your work in a single commit and then tried to split it up afterwards, that could indeed be quite annoying!) In general, our ultimate guideline is that the commits should tell a story. This helps reviewers contextualize why changes are being made and makes reviewing code more efficient.
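
                                            The bisect point is easy to demonstrate with a throwaway repository (everything below is invented for illustration; the “test” is just reading a status file):

                                            ```shell
                                            dir=$(mktemp -d); cd "$dir"; git init -q
                                            g() { git -c user.name=d -c user.email=d@example.com "$@"; }

                                            echo ok  > status; g add status; g commit -qm "c1 good"
                                            echo ok  > status; g commit -qam "c2 good" --allow-empty
                                            echo bug > status; g commit -qam "c3 introduces bug"
                                            echo bug > status; g commit -qam "c4 still buggy" --allow-empty

                                            git bisect start HEAD HEAD~3
                                            # exit 0 = good, exit 1 = bad; a commit that does not even build
                                            # would use exit 125, telling bisect to skip it - and every skip
                                            # widens the range bisect can pin the problem down to:
                                            result=$(git bisect run sh -c 'test "$(cat status)" = ok')
                                            echo "$result" | grep "first bad commit"
                                            git bisect reset >/dev/null
                                            ```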

                                            (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                            Well, the charitable interpretation would be that I do it because I find it to be a productive use of my time. Just like I find making code comprehensible to be a good use of my time.

                                            And no, clean source history of course requires dedicated effort toward that end. Just like writing “clean” code does. Neither of these things come for free. I and others do them because there is value to be had from doing it.

                                            1. 1

                                              Thanks, this is more useful for discussing. So from my experience (in the same order):

                                              1. I could see this as being useful. I simply always used the project roadmap + issue tracker for that.
                                              2. Absolutely, I wasn’t trying to argue against good commit messages.
                                              3. I understand that fix-up commits can be a bit annoying in this respect, so if you can easily avoid them you probably should. On the other hand, I need git bisect only very rarely, and fix-up commits are often trivial to identify and ignore - either by assuming they don’t exist or by ignoring the initial faulty commit.
                                              4. I am totally in favor of having refactoring and actual work in separate commits. Refactorings are total clutter. Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                              I mean, maybe this is just too colored by how difficult I imagine the process to be. These arguments just seem too weak in comparison to the cognitive burden of knowing all the git voodoo to clean up the history. Of course if you already know git well enough that trade-off looks different.

                                              1. 1

                                                The git voodoo isn’t that bad. It does take some learning, but it’s not crazy. Mostly it’s a matter of mastering git rebase -i and the various squash/fixup/reword/edit options. Most people on my team at work didn’t have this mastered coming in, but since we have a culture of it, it was easy to have another team member hop in and help when someone got stuck.

                                                The only extra tooling I personally use is git absorb, which automates the process of generating fixup commits and choosing which commits to squash them back into automatically. I generally don’t recommend using this tool unless you’ve already mastered the git rebase -i process. Like git itself, git absorb is a convenient tool but provides a leaky abstraction. So if the tool fails, you really need to know how to git rebase yourself to success.
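
                                                For anyone unfamiliar with the flow being automated here, a self-contained sketch of the manual fixup/autosquash dance (the repository and messages are invented):

                                                ```shell
                                                dir=$(mktemp -d); cd "$dir"; git init -q
                                                g() { git -c user.name=d -c user.email=d@example.com "$@"; }

                                                echo base > file; g add file; g commit -qm "base"
                                                echo feature >> file; g commit -qam "add feature"
                                                feat=$(git rev-parse HEAD)

                                                # Review feedback: record the fix as a fixup of the feature commit.
                                                echo typo-fix >> file; g commit -qa --fixup="$feat"
                                                git log --format=%s   # fixup! add feature / add feature / base

                                                # Fold the fixup back into its target; GIT_SEQUENCE_EDITOR=true
                                                # accepts the generated todo list non-interactively:
                                                GIT_SEQUENCE_EDITOR=true git -c user.name=d -c user.email=d@example.com \
                                                    rebase -i --autosquash "$feat~1"
                                                git log --format=%s   # add feature / base
                                                ```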

                                                It sounds painful, but once you have rebase mastered, it’s not. Most of my effort towards clean source history is spent on writing good commit messages, and not the drudgery of making git do what I want.

                                                It sounds like we are in some agreement on what’s valuable, so perhaps we were just thinking of different things when thinking about “clean source history.”

                                                Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                                Indeed, it is. Often because the code needs to be modified in a certain way to make it work. That’s why our commit history guidelines are just guidelines. If someone decided it was more convenient to just combine refactoring and semantic changes together, or maybe they just didn’t plan that well, then we don’t make them go back and fix it. If it’s easy to, sure, go ahead. But don’t kill yourself over it.

                                                The important bit is that our culture and guidelines gravitate toward clean history. But just like clean code, we don’t prioritize it to the expense of all else. I suspect few others who promote clean code/history do, either.

                                                N.B. When I say “clean code,” I am not referring to Bob Martin’s “Clean Code” philosophy. But rather, just the general subjective valuation of what makes code nice to maintain and easy to read.

                                2. 5

                                  For a stacked diff flow, this is necessary: https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/

                                  1. 4

                                    If you are just going to duplicate your original commit message for the new work, why not amend the original commit? Branches are yours to do with as you please until someone else may have grabbed the commit.

                                    1. 2

                                      Sure, amend, squash, modify before you push

                                      It’s not about push. It’s about sharing. I push to many branches that aren’t being or intended to be shared with others. Thus, it’s okay to rewrite history and force push in those cases.

                                    1. 11

                                      I use Emacs’ vc-annotate (C-x w g) to get the initial blame shown, and then I can inspect the commit log (l) and diff (=), and I can jump to the commit of the current line (j). To move to the previous commit (p) is then easy, making it possible to trace the history back, while showing log and diff when necessary, as I am jumping further back.

                                      1. 1

                                        I also like that a lot, but I’ve found it can’t cross merge commits (any commit with more than 1 parent). Is there a way around that?

                                      1. 3

                                        I wonder where the particular new limit of 2468 comes from. A 64-bit timestamp should allow dates hundreds of billions of years into the future, so clearly that’s not exactly the data structure being used to store timestamps here.

                                        1. 11

                                          The article gives the answer:

                                          This “big timestamps” feature is the refactoring of their timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter

                                          and

                                          a new XFS file-system with bigtime enabled allows a timestamp range from December 1901 to July 2486

                                          Wolfram Alpha calculating 2^64 nanoseconds from December, 1901 gives July, 2486: https://www.wolframalpha.com/input/?i=2%5E64+nanoseconds+from+1901-12-31

                                          Note: it is 2486, not 2468.
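
                                          The span also sanity-checks with a one-liner: 2^64 nanoseconds is roughly 584.5 years, and December 1901 plus 584.5 years lands in mid-2486.

                                          ```shell
                                          # 2^64 nanoseconds expressed in years (365.25-day years):
                                          awk 'BEGIN { printf "%.1f years\n", 2^64 / (1e9 * 86400 * 365.25) }'
                                          ```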

                                        1. 35

                                          Videos are very slow for conveying this type of information - the text could just have said:

                                          • Install ublock origin, if you haven’t already
                                          • Click the ublock origin icon
                                          • Click the “Open the dashboard” button
                                          • Under “Annoyances” turn on “EasyList Cookie”
                                          • Click “Apply changes”
                                          1. 5

                                            My apologies for inconveniencing you by conveying information too slowly (;

                                            Having said that, a video is quite convenient for conveying information on a Youtube channel - and it shows exactly the kind of problems with cookie popups that I wanted to show: no way to opt out, popups in the way of reading the content, and the tracking stopping once things are set up correctly.

                                            1. 4

                                              You can do both, just put a TLDW in the description…

                                            2. 3

                                              While I agree, and generally prefer a good blog post over a video, it all comes down to a matter of opinion. Some people just prefer watching videos over reading a post, for whatever reason. I’ve seen people asking specifically for video tutorials before.

                                              In this case the video can help people know exactly what steps to follow and easily find what they need to do, with a concrete example.

                                            1. 1

                                              The Perl bashing in the article feels quite outdated to me.

                                              Also:

                                              C++ could have beaten Perl by 10 years to become the world’s second write-only programming language

                                              Wikipedia lists C++ as being from 1985 and Perl from 1987, so I guess C++ would have done so by two and not ten years. Unless it is supposed to be a base two joke.

                                              1. 5

                                                It is unfortunately in keeping with the general style of the entire article: a cheap polemic that paints people who generally don’t agree, and their apparently “conservative” choices, as some kind of pantomime villain. The only footnote is a reference to another polemic in a similar vein, from nearly a decade prior.

                                              1. 19

                                                The last footnote includes the conclusion for practical use:

                                                “To be fair, the asymptotic behaviour of Bloom’s original bound is consistent with this updated definition, so the impact is more on an issue of pedantry rather than for practical applications.”

                                                1. 3

                                                  The article would have been a lot more constructive if it gave some examples of better alternatives for the various projects mentioned.

                                                  1. 18

                                                    Are you suggesting they should say something like

                                                    What To Use Instead?

                                                    To replace GPG, you want age and minisign.

                                                    To replace GnuTLS or libgcrypt, depending on what you’re using it for, you want one of the following: s2n, OpenSSL/LibreSSL, or Libsodium.

                                                    which they said at the bottom of the article?

                                                    1. 2

                                                      Except Age/Minisign is not a GPG replacement?

                                                      1. 5

                                                        Age replaces file encryption. Minisign replaces signatures.

                                                        Read https://latacora.micro.blog/2019/07/16/the-pgp-problem.html

                                                        A Swiss Army knife does a bunch of things, all of them poorly. PGP does a mediocre job of signing things, a relatively poor job of encrypting them with passwords, and a pretty bad job of encrypting them with public keys. PGP is not an especially good way to securely transfer a file. It’s a clunky way to sign packages. It’s not great at protecting backups. It’s a downright dangerous way to converse in secure messages.

                                                        Back in the MC Hammer era from which PGP originates, “encryption” was its own special thing; there was one tool to send a file, or to back up a directory, and another tool to encrypt and sign a file. Modern cryptography doesn’t work like this; it’s purpose built. Secure messaging wants crypto that is different from secure backups or package signing.

                                                        You may think you want some cryptographic Swiss Army knife that “truly” replaces GPG, but what you really want is secure, single-purpose tools for replacing individual use cases that use modern cryptography and have been extensively reviewed by cryptography and security experts.

                                                        1. 2

                                                          What tool handles the identity and trust mechanism that GPG provides?

                                                          With the multi-tool approach, the user has to re-establish the web of trust every time, and learn about each disconnected tool as well.

                                                          1. 2

                                                            What tool handles the identity and trust mechanism that GPG provides?

                                                            I hear webs of trust don’t work. Not sure why, but I believe it has to do with the difficulty of changing your root key if it ever becomes compromised.

                                                            Otherwise, maybe something like minisign, or even minisign itself, could help?

                                                            1. 1

                                                              Trust in what context?

                                                              For code-signing, I designed https://github.com/paragonie/libgossamer

                                                      2. 1

                                                        Totally agreed. But hey, a blog article poo-pooing a thing is much easier to write than one constructively criticizing it and offering solutions. And who has the time these days?

                                                        On a related note, it was once a guaranteed way to get your latest blog article to the top of the orange site if the title contained something like “Foobar: You’re Doing it Wrong” or “We Need to Talk About Foobar”. Phrases like this are the equivalent of “One Weird Trick” headline clickbait for devs.

                                                        1. 8

                                                          Pretty sure the article offers solutions. It’s at the very bottom though.

                                                      1. 3

                                                        Microsoft Outlook is definitely worse than Thunderbird when it comes to handling inline images with MIME.