Threads for asjo

  1. 5

    I’m being a troll, but this just looks like: “These are the 21 projects written in Haskell used by more than 1 person,” which is just unfortunate. I wish there was larger adoption for Haskell (but I am as guilty as everyone else).

    1. 4

      EDIT: whoosh

      I and a group of people actually use a few of these. The most famous one being pandoc, cursed by the Arch maintainers for its huge collection of dependencies, which they ship as separate packages. I wrote my master’s thesis using pandoc. I believe Hakyll is supported by many static-site hosting services. xmonad is a much-loved alternative to WMs like i3 and sway.

      1. 6

        I and a group of people actually use a few of these. The most famous one being pandoc

        I’d describe both pandoc and shellcheck as Haskell success stories. Pandoc is widely used in my circles.

        There was a great command-line podcast downloader written in Haskell called hpodder, which had a bit of uptake in the blind Linux community years ago. I think it has suffered software rot. I couldn’t get it to build when I last tried. A lovely, reliable program though.

        1. 4

          Do you think there’s still demand for that? I’d love to see more Haskell projects that are serving a particular userbase, and if it just needs to have some bitrot issues addressed and be updated to work with a newer compiler and libraries I could probably fork it and get it updated.

          1. 2

            Do you think there’s still demand for that?

            I honestly don’t know. Probably most people who were using it at the time have moved on, but it’s impossible to say.

          2. 2

            I’m still using hpodder - I tried to fix some utf-8 issues a decade or so ago, but couldn’t figure out how, so I just wrapped it in a couple of shell scripts :-)

            1. 1

              I’m still using hpodder - I tried to fix some utf-8 issues a decade or so ago, but couldn’t figure out how,

              Have you built it from source in a while, or are you just using an old binary that still works?

              1. 1

                I think it’s just a pretty old binary I have lying around. The timestamp is from January 2022. I also have an hpodder.old from 2019, so perhaps it is newer than I thought.

                The newest local commits I have in my clone of the original hpodder repo are from 2017:

                * 6429e6a 2017-10-18 Use FlexibleContexts 
                * 7ffaaf1 2017-10-18 Add --compressed to curl options.
                * 782effb 2015-06-06 Revert "Fix stripping of unicode byte order mark."
                * 737c69e 2015-06-06 Add network-uri to Build-Depends.
                * f78f053 2015-06-02 Apply hpodder-1.1.6-unix-2.7.patch from the gentoo-haskell repository.
                * dd32cb8 2015-06-02 Apply hpodder-1.1.6-haxml-1.22.patch from the gentoo-haskell repository.
                * 13364a3 2015-06-02 Apply hpodder-1.1.6-base-4.patch from the gentoo-haskell repository.
                

                Let me see if it builds (I’m on Debian 11 (stable, bullseye), GHC 8.8.4). Yes, it does build, with a couple of warnings.
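
                For anyone wanting to try the same: the exact commands aren’t stated here, but a stock cabal workflow along these lines is what I’d expect (whether it works smoothly for such an old package may vary):

                cabal update
                cabal build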

                I can share the repo if anyone wants to clone it.

                1. 1

                  I can share the repo if anyone wants to clone it.

                  Please do.

                  1. 1
                    1. 1

                      Here it is: https://koldfront.dk/git/hpodder/

                      Thank you.

                      1. 1

                        If you make improvements, send me a pull request :-)

            2. 1

              As said below, there seems to be a high correlation of “successful / useful software project” to “written in Haskell.”

          3. 1

            I have used xmonad on Linux for more than a decade, and I definitely use shellcheck whenever I write production-level scripts.

            1. 1

              Yes! The point I was making is that there’s a high correlation between “successful / useful open source project” and “written in Haskell.” The problem is that only a tiny fraction of projects are written in Haskell compared to other languages.

          1. 1

            I think that using cron instead of systemd would have been more elegant. Still, I like these kinds of little projects.

            1. 16

              Systemd timers are basically systemd’s replacement for cron jobs.

              It supports all cron features and adds fixes for a few things which people usually hacked around in crond:

              • you can use systemctl status to see when the next timer will run. (In crond, people were using crontab.guru.)
              • it prevents cron jobs from overlapping/running over each other. (In crond, people were using a job wrapper which would fork/exec the original command and use a lock file.)
              • it keeps logs of each run for later investigation in journald. (In crond, people were using a wrapper to log to a different log file, or they would use MAILTO=.)

              I’m biased as a systemd-fanboy, but systemd timers are always a better choice than cronjobs IMHO.
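
              For what it’s worth, a minimal sketch of such a pair (the unit names and script path here are made up for illustration) could look like this:

              # /etc/systemd/system/foobar.service
              [Unit]
              Description=Run the foobar job
              [Service]
              Type=oneshot
              ExecStart=/usr/local/bin/foobar

              # /etc/systemd/system/foobar.timer
              [Unit]
              Description=Run foobar daily
              [Timer]
              OnCalendar=daily
              Persistent=true
              [Install]
              WantedBy=timers.target

              Enable it with systemctl enable --now foobar.timer; systemctl list-timers then shows the next run, and journalctl -u foobar.service keeps the logs of every run.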

              1. 3

                Why is it necessary to create both a .timer and a .service? That seems like overkill for just having a service run at specific times. With cron there is one entry.

                1. 3

                  Creating two files decouples the service definition from the timer definition. If you later want to call the service in another way, for example in response to a path change, you can just add a .path file.
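
                  For illustration only (the unit name and watched path are made up), such a .path unit could be as small as:

                  # /etc/systemd/system/foobar.path
                  [Path]
                  PathChanged=/var/spool/foobar/input
                  [Install]
                  WantedBy=multi-user.target

                  By default a foobar.path unit activates the matching foobar.service when the watched path changes, so the service file itself stays untouched.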

                  1. 9

                    Another advantage of decoupling the service definition from the timer definition is that distro packages can provide the service definitions (in /usr), while leaving to the user/administrator the freedom to enable them or not and to let them run at whatever time they prefer (via timers in /etc) without having to modify the package-installed files.

                    Package-provided (ana)cron snippets are so cumbersome to modify without creating conflicts with future updates.

                    For example: let’s decide to run foobar daily but at noon. foobar has been installed by the package in /etc/cron.daily. But /etc/crontab says that all daily tasks will be run at 18:00. Too late for me. What should I do?

                    • Remove /etc/cron.daily/foobar? That will cause problems during upgrades.
                    • Modify /etc/cron.daily/foobar with a sleep? That will cause problems during upgrades.
                    • Modify /etc/crontab to run the daily tasks at another time? That will change the time at which all daily tasks run and will cause problems during upgrades.
                    • Add a file to /etc/cron.d? OK, but then how do I stop the cron line in /etc/cron.daily/foobar from being executed?

                    Thanks to the decoupling of service and timer definitions, I can simply drop

                    [Timer]
                    OnCalendar=
                    OnCalendar=12:00
                    

                    in /etc/systemd/system/foobar.timer.d/run-at-noon.conf and call it done. No need to change any distro-provided file.
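
                    (For what it’s worth, systemctl edit foobar.timer will create such a drop-in directory and open an override file in it for you - it names the file override.conf rather than run-at-noon.conf - so you don’t even have to create the path by hand.)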

                    1. 2

                      Creating two files decouples the service definition from the timer definition.

                      Yes, but when it is a service that only makes sense as a timer, it’s just putting the information in multiple places.

                      1. 4

                        It might seem redundant in this case (and I’d have used a crontab entry too), but if you’re managing many machines it’s convenient to have only one service file on each and vary the timer file contents per machine/role, rather than having to keep multiple versions of crontabs in sync.

                      2. 2

                        It’s a massive annoyance. You also can’t have multi-line commands in the service file. The tooling is lacking since you can’t just generate the service and timer files.

                        1. 5

                          You also can’t have multi-line commands in the service file.

                          You can’t have multi-line commands in crontabs, either.

                          1. 16

                            I shall henceforth coin the phrase “Unix is in the eye of the beholder” to refer to this phenomenon where, if crontab can’t do something, that’s an opinionated tool following the Unix philosophy, but if systemd doesn’t do it, it’s missing a vital feature.

                            1. 4

                              You can have multiple commands per cron entry

                              12 * * * *  command1 ; command 2; if foo ; then echo 1  ; else echo 2 ; fi 
                              

                              Or do you literally mean commands split across multiple lines?

                              1. 2
                                ExecStart=/bin/bash -c 'command1 ; command 2; if foo ; then echo 1  ; else echo 2 ; fi'
                                

                                So?

                                You can also have any number of ExecStartPre and ExecStartPost directives, if suitable.
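
                                As a sketch (the paths are made up):

                                [Service]
                                Type=oneshot
                                ExecStartPre=/usr/local/bin/prepare-input
                                ExecStart=/usr/local/bin/run-job
                                ExecStartPost=/usr/local/bin/clean-up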

                      3. 1

                        As far as I know, MAILTO is exactly a feature that it doesn’t support.

                        Never mind that this deficiency can be hacked around.

                        1. 3

                          @acatton never said that systemd supports MAILTO. They said that it must be used (or overused) if we want to retain the logs from the job; systemd provides that feature automatically, and with log forwarding it is IMHO much easier and much more resilient to manage than MAILTO incantations.

                          1. 1

                            I’m disproving this overly bold, careless statement that you might not have noticed:

                            It supports all cron features

                            I can’t imagine anything easier to use than MAILTO. Set it once, everything becomes monitored.
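
                            For reference, a minimal sketch (the address and job are made up):

                            MAILTO=ops@example.com
                            0 4 * * * /usr/local/bin/nightly-backup

                            Any output the job produces gets mailed to that address.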

                    1. 2

                      What would a blog post about programming languages be without a stab at Perl? sigh

                      1. 3

                        It would be fun to hear how the author assumed time zone information was managed - by a committee at the United Nations?

                        1. 9

                          News is trickling out on https://micronews.debian.org/ as well

                          1. 6

                            Fun timing - I was just telling myself that I should stop not recommending/advocating Emacs to my colleagues.

                            Usually I shy away from doing so, explaining that I like Emacs a lot, but not going further than that. I think primarily because I don’t want the experience of them being disappointed with Emacs and abandoning it after trying it on my recommendation.

                            But I need to remind myself that I will be seeing them using Microsoft Visual Studio Code instead, because that certainly has a lot of advocacy… shudder.

                            1. 1

                              While I don’t actively advocate that anyone switch from any other editor, I do generally share some of the Emacs findings/tricks I discover… it often generates enough interest over time. Folks eventually try it out on their own time and come to me with questions. Having someone accessible to clear the initial bumps seems to be pretty handy. It may get folks enjoying the experience sooner and, perhaps, make it all that much more sticky.

                              1. 2

                                This is the same for everything in programming, though. It’s largely something you can learn yourself, but having a network of people to prod with questions is extremely useful.

                            1. 1

                              Cute idea - it would be fun to be able to run it from a pre-push hook, or something, stopping people from pushing bad commit messages >:-)

                              1. 1

                                I use hep, followed by yay. For random text I also use ABBA followed by FLAPPA.

                                When people are debugging by printing out stuff, I tell them to include some easily recognizable text. There is nothing worse than 10 lines of values and you don’t know which is which… there are enough things to keep in your head when you’re debugging! (And it’s nice to be able to search for it as well.)

                                I have started telling people to come up with their own random word - that’s an easy way to tell who left some debug output somewhere by accident. Examples that spring to mind are “hatt”, “strawberry” and “popcorn” :-)
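
                                In a shell script, for example, that might look like this (the variable names are made up):

                                echo "FLAPPA: user=$user retries=$retries" >&2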

                                1. 1

                                  I tried this with my old fav slrn but I do need something that handles HTML format

                                  https://imgur.com/a/OQXE42V

                                  1. 2

                                    You can configure slrn to run articles through html2text and a little s-lang, description at the bottom of the page here: https://feedbase.org/documentation/#slrn

                                    It isn’t super fast, but it looks pretty good: https://koldfront.dk/misc/lobstersslrn.png

                                    1. 1

                                      From the README, there are two groups:

                                      • lobsters - “Multipart HTML/Plain UTF-8 QP”, and
                                      • lobsters.plain - that is “Plain UTF-8 QP”

                                      Try out lobsters.plain I guess?

                                      1. 2

                                        I had recently added that. I’m tempted to turn .plain into ISO-8859-1 for the nasty legacy clients.

                                    1. 3

                                      Looks pretty good in Gnus: https://koldfront.dk/misc/gnus/lobstersnntp.png

                                      Nice job!

                                      Would be even better if it were read/write :-) But since the API doesn’t allow that, it would be nice to have a link to the comments on the website, and perhaps also make the story link clickable?

                                      How often does it update? I’ve posted this comment and another one on the website, but they don’t seem to have shown up in the nntp-gateway yet…

                                      1. 4

                                        The link to the object on the site itself is in an X-Lobsters header. (There are some additional X-Lobsters-* headers. Working on adding some more, but it does require schema changes, which will probably cause a reset of article numbering if I mess it up.)

                                        It updates every hour… or it should, anyways. (Oops, there’s a bug in that. Let me fix it.)

                                      1. 10

                                        I refer you to The Only M1 Benchmark That Matters - how long does it take to compile Emacs! :-) (Spoiler: the M1/clang does well.)

                                        1. 4

                                          But how long does it take to compile Vim and save the kids in Uganda?

                                        1. 1
                                          • DNS (bind9)
                                          • Mail (Postfix, sqlgrey, opendkim, opendmarc, Dovecot)
                                          • IM - XMPP (ejabberd)
                                          • Web
                                          • Calendar/Contacts (CalDav)
                                          • Atom/RSS to nntp gateway (homemade)
                                          • Video conferencing (Jitsi)

                                          All of it on my home server (except Jitsi), with two tiny VPS’s for DNS and mail-server redundancy. Everything on Debian stable.

                                          1. 1

                                            There is a generic postfix-sasl.conf in filters.d which will catch these, as well as other failed login attempts, if enabled - in case you want to trigger on more than just “Password:”.
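
                                            Assuming this is about fail2ban, enabling it is just a small jail.local stanza - though check the exact jail name your version ships with:

                                            [postfix-sasl]
                                            enabled = true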

                                            1. 10

                                              code review often results in me having to amend or squash my commit(s).

                                              Why? What is wrong with fixing whatever needs to be fixed in new commits?

                                              Sure, amend, squash, modify before you push, but after that, don’t, and you avoid a whole class of problems.

                                              You might argue that the history will look “messy”, yes, perhaps, but it also reflects what actually happened, which can be a good thing.

                                              1. 19

                                                Git history should tell a story; I don’t want to see your typos. Unless it’s in the main branch - then it’s written in stone.

                                                1. 3

                                                  I don’t see why. VC history can be an almost arbitrary mess!

                                                  The thing which really matters is that you get your job done.

                                                  As long as you have a decent way to

                                                  1. find semantically connected commits (e.g. look at the merge of a PR, or a ticket ID in the commit messages) and
                                                  2. find out who to ask when you have questions about some code (some version of blame)

                                                  you should be good. At least, that is all I ever needed from a VCS. I would be interested in hearing about other use-cases, though.

                                                  In general, people are wasting too much time these days cleaning up their commit history.

                                                  1. 5

                                                    As somebody regularly doing code archeology in a project that is now 16 years old and has gone through migrations from CVS to SVN to git, to git with people knowing how to rebase for readable histories, I can tell you that doing archeology in nice single-purpose commits is much nicer than doing archeology within messy commits.

                                                    So I guess it depends. If the project you’re working on is a one-off, possibly rewritten or sunset within one or two years, sure, history doesn’t matter.

                                                    But if your project sticks around for the long haul, you will thank yourself for not committing typo fixes and other cleanup commits.

                                                    1. 4

                                                      It CAN be, but that’s what we’re trying to avoid.
                                                      You can get your job done either way, and cleaning up git history doesn’t take a lot of time if you think properly from the beginning. Any additional time I do spend can easily be justified by arguing for better-documented changes.

                                                      1. sure
                                                      2. you should not have to ask anyone, some aggregation of context, commit messages, and comments should answer any questions you have

                                                      Having a mistake you introduced, as well as the fix for that mistake, in the same branch before merging into a main branch is just clutter… unnecessary cognitive load. If you use git blame properly, it’s yet another hoop you have to jump through to find out the real reason behind a change. Now, there are exceptions. Sometimes I do introduce a new feature with a problem in a branch, and happen to discover it and fix it in the same branch (usually it’s because the branch is too long-lived, which is a bad thing). I do, sometimes, decide that this is important to the narrative, and decide to leave it in.

                                                      1. 2

                                                        I mean…I would agree in principle except for “cleaning up git history doesn’t take a lot of time”. I think that is only true if you have already invested a lot of time into coming up with your branching and merging and squashing model and another lot of time figuring out how to implement it with your tools.

                                                        I have probably more cognitive overhead from reading blog posts on to-squash-or-not-squash et al. than I could get from ignoring “fix typo” commits in a lifetime. ;)

                                                        1. 4

                                                          “Cleaning up source history” is such an ingrained part of my workflow that seeing you dismiss it because it’s too costly reads to me similarly to “I don’t have time to make my code comprehensible by using good naming, structure and writing good docs.” Which you absolutely could justify by simply saying, “all that matters is that you get the job done.” Maybe. But I’d push back and say: what if a cleaner history and cleaner code make it easier to continue doing your job? Or easier for others to help you do the job?

                                                          FWIW, GitHub made this a lot easier with their PR workflow by adding the “squash and merge” option with the ability to edit the commit message. Otherwise, yes, I’ll checkout their branch locally, do fixups and clean up commit history if necessary.

                                                          1. 1

                                                            I could make that argument. But I didn’t because it is not the same thing.

                                                            This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history. And - also answering @pilif here - this includes a medium-sized (couple million lines of code), 30-year-old project that had been rewritten in different languages twice and whose code base at that time consisted of 5 different languages.

                                                            (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                                            1. 5

                                                              But I didn’t because it is not the same thing.

                                                              Sure, that’s why I didn’t say they weren’t the same. I said they were similar. And I said they were similar precisely because I figured that if I said they were the same, someone would harp on that word choice and point out some way in which they aren’t the same that I didn’t think of. So I hedged and just said “similar.” Because ultimately, both things are done in the service of making interaction with the code in the future easier.

                                                              This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history.

                                                              I guess it seems obvious to me. And it’s especially surprising that you literally haven’t found any use for it, despite people listing some of its advantages in this very thread! So I wonder whether you’ll even see my examples as valid. But I’ll give it a try:

                                                              • I frequently make use of my clean commit history to write changelogs during releases. I try to keep the changelog up to date, but it’s never in sync 100%, so I end up needing to go through commit history to write the release notes. If the commit history has a bunch of fixup commits, then this process is much more annoying.
                                                              • Commit messages often serve as an excellent place to explain why a change was made. This is good not only for me, but also for being able to point others to it. Reading commit messages is a fairly routine part of my workflow: 1) look at code, 2) wonder why it’s written that way, 3) do git blame, 4) look at the commit that introduced it. Projects that don’t treat code history well often result in a disappointing conclusion to this process.
                                                              • A culture of fixup commits means that git bisect is less likely to work well. If there are a lot of fixup commits, it’s more likely that any given commit won’t build or pass tests. This means that commit likely needs to be skipped while running git bisect (see the sketch after this list). One or two of these isn’t the end of the world, but if there are a lot of them, it gets annoying and makes using git bisect harder because it can’t narrow down where the problem is as precisely.
                                                              • It helps with code review enormously, especially in a team environment. At $work, we have guidelines for commit history. Things like, “separate refactoring and new functionality into distinct commits” make it much easier to review pull requests. You could make the argument that such things should be in distinct PRs, but that creates a lot more overhead than just getting the commit history into a clean state. Especially if you orient your workflow with that in mind. (If you did all of your work in a single commit and then tried to split it up afterwards, that could indeed be quite annoying!) In general, our ultimate guideline is that the commits should tell a story. This helps reviewers contextualize why changes are being made and makes reviewing code more efficient.
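
                                                              To make the bisect point concrete, a minimal session could look like this (the good revision is a made-up tag):

                                                              git bisect start
                                                              git bisect bad HEAD       # current commit shows the bug
                                                              git bisect good v1.0      # a revision known to be fine
                                                              # build/test each commit git checks out, then mark it:
                                                              git bisect good           # or: git bisect bad
                                                              git bisect skip           # for a fixup commit that doesn't even build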

                                                              (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                                              Well, the charitable interpretation would be that I do it because I find it to be a productive use of my time. Just like I find making code comprehensible to be a good use of my time.

                                                              And no, clean source history of course requires dedicated effort toward that end. Just like writing “clean” code does. Neither of these things come for free. I and others do them because there is value to be had from doing it.

                                                              1. 1

                                                                Thanks, this is more useful for discussing. So from my experience (in the same order):

                                                                1. I could see this as being useful. I simply always used the project roadmap + issue tracker for that.
                                                                2. Absolutely, I wasn’t trying to argue against good commit messages.
                                                                3. I understand that fix-up commits can be a bit annoying in this respect, so if you can easily avoid them you probably should. On the other hand, I need git bisect only very rarely, and fix-up commits are often trivial to identify and ignore - either by assuming they don’t exist or by ignoring the initial faulty commit.
                                                                4. I am totally in favor of having refactoring and actual work in separate commits. Refactorings are total clutter. Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                                                I mean, maybe this is just too colored by how difficult I imagine the process to be. These arguments just seem too weak in comparison to the cognitive burden of knowing all the git voodoo to clean up the history. Of course if you already know git well enough that trade-off looks different.

                                                                1. 1

                                                                  The git voodoo isn’t that bad. It does take some learning, but it’s not crazy. Mostly it’s a matter of mastering git rebase -i and the various squash/fixup/reword/edit options. Most people on my team at work didn’t have this mastered coming in, but since we have a culture of it, it was easy to have another team member hop in and help when someone got stuck.

                                                                  The only extra tooling I personally use is git absorb, which automates the process of generating fixup commits and choosing which commits to squash them back into. I generally don’t recommend using this tool unless you’ve already mastered the git rebase -i process. Like git itself, git absorb is a convenient tool but provides a leaky abstraction. So if the tool fails, you really need to know how to git rebase yourself to success.

                                                                  It sounds painful, but once you have rebase mastered, it’s not. Most of my effort towards clean source history is spent on writing good commit messages, and not the drudgery of making git do what I want.
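
                                                                  Concretely, the fixup/autosquash flow that git absorb automates is roughly this (the hash and branch name are placeholders):

                                                                  git commit --fixup=abc1234        # record a fix against an earlier commit
                                                                  git rebase -i --autosquash main   # squash the fixup back into place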

                                                                  It sounds like we are in some agreement on what’s valuable, so perhaps we were just thinking of different things when thinking about “clean source history.”

                                                                  Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                                                  Indeed, it is. Often because the code needs to be modified in a certain way to make it work. That’s why our commit history guidelines are just guidelines. If someone decided it was more convenient to just combine refactoring and semantic changes together, or maybe they just didn’t plan that well, then we don’t make them go back and fix it. If it’s easy to, sure, go ahead. But don’t kill yourself over it.

                                                                  The important bit is that our culture and guidelines gravitate toward clean history. But just like clean code, we don’t prioritize it to the expense of all else. I suspect few others who promote clean code/history do either.

                                                                  N.B. When I say “clean code,” I am not referring to Bob Martin’s “Clean Code” philosophy. But rather, just the general subjective valuation of what makes code nice to maintain and easy to read.

                                                  2. 5

                                                    For a stacked diff flow, this is necessary https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/

                                                    1. 4

                                                      If you are just going to duplicate your original commit message for the new work, why not amend the original commit? Branches are yours to do with as you please until someone else may have grabbed the commit.

                                                      1. 2

                                                        Sure, amend, squash, modify before you push

                                                        It’s not about push. It’s about sharing. I push to many branches that aren’t being or intended to be shared with others. Thus, it’s okay to rewrite history and force push in those cases.

                                                      1. 11

                                                        I use Emacs’ vc-annotate (C-x v g) to get the initial blame shown, and then I can inspect the commit log (l) and diff (=), and I can jump to the commit of the current line (j). Moving to the previous commit (p) is then easy, making it possible to trace the history back while showing the log and diff when necessary, as I jump further back.

                                                        1. 1

                                                          I also like that a lot, but I’ve found it can’t cross merge commits (any commit with more than 1 parent). Is there a way around that?

                                                        1. 3

                                                          I wonder where the particular new limit of 2468 comes from. A 64-bit timestamp should allow dates hundreds of billions of years into the future, so clearly that’s not exactly the data structure being used to store timestamps here.

                                                          1. 11

                                                            The article gives the answer:

                                                            This “big timestamps” feature is the refactoring of their timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter

                                                            and

                                                            a new XFS file-system with bigtime enabled allows a timestamp range from December 1901 to July 2486

                                                            Wolfram Alpha calculating 2^64 nanoseconds from December, 1901 gives July, 2486: https://www.wolframalpha.com/input/?i=2%5E64+nanoseconds+from+1901-12-31

                                                            Note: it is 2486, not 2468.
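
                                                            As a quick sanity check of that figure:

                                                            2^64 ns = 18,446,744,073,709,551,616 ns ≈ 584.5 years
                                                            1901 + 584.5 ≈ 2486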

                                                          1. 35

                                                            Videos are very slow for conveying this type of information - the text could just have said:

                                                            • Install ublock origin, if you haven’t already
                                                            • Click the ublock origin icon
                                                            • Click the “Open the dashboard” button
                                                            • Under “Annoyances” turn on “EasyList Cookie”
                                                            • Click “Apply changes”
                                                            1. 5

                                                              My apologies for inconveniencing you by conveying information too slowly (;

                                                              Having said that, a video is quite convenient for conveying information on a YouTube channel - and for showing exactly the kind of problem with cookie popups that I wanted to show: no way to opt out, in the way of reading the content, and that the tracking stops when things are set up correctly.

                                                              1. 4

                                                                You can do both, just put a TLDW in the description…

                                                              2. 3

                                                                While I agree and generally prefer a good blog post over a video, it all comes down to a matter of opinion. Some people just prefer watching videos over reading a post, for whatever reason. I’ve seen people asking specifically for video tutorial help before.

                                                                In this case the video can help people know exactly what steps and movements to follow, and easily find what they need to do, with a concrete example.

                                                              1. 1

                                                                The Perl bashing in the article feels quite outdated to me.

                                                                Also:

                                                                C++ could have beaten Perl by 10 years to become the world’s second write-only programming language

                                                                Wikipedia lists C++ as being from 1985 and Perl from 1987, so I guess C++ would have done so by two and not ten years. Unless it is supposed to be a base two joke.

                                                                1. 5

                                                                  It is unfortunately in keeping with the general style of the entire article: a cheap polemic that paints people who generally don’t agree, and their apparently “conservative” choices, as some kind of pantomime villain. The only footnote is a reference to another polemic in a similar vein, from nearly a decade prior.

                                                                1. 19

                                                                  The last footnote includes the conclusion for practical use:

                                                                  “To be fair, the asymptotic behaviour of Bloom’s original bound is consistent with this updated definition, so the impact is more on an issue of pedantry rather than for practical applications.”