Threads for codesections

  1. 4

    I have a utilities library for another language and I’ve found that the grab-bag, misc nature of it has made people reluctant to use it. Any advice?

    1. 6

      My advice is that people will use your library if it’s less work to find it, understand it, integrate it, and consume it, when compared to writing their own utility functions. For a utilities library, this is highly unlikely to be the case.

      The remarkable thing about left-pad to me is that these conditions somehow became true, which is a huge credit to npm.

      1. 2

        Any advice?

        Well, seeing as I released this package literally today, I might not be the best person to give advice – maybe no one will use _ either!

        That said, here are a few things that I’d look for in a utilities package and that I’m aiming to deliver with _:

        • How much do I trust the quality of the code? For _, I’m emphasizing the shortness of the code and hoping that reading it will get people to trust the quality (this depends both on them being willing to read it and on them thinking it’s well-written). But this could also be based on the reputation of the author/professional backing, test coverage, etc.
        • Does the library seem likely to be well maintained? Part of the point of a utility library is to simplify common tasks. But if the library becomes abandoned, it would have the opposite effect. For _, I’m trying to address this by putting thought into the library’s future and being transparent about my thoughts. (See, e.g., the 5,000+ word OP)
        • Will the library maintain backwards compatibility or break when I need it most? As with the previous bullet, a utility library that breaks my code is having the opposite of the effect I want. This part is still a WIP for _, but I’m trying to come up with the strongest promise of backward compatibility that I can reasonably keep.
        • Does the library have functions in a “happy medium” of complexity? If they’re too simple, I’d just implement them myself instead of using the library; if they’re too complex, I’d be willing to take on a dedicated dependency for that feature rather than leave it to a utility library. This is fairly subjective; I’ve tried to strike the correct balance with _, but I’ll have to see how many users agree.
        1. 4

          maybe no one will use _ either

          first impression: this is a terrible name, and not just because it’s already in use by gettext

          1. 1

            Can you say more about why _ strikes you as a bad name? Does the spelled-out version I also used (“lowbar”) strike you the same way?

            Names are important things, and I’m curious about your views.

            1. 5

              “I used an underscore character to name my library. Many people will think it’s named after this other similar library that also uses an underscore for its name, but it’s actually not.” <- This makes no sense at all. I’ve never heard an underscore called a lowbar, and citing the HTML spec comes across as “see I’m technically correct” pedantry.

              1. 2

                That’s an entirely fair criticism of my post – so much so that I upvoted.

                (The bit about the HTML spec was intended as … well, not quite a joke, but making light of myself for needing to pick an obscure term for the character – I also hadn’t heard a ‘_’ called a “lowbar” before. Clearly that not-quite-a-joke didn’t land).

What I was trying to say is that _ fits so well with Raku’s existing use of $_, @_, and %_ that I decided to go with the name anyway – even though I view the name collision with underscore.js and lodash.js as unfortunate and 100% accept that most people will think that _ is named in homage to lodash.

                1. 2

                  Yeah, I’m speaking without any specific knowledge of Raku. I just think that if the library does catch on, people will pronounce it as “underscore” whether you want them to or not. =)

                  The thing that the underscore character reminds me of is when you’re writing a function that accepts more arguments than it needs, and you’re using an underscore to communicate that it’s unused: https://softwareengineering.stackexchange.com/questions/139582/which-style-to-use-for-unused-return-parameters-in-a-python-function-call (the link mentions Python and I’ve never used Python, but it’s common in nearly every language I do use)
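Raku, for what it’s worth, bakes the same convention into signatures: a bare sigil declares an anonymous parameter, making “unused” explicit. A minimal sketch (the function name is made up):

# the bare $ accepts a first argument and deliberately ignores it
my &second = -> $, $b { $b };
say second('ignored', 'used');   # OUTPUT: «used»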

              2. 1

                Can you say more about why _ strikes you as a bad name? Does the spelled-out version I also used (“lowbar”) strike you the same way?

                I dislike this name because it’s difficult to search for and lowbar is a relatively obscure term. If the intent is that every Raku program includes the library, then you could call it Prelude. That’s what other languages such as Haskell call a library of functions that are implicitly included in every program.

                1. 2

On the other hand, lodash uses _ and is pretty well known in JS land.

                  1. 1

                    there’s also underscore.js https://underscorejs.org/

          2. 1

I sometimes use such libraries if I have no other choice, but it would probably be much better to split them into smaller, more focused libraries?

            1. 2

              But then we’re back to the left-pad dilemma!

              1. 1

                Not if you don’t rely on a mutable third party store for your production builds :)

                1. 3

                  But you’ve just pushed the workload somewhere else… now you have to separately monitor for and pull in manually any bug fixes and security patches that get made in the dependency. Vendoring (whether in the traditional sense, or merely by snapshotting some specific state of a remote source) is a valid approach, but no panacea.

            1. 5

              I’m all for left-pad-sized packages.

              • Small packages benefit from isolation and clear public interfaces. In a large framework two different features could interact with each other behind the scenes. OTOH two features from two different packages won’t have a hidden shared state.

              • Small dependencies are easier to code review. When a package does one thing, I can read it, check if it really does the thing. Difficulty of understanding all code in dependencies grows linearly with small packages, but superlinearly within large packages. Bigger packages that do more usually have more layers of abstraction internally, and more places for different features to interact with each other.

              1. 2

                I’m all for left-pad-sized packages

                Me too. What I’m not for is thousands of dependencies.

                In a lot of ways, the OP was my attempt to figure out how we can get more of the first without also making the second more likely.

              1. 10

                My quibble with all discussions I read about the dependency problem—this one included—is how big a “thing” is in the “do one thing, and do it well” mantra. It’s probably so highly variable and problem-dependent that you should, at best, take the Unix philosophy as a guiding principle and don’t get too attached to it.

Here’s a pattern I’ve experienced seemingly countless times: break down a problem into smaller parts, put them together to get a solution, notice that it’s kind of hard to follow or slow. I then put it together into a “monolith” and it’s actually better in terms of comprehensibility and performance. (The breaking down of the problem is quite a good exercise, though, for actually understanding the problem.)

                This might manifest itself as a solution S made up of some combination of libraries A, B, and C. But, it turns out, A, B, and C are not used anywhere else. Rewriting S to get rid of A, B, C means I don’t have to manage the connections between them and now S’s implementation is easier to both understand and build. And it performs better.

                What happens sometimes (and seemingly a lot to me…) is that S is smaller than the sum of its parts. So when the article says,

                This means that a library with only a few lines is much more likely to be correct – and thus can be said to better follow the Unix philosophy of doing just one thing.

                you have to be really careful interpreting that, because where does the boundary of a library begin and end?

                1. 17

                  I don’t find the whole “Unix philosophy” thing to be useful in any real sense. It’s either tautologically true or collapses into definitional pedantry.

                  1. 3

                    I think that’s why it’s called “Unix philosophy” and not “Unix dogma” or “Unix commandments”.

It’s also why I really don’t like the term “Best practices”. It’s usually stuff I’d recommend as well, but as something to keep in mind rather than a rule to blindly follow. It’s just shocking sometimes how much nonsense comes out of that: people, often with good intentions, blindly follow some “Best practice” they came across – sometimes one from a really specific context – that doesn’t apply at all where it is used. Even worse is when a “Best practice” approach is believed to be in use, but is actually misunderstood and pretty much the opposite is done.

                    One could say Best Practices are the best practice, unless they are not.

                    A classical example is “You never want to use this flag” unless you do, which is why it’s there.

                    1. 2

                      My quibble with all discussions I read about the dependency problem—this one included—is how big a “thing” is in the “do one thing, and do it well” mantra.

                      I agree. In fact, I said pretty much the same thing in the OP:

                      But what this ignores is that “one thing” is not well defined. Consider the output from the ls command.

                      I also agree that we should “take the Unix philosophy as a guiding principle [without getting] too attached to it”. But even as a guiding principle, it’s worth (imo) putting some thought into how to balance the do-one-thing principle with other design goals – or else it risks becoming a “guiding principle” that’s too fuzzy to actually provide guidance.

                    1. 14

The problem with left-pad had nothing to do with the number or size of dependencies. It had only to do with the practice of depending on an external, unarchived, mutable code store that is not under your control for production builds.

                      If you have a local copy of left-pad, everything works fine.

                      1. 13

                        The left-pad fiasco served to illuminate two problems endemic to modern popular programming culture (besides the issue that packages could be retracted, which was a technical issue and has been fixed):

• A mind-boggling number of packages (eventually) depend on what’s essentially a very trivial feature – one that would normally be better placed in the standard library, or defined in each package as a custom helper method (see the sketch after this list). A library’s size doesn’t increase meaningfully by defining it itself (and perhaps even inlining it).
                        • Nobody knew that they even had that dependency (transitively). i.e., people were relying on code without being aware. That’s not a good thing, however you slice it.
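For a sense of just how trivial, here’s roughly what such a helper looks like in Raku (a sketch for illustration, not the npm package’s actual code):

sub left-pad(Str $s, Int $len, Str $pad = ' ') {
    $pad x ($len - $s.chars) ~ $s   # x repeats the pad; a negative count yields ''
}
say left-pad('42', 5);   # OUTPUT: «   42»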
                        1. 5

                          I do not agree with either of these points.

                          No code is better off duplicated everywhere rather than being shared. Being in a “standard library” vs a “package” is splitting hairs.

My programs all depend on lots of things I’m not fully aware of. Common OS components, CA stores, firmwares, and yes, the dependencies of my dependencies. Not having to put every piece of the computer into my head in order to get some work done is the whole point of abstraction.

                          1. 5

                            No code is better off duplicated everywhere rather than being shared.

                            Code needs to be maintained. Code you rely on can be updated in incompatible ways, can break due to external factors, can have security problems etc. And it can fall unmaintained. If you rely on a piece of code you don’t understand and it breaks, you’ll still have to fix it. Overly generic libraries are often bloated and offer more than you need, and this can get in the way. You might be relying heavily on a fringe feature of the library which nobody else is using but which is important to you.

                            Besides, within a project you’re likely to be re-using the same code more than once anyway, so it’s not like there’s no sharing going on.

                            Being in a “standard library” vs a “package” is splitting hairs.

Fair enough, and in fact I would argue it’s often better to have things external rather than in the standard library (see the comment in the OP about batteries leaking acid). But packaging up trivial features is frivolous and generates unnecessary churn (i.e. more stuff to download, more licenses and versions to keep track of, etc.). And ironically, if there are more libraries doing the same thing in an ecosystem, you end up not sharing the code, as your direct dependencies each pull in different packages to achieve the same purpose.

Not having to put every piece of the computer into my head in order to get some work done is the whole point of abstraction.

                            Abstraction isn’t about never having to care about anything. Every component you add has a cost. Yes, abstractions allow you to momentarily pretend the underlying things aren’t there, but they are still there. The art and science of programming is about knowing when you need to look below the abstraction. And I would argue you should always keep the edges of your abstraction boundaries in your “peripheral vision”, so to speak.

My main gripe with overly theoretical education is that everything below the abstraction is typically swept completely under the rug. I’ve seen this so often with colleagues: “we don’t have to look into that because it’s a black box” – and then you open up the black box to find a huge can of worms that was totally avoidable, but now you have a pile of code that relies on the specific API that “abstraction” offers, with no way to switch to something else. And no, adding another abstraction (i.e., even more code) to hide the specific API is not the answer (though predictably that’s the first thing you’ll hear from the same people who got you into this mess).

Of course I’m not advocating avoiding all dependencies (you wouldn’t get anything done!), but adding another dependency shouldn’t be the go-to solution for all your problems.

                        2. 2

                          And after this incident the specific problem was fixed: authors aren’t allowed to delete packages on a whim any more.

                          1. 2

                            Yeah, that’s what I was referring to when I said

                            the developer removed the package (in a way that couldn’t happen anymore for reasons not relevant here)

                            But the post could certainly have been more explicit on that point.

                        1. 9

                          Avoiding external dependencies when possible and reasonable doesn’t preclude modularity in your own code. “External dependencies for everything XOR spaghetti-code monolith” is a false choice.

                          1. 1

                            Yeah, 100% agree. I hope my article didn’t make it sound like I think that’s an XOR choice, because I don’t.

                            I think that those are two ends of a spectrum. Every project needs to decide where to fall on that spectrum, which could be at one extreme or the other but is more often somewhere in the middle.

                          1. 16

I’m kind of wondering if the right way to think about this is not so much an issue of the number or size of packages that are dependencies, but the number of maintainers who are dependencies. Ultimately, whether two independent functions are part of the same package or two different ones maintained by the same person is a fairly shallow question. The micropackage approach is bad mainly in that it makes maintainership harder to understand.

                            One thing I think both Elm and Go do right is that they don’t hide the maintainer’s name in the dependency; Go just does import by repository path, so you can tell by looking at your dependency list that e.g. all six of those packages are maintained by the same person. Elm denotes packages as user/repo; I’m not a fan of the fact that they tie their package manager to GitHub, but it at least doesn’t hide this.

                            Almost every other language package manager does this wrong; when you do e.g. pip install foo, there is no indication whatsoever about who that package is coming from.

                            With distro package managers like apt, it’s okay for these names to be unqualified since the whole repository is curated by the distro maintainers. But in the absence of curation maintainership should be explicit.
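(Raku, incidentally, can also surface this: a module’s full identity includes an auth part, so code can pin both the package and its maintainer. A hedged sketch with hypothetical names:)

use Foo:auth<zef:someuser>;   # refuses to load a Foo published by anyone else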

                            1. 3

                              With distro package managers like apt, it’s okay for these names to be unqualified since the whole repository is curated by the distro maintainers.

                              I would say this is a problem even for distro package managers, at least for “universe”-like repositories. It’s pretty common for a package to disappear from one version of Ubuntu / Debian to the next because the maintainer disappeared and no one else picked it up. That being said, I agree with you in general.

                              1. 3

                                One thing about Go, you can use any Git host, not just GitHub, and it even works with Mercurial, SVN, etc.

                                1. 2

                                  [maybe it’s] not so much an issue of the number or size of packages that are dependencies, but the number of maintainers who are dependencies.

I really like this idea. And it seems like something that would be very easy to add to existing package managers (e.g., changing the final output from installed X packages in Y seconds to installed X packages from Y authors in Z seconds).

                                  But I have a question: do you think the relevant number is the number of organizations that are maintainers or the number of (natural) persons who are maintainers? Your comment seemed to treat these as always being the same, but they are often (very) different. I can see arguments for either, so I’m interested in which you meant.

                                  1. 1

                                    I think it makes sense to treat organizations as a single maintainer.

                                1. 2

                                  I can’t imagine what argument would convince me that the license of your artefact should in any way inform the choice of implementation language, but I’d love to hear anybody try.

                                  1. 6

                                    The basic idea is:

                                    • Most FOSS projects are created by one or a small number of people and not large teams
• “Corporate” languages (Golang, Rust, TypeScript) are backed by large organizations
• Large organizations tend to have high turnover and large code bases that can’t fit into any one person’s head, so they optimize for the following major priorities:
                                      • discouragement of “dialects” and encouragement of a consistent or standard way of expression
                                      • explicit and verbose code
                                      • “quick to pick up” languages whose focus is less on mastery than consistency of expression
• One- or few-person teams tend not to have high turnover and work on projects without major funding, which favors the priorities of:
                                      • languages that have high expressiveness and rewards mastery
                                      • communities that are “friendly”
                                      • languages that are “fun”

To me, this is kind of talking about corporate vs. “DIY” (or “artisanal/amateur/journeyperson”) development, and it so happens that most FOSS projects are sole developers or a small number of people. As such, FOSS projects will presumably favor languages that allow ‘shortcuts’, high levels of expressiveness, perhaps a DSL. Sole developers also won’t be as willing to put up with a toxic community.

                                    In a corporate context, consistency is much more important as there might be high turnover of developers, a large code base without any one or a few people knowing the whole code base. Consistency and standardization are favored as they want to be able to have programmers be as fungible as possible.

You can see this in Golang, especially considering one of its explicitly intended goals was to serve Google’s needs. The fact that it can be used outside of that context is great, but the goal of the language was in service to Google, just as Rust was in service to Mozilla and its browser development effort. The same could also be said of Java, as it was marketed as a “business friendly” language for basically the reasons listed above.

                                    The speaker goes on to talk about Raku, which I guess is what Perl6 turned into (?), as being one of the fun languages with a friendly community.

So I think it’s a little reversed. It’s more like: most free software is written by a single person or a small number of people, and this workflow has a selection bias toward a particular type of language – one that favors a particular set of features while discouraging others.

                                    1. 2

                                      Yikes, I wouldn’t touch perl with a barge pole :)

                                      I understand the idea behind small teams better being able to handle a codebase filled with dynamic magic and various other “spooky action at a distance”, but the problem isn’t just how much cognitive load you’re wasting getting up to speed, it’s the defects you build because so much is hidden and things you thought would be orthogonal end up touching at the edges.

                                      1. 4

Raku really isn’t Perl. I don’t know enough Perl to have an opinion on it one way or the other (though its sub-millisecond startup time – about an order of magnitude faster than Python and about the same as Bash – gives it a pretty clear niche in some sysadmin settings). But they are definitely different languages.

                                        The analogy I’d pick is that their relationship is a lot like the relationship between go and C. Go (Raku) was designed by people who were deeply familiar with C (Perl) and think that it got a lot right on a philosophical level. At the same time, the designers also wanted to solve certain (in their view) flaws with the other language and to create a language aimed at a somewhat different use case. The result is a language that shares the “spirit” of C (Perl), but that makes very different tradeoffs and thus is attractive to a different set of users and is not at all a replacement for the more established language.

                                        1. 1

                                          TBH I know nothing about Raku. I vaguely remember the next version of Perl being right around the corner for years, but by then I’d had enough of line noise masquerading as source code (which all Perl written to be “clever” definitely was at the time) so I wasn’t paying attention.

                                        2. 4

                                          There’s an argument that you might not introduce those defects if you’re a single person working on a project because the important bits stay in your head and you remember what all the implicit invariants are.

                                          I personally buy a weak form of that argument: things developed by individual developers do tend to have webs of plans and invariants in those individuals’ heads. AFAIK there’s some reasonable empirical research indicating that having software be modified by people other than the original authors comes with a higher rate of defects. (Lots of caveats on that: e.g. perhaps higher quality software that has its design and invariants documented well does not suffer from this.)

                                          I’m told that hardware / FPGA designers tend to be much more territorial about having other people touch their code than software people, because of the difficulty of re-understanding code after it has been edited by someone else, because hardware contains a greater density of tricky invariants than software.

                                          1. 4

                                            Yikes, I wouldn’t touch perl with a barge pole :)

                                            I think that’s unnecessarily mean…

                                            1. 4

I hear what you’re saying, and I think there’s a lot of validity to it, but there’s a lot of subtlety and shades of gray that you’re glossing over with that argument.

So here’s a weak attempt at a counterargument: all that dynamic magic or other trickery that might end up messing things up for beginner/intermediate programmers, or even programmers that just aren’t familiar with the trickery/context/optimizations, is not such a big deal for more experienced programmers, especially ones that would invest enough time to be the primary maintainer.

                                              It’s not that one method is inherently better than the other, it’s that the context of how the code is created selects for a particular style of development, depending on what resources you’re optimizing for. Large company with high turnover and paid employees: “boring” languages that don’t leave a lot of room for rock star programmers. Individual or small team: choose a language that gives space for an individual programmers knowledge to shine.

                                              I saw a talk (online) by Jonathan Blow on “How to program independent games” and, to me at least, I see a lot of similarities. The tactics used as a single developer vs. a developer in a team environment are different and sometimes go against orthodoxy.

                                              There’s no silver bullet and one method is not going to be the clear winner in all contexts, on an individual or corporate level, but, to me at least, the idea that development strategies change depending on the project context (corporate vs. individual) is enlightening and helps explain some of the friction I’ve encountered at some of the jobs I’ve worked at.

                                              1. 1

Right, but if you’re the only person working on an OSS project, you’re not doing it full time, and you’re also (probably) working on a wide variety of other code 9-5 to pay the rent. That basically means that whenever you do get some time for it, you’re coming back to your OSS code without having it all fresh in working memory.

                                              2. 3

                                                (disclaimer: video author)

                                                [The larger problem is] the defects you build because so much is hidden and things you thought would be orthogonal end up touching at the edges.

                                                I 100% agree with this; spooky action at a distance is bad, and increases cognitive load no matter what. However, I think a language can be both dynamic/high context and prevent hidden interaction between supposedly orthogonal points.

                                                Here’s one example: Raku lets you define custom operators. This can make reading code harder for a newcomer (you might not know what the symbols even mean), but is very expressive for someone spending more time in the codebase. Crucially, however, these new operator definitions are lexically scoped, so there’s no chance that someone will use them in a different module or run into issues of broken promises of orthogonality. And, generalizing a bit, Raku takes lexical scope very seriously, which helps prevent many of the sorts of hidden issues you’re discussing.
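A minimal sketch of what that looks like (the ± operator is made up for illustration):

{
    # this operator is visible only inside the enclosing block
    sub infix:<±>($a, $b) { ($a - $b, $a + $b) }
    say 10 ± 2;   # OUTPUT: «(8 12)»
}
# past the closing brace, 10 ± 2 is a compile-time error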

(Some of this will also depend on your usecase/performance requirements. Consider upper-casing a string. In Raku, this is done like 'foo'.uc, which has the signature (available via introspection) of Str:D --> Str:D (that is, it takes a single definite string and returns a new definite string). For my usecase, this doesn’t have any spooky action. But the Zig docs talk about this as an example of hidden control flow in a way that could have performance impacts for the type of code Zig is targeting).

                                                1. 1

                                                  Yikes, I wouldn’t touch perl with a barge pole :)

                                                  Arguably Perl isn’t Raku.

                                              3. 1

                                                Can you imagine Free Software being produced with MUMPS? Some languages are only viable in certain corporate environments.

                                              1. 9

                                                I believe the premise “free software is written by solo devs” is wrong. Free software is written by solo devs and driveby contributors. As driveby contributors do not have any knowledge of all the invariants the software has to maintain, it makes sense to use a programming language with a very strong ability to specify and check invariants.

                                                In conclusion, Ada+Spark, Rust and Coq are the ideal programming languages to write free software. But hold on, isn’t it a bit weird that I came to this conclusion and that these languages also happen to be among the ones I like/admire the most, just like the author likes Raku and came to the conclusion that Raku was the ideal programming language for writing Free Software? Maybe there’s no actual ideal programming language for writing free software and we’re just trying to rationalize our tastes? ;)

                                                1. 2

                                                  (Disclaimer: video author)

                                                  But hold on, isn’t it a bit weird that I came to this conclusion and that these languages also happen to be among the ones I like/admire the most, just like the author likes Raku and came to the conclusion that Raku was the ideal programming language for writing Free Software?

                                                  That’s an important criticism – motivated reasoning is certainly a real problem, and one that’s difficult to guard against.

                                                  If it helps any, I can tell you that I started writing Raku because I viewed it as a really good fit for solo projects; that is, I identified those advantages from the outside, before I was invested in the language. (After a period where I was mostly writing Rust, I wanted to add a dynamic language with a focus on developer productivity, especially for solo work; other top contenders included Racket, Common Lisp, Clojure, Dyalog APL, and Elixir.)

                                                  I believe the premise “free software is written by solo devs” is wrong. Free software is written by solo devs and driveby contributors.

                                                  I mean, I don’t disagree – I’m a frequent driveby contributor and, as I mentioned in the talk, I submitted a pull request in the process of writing the presentation. I submitted that PR because Impress.js is written in JavaScript, which I already know; if I’d been generating my presentation using Pandoc, I probably would not have sent in any PRs, since I don’t know Haskell. So it’s clearly true that a project can get more driveby contributions if it’s written in a well known language.

My point, however, is that this isn’t worth optimizing for. Again, look at Impress.js. It represents something of a best-case scenario for driveby contributors: it’s written in the most widely known programming language, it doesn’t use a framework or other toolset that some programmers won’t know, and it’s extremely well-documented. It even has a plugin system, to make writing third-party code easier. But even with all those advantages, the two maintainers are responsible for > 50% of all commits. If we exclude commits that add three or fewer lines (mostly commits that don’t depend on the programming language, such as fixing typos in the docs or adding a link to another example project), then the maintainers are responsible for > 70% of the commits. I haven’t done the math to weight commits by size, but I imagine that would put the maintainers even further ahead. And, again, this is a best case scenario for driveby commits.

                                                  My claim is that open source projects would be better off prioritizing the 70 (80? 90?) percent of the code that comes from the project’s main developers over the 30/20/10 percent from driveby contributors.

                                                  1. 1

                                                    I think the conclusion was wrong too. The bus factor was informative though.

                                                    In conclusion, Ada+Spark, Rust and Coq are the ideal programming languages to write free software.

Why not? If the software proves valuable, people will learn Ada to maintain it. If it proves useful only to you, you are still in a better place for having written it in Ada. Game programmers work with assembly to add extensions to their favorite games released in the 90s.

                                                    Unlike corporate drones you can optimize free software for personal happiness.

I think the best way to increase free software usage would be by providing an extension model like WordPress, rather than a dialecting programming language. Plugins can also be written in any language if you use REST APIs.

                                                  1. 1

I don’t understand the argument. The line of reasoning is that pointfree programming removes some of the burden of holding state in your head while you read the code, by only naming things that need attention. The things that are not named are defined close to where they are used, so it is not hard to figure out what they are?

But there is as much state as there was before applying the pointfree principles. The same operations are done; you still have to hold the same intermediate states of the data in your head. The choice seems to be: either give the things an easy handle to grasp them (a name), or don’t, and define them close enough that you can look them up quickly.

First, I think those two are not mutually exclusive. You can define things close to where they are used and still give them names. Second, when juggling different pieces of the puzzle in your head, it can be very convenient to have that name, even if it is only used in the scope of a few lines of code. Instead of having a vague concept like “those strings that indicate a player, I think”, something like “names” is much clearer to me.

                                                    As always, it depends on the situation and your mileage may vary, but I am not inclined to use more of these pointfree idioms after reading the article.

                                                    1. 2

I don’t understand the argument. The line of reasoning is that pointfree programming removes some of the burden of holding state in your head while you read the code, by only naming things that need attention. The things that are not named are defined close to where they are used, so it is not hard to figure out what they are?

                                                      But there is as much state as there was before applying the pointfree principles.

                                                      Here’s the main point that, I believe, you are missing: when something has a name, the meaning of that name can change; when it doesn’t, there is nothing to change (and therefore less state). For a shorter (hopefully clearer?) example, compare the following bits of pseudocode

                                                      names = "ALICE BOB CAROL"
                                                      sorted_names = names.sort()
                                                      lc_names = sorted_names.toLowerCase()
                                                      print(lc_names)
                                                      

                                                      versus

                                                      print( "ALICE BOB CAROL".sort().toLowerCase() )
                                                      

                                                      You are correct that “the same operations are done” in both cases. But my claim is that names and sorted_names represent state that doesn’t exist in the second version: those names could be rebound or (depending on the language) the values they point to could be changed.
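To make that concrete, here’s a hedged Raku rendering of the hazard (the names and the rebinding are hypothetical):

my $names = "CAROL BOB ALICE";
my $sorted-names = $names.split(' ').sort.join(' ');
# nothing stops later code in the same scope from reassigning the name...
$sorted-names = "MALLORY";
say $sorted-names.lc;   # OUTPUT: «mallory» – not the sorted list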

                                                      Of course, in an example this short, it’s easy to see that nothing changed sorted_names between where it was defined and where it was used. But that brings me to my other response to your comment. You made the point that

                                                      You can define things close to where they are used and still give them names.

                                                      I agree, you can. But, once you’ve given something a name, you can also refer to it using that name anywhere else in the same scope. You could adopt a rule that you shouldn’t do so, but I’d much rather adopt a style that makes it impossible for me to create that type of issue than to rely on programmer discipline (in the same way I often want the protection of a type system).

                                                      1. 2

                                                        Ah, thanks for the explanation. I think I get the point now and I agree that the second version is much better to understand and safer.

                                                        However, I still think that there is some mental burden that is being swept under the carpet. Your second version can be understood in a glance by all programmers, except the most junior. That is, depending on the situation, not the case with larger examples:

                                                        "ALICE BOB CAROL"
                                                          .sort()
                                                          .toLowerCase()
                                                          .filterOnSecondLetter( ["l", "o"] )
                                                          .reverse()
                                                          .filterOnSecondLetter( ["c", "o"] )
                                                          .reverse()
                                                          ...
                                                          // fifteen more steps
                                                          ...
                                                          .groupByLength()
                                                          .order(-1)
                                                          .formatHTMLList()
                                                          .print()
                                                        

Remembering exactly what it is that we are doing, without names, can become very hard very quickly. You step through all the intermediate states in your head, and if you flinch once (“wait, what was the first row again?”) then you must start all over from the top, because there are no hooks like a name where you can pick up the trail halfway. So I’d say this only works for trivial examples.

                                                        There is, of course, another easy solution to this problem that achieves the same goals (isolation, clarity by only emphasizing what matters) for programmers of all skill levels and that is: create a new scope. Put it in a function and you’re done:

                                                        print( formatWithWeirdLogic( "ALICE BOB CAROL" ) )
                                                        
                                                    1. 2

                                                      Ok, after more diversions and tangents than I can possibly count, and having basically had to hold so much state in my head that I’ve had to page to disk multiple times, I’ve reached the end of the article, and what have I learned?

1. I am now considerably less confident of the point of pointfree than I was at the start… which is impressive, because I was pretty confident there was a point.
                                                      2. I will fire anyone that suggests using Raku in production. Which is impressive, because I’m usually an advocate for polyglot.

And I’m really not being flippant or trying to troll here. I truly came in wanting to gain a new perspective on pointfree and tacit programming… I leave thinking that the line from Spiderman should be “with great power comes the responsibility not to use it”.

                                                      1. 2

I’m sorry that you feel that way; it sounds like neither my coding style nor my writing style is a good fit for your tastes.

                                                        One point where I agree with you, however: I don’t think Raku is a great fit for polyglot programming, at least in the way I assume you mean it. Raku has good FFI and can easily call out to anything with a C ABI, so in that sense it’s good at polyglot. But if by “polyglot” you mean “an environment where different parts of the codebase are written in different languages but are understandable to the entire team (including people specializing in other languages)”, well, then Raku isn’t a great fit. As the 101 example shows, you can write Raku code that is easily understood by people without much Raku experience but, imo, that involves giving up most of what makes Raku powerful. At that point, you might as well just use Python/Ruby/JavaScript – and, since everyone already knows those, you can support a polyglot environment.
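(To be concrete about the FFI point: with NativeCall, binding a C function is a one-liner. A minimal sketch, assuming libc’s getpid is visible in the current process:)

use NativeCall;
sub getpid(--> int32) is native {*}   # resolve the symbol from the running process
say getpid();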

(As I guess is clear, I believe that the advantages of deep mastery of a set of idioms outweigh the benefits of a polyglot setup, but I’m happy to agree to disagree on that point)

                                                      1. 5

My first contact with tacit programming came from J, and it left me amazed by its potential. Looking at other languages implementing it, the results seem kind of hacky by comparison – it goes far beyond having pipe-ish operators. I remember discovering the concept of hooks & forks and have never seen it in other languages. Generalized composition techniques – being able to create a function by chaining/composing a train of functions – are one of the aspects of tacit programming that I really like, and they may (or may not) enhance self-documented code, IMHO.

I wish to see more seamless integration of tacit programming concepts in the future. Haskell’s pointfree style is a bit disappointing due to its heavy use of the dot (.) everywhere.

                                                        1. 5

My first contact with tacit programming came from J, and it left me amazed by its potential.

                                                          I had a similar first contact experience – except that it was Dyalog APL in my case, so the experience came with a lot more non-ASCII characters.

                                                          I remember discovering the concept of hooks & forks and never seen it in other languages.

                                                          I’m not sure that any non-array language will ever be able to quite match J/APL, etc. But Raku comes far closer than I ever thought I’d see. For example, the docs you link say:

                                                          3 hours and 15 minutes is 3.25 hours. A verb hr, such that (3 hr 15) is 3.25, can be written as a hook. We want x hr y to be x + (y%60) and so the hook is:

                                                             hr =: + (%&60)
                                                             3 hr 15
                                                          3.25
                                                          

                                                          You can do essentially the same thing in Raku, with only a few more characters:

                                                             my &hr = * + * / 60;
                                                             3 [&hr] 15
                                                          3.25
                                                          

                                                          (Admittedly, this isn’t idiomatic in Raku. In particular, I’m not sure I’ve ever seen an infix function call in real Raku code. But still!)

                                                          1. 2

I just began to look at Raku and will be learning a bit of it during the Christmas holidays; it really seems a versatile language and a fun one. I realized a long time ago that non-array languages can’t provide the same level of expressiveness and compactness as J/APL for tacit programming.

What characterizes idiomatic code in Raku when there are so many ways to do it?

And looking at your example: what if I want to do something similar, but with the average function? In J,

                                                            avg =: +/%#

Using the Whatever star doesn’t seem to work here, since each * implies a different positional argument. Using a pointy block,

                                                            my &avg = -> @a { @a.sum / @a.elems}

and simply using a block with the topical variable:

                                                            my &avg = {$_.sum / $_.elems}

                                                            Can we go further than that or take another approach to express it?

                                                            1. 1

                                                              Can we go further than that or take another approach to express it?

                                                              There’s also the placeholder parameter route via the ^ twigil:

                                                              my &avg = { @^a.sum / @^a.elems } 
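
Any of those forms can then be called like an ordinary function (assumed usage):

say avg([1, 2, 3, 4]);   # OUTPUT: «2.5»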
                                                              
                                                        1. 4

                                                          This was really pleasant to read and was nice because it sounds like the author really just enjoys using Raku. Which is, perhaps, an underrated feature of our tools.

                                                          1. 2

                                                            This was really pleasant to read

                                                            Thanks!

                                                            it sounds like the author really just enjoys using Raku

                                                            I definitely do. One of Raku’s key design goals is to be “optimized for fun” (-Ofun). This isn’t because Raku was built by a bunch of programmers who want to enjoy using it (though it was!). More importantly, we believe that making a language -Ofun makes it better: software is fundamentally a craft, and making the tools of a craft more enjoyable to use naturally leads to an improvement in the finished product.

                                                          1. 3

                                                            This post is day 1 of the 2020 Raku Advent Calendar, so there will be another 24 daily posts on Raku-related topics between now and Christmas.

                                                            The article links to the Advent of Raku 2020 Git repo, which is collecting Raku solutions to Advent of Code. The day one solutions have already presented multiple different approaches – just as you might expect from Raku!

                                                            1. 29

                                                              The 6-week release cycle is a red herring. If Rust didn’t have 6-week cycles, it would have bigger annual releases instead, but that has no influence on the average progress of the language.

                                                              It’s like slicing the same pizza in either 4 or 42 slices, but complaining “oh no, I can’t eat 42 slices of pizza!”

                                                              Rust could have had just 4 releases in its history: 2015 (1.0), 2016 (? for errors), 2018 (new modules) and 2020 (async/await), and you would call them reasonably sized, each with 1 major feature, and a bunch of minor standard library additions.

                                                              Async/await is one major idiom-changing feature since 2015 that actually caused churn (IMHO totally worth it). Apart from that there have been only a couple of syntax changes, and you can apply them automatically with cargo fix or rerast.

                                                              1. 17

                                                                It’s like slicing the same pizza in either 4 or 42 slices, but complaining “oh no, I can’t eat 42 slices of pizza!”

                                                                It’s like getting one slice of pizza every 15 minutes, while you’re trying to focus. I like pizza, but I don’t want to be interrupted with pizza 4 times. Being interrupted 42 times is worse.

                                                                Timing matters. Treadmills aren’t fun as a user.

                                                                1. 13

Go releases more frequently than Rust, and I don’t see anyone complaining about that. Go has had 121 releases, while Rust has had less than half that.

                                                                  The difference is that Go calls some releases minor, so people don’t count them. Rust could do the same, because most Rust releases are very minor. If it had Go’s versioning scheme it’d be on something like v1.6.

                                                                  1. 20

                                                                    People aren’t complaining about the frequency of Go releases because Go doesn’t change major aspects of the language, well, ever. The most you have to reckon with is an addition to the standard library. And this is a virtue.

                                                                    1. 8

                                                                      So, what major aspects of the language changed since Rust 1.0, besides async and perhaps the introduction of the ? operator?

                                                                      1. 10

                                                                        The stability issues are more with the Rust ecosystem than the Rust language itself. People get pulled into fads and then burned when they pay the refactoring costs to move to the next one. Many of those fad crates are frameworks that impose severe workflow constraints.

Go is generally far more coherent as an overall ecosystem. This was always the intent. Rust is not so opinionated and structured. This leads to benefits and issues. Lots of weird power plays where people write frameworks to run other people’s code that would just be blatantly unnecessary in Go. It’s unnecessary in Rust, too, but people are in a bit of a daze due to the complexity flying around them, and it’s sometimes not so clear that they can just rely on the standard library for a lot of things without pulling in a stack of 700 dependencies to write an echo server.

                                                                        1. 2

                                                                          Maybe in the server/web part of the ecosystem. I am mostly using Rust for NLP/ML/data massaging and the ecosystem has been very stable.

I have also used Go for several years, but I didn’t notice much difference in the volatility.

                                                                          But I can imagine that it is different for networking/services, because the Go standard library has set strong standards there.

                                                                        2. 6

Modules have changed a bit, but it was an optional change and only required running cargo fix once.

                                                                          Way less disruptive than GOPATH -> go modules migration.

                                                                      2. 5

                                                                        That is kind of the point. I love both Go and Rust (if anything, I’d say I’d like Rust more than Go if working out borrow checker issues wasn’t such a painstaking, slow process), but with Go I can go and update the compiler knowing code I wrote two years ago will compile and no major libraries will start complaining. With Rust, not so much. Even in the very short time I was using it for a small project, I had to change half of my code to use async (and find a runtime for that, etc.) because a single library I wanted to use was ‘async or the highway’.

                                                                        Not a very friendly experience, which is a shame because the language itself rocks.

                                                                        1. 9

                                                                          In Rust you can upgrade the compiler and nothing will break. The Rust team literally compiles all known Rust libraries before making a new release to ensure the release doesn’t break them.

                                                                          The ecosystem is serious about adherence to semver, and the compiler can seamlessly mix new and old Rust code, so you can be selective of what you upgrade. My projects that were written for Rust 1.0.0 work fine with the latest compiler.

                                                                          The async addition was the only change which caused churn in the ecosystem, and Rust isn’t planning anything that big in the future.

                                                                          And Go isn’t flawless either. I can’t upgrade deps in my last Go project, because the migration to Go Modules is causing me headaches.

                                                                          1. 3

                                                                            Ah, yeah, the migration to modules was a shit show. It took me about six months to be able to move a project to modules because a bunch of the dependencies took a while to upgrade.

                                                                            Don’t get me wrong, my post wasn’t a criticism of Rust. As I said, I really enjoy the language. But any big change like async introduces a paradigm shift that makes the experience extra hard for newcomers. To be fair to Rust, Python took 3 iterations or so until they figured out a proper interface for async, while Rust figured out the interface and left the implementation to the reader… which has created another rift for some libraries.

                                                                    2. 4

                                                                      I can definitely agree with the author: since I do not write Rust in my day job, it is pretty hard for me to keep up with all the minor changes in the language. Also, as already stated in the article, the 6-week release cycle exacerbates the problem.

                                                                      I’m not familiar with Rust’s situation, but from my own corporate experience, frequent releases can be awful because features are iterated on continuously. It would be really nice to just learn the final copy of something rather than all the intermediate steps to get there.

                                                                      1. 3

                                                                        Releasing “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                                                        There’s a chicken-egg problem here. Even though Rust has nightlies and betas, features are adopted and used in production only after they’re declared stable. But without using a feature for real, you can’t be sure it’s right.

                                                                        Besides, lots of changes in 6-week releases are tiny, like a new command-line flag, or allowing a few more functions to be used as initializers of global variables.
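
                                                                        (As a sketch of just how tiny: str::len becoming callable in constant initializers, in Rust 1.39, is representative of the scale. The constant name here is invented.)

                                                                          // The whole "feature": one more stdlib function usable when
                                                                          // initializing a constant (str::len became const-callable in 1.39).
                                                                          const GREETING_LEN: usize = "hello".len();

                                                                          fn main() {
                                                                              assert_eq!(GREETING_LEN, 5);
                                                                          }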

                                                                        1. 6

                                                                          Releasing “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                                                          Design-by-committee can be a lot more thoughtful than design-by-novice. I think this is one of the greatest misconceptions of agile.

                                                                          Many of the great things we take for granted were done by committee, including our internet protocols and core infrastructure. There’s a lot of real good engineering in there. Of course there are research projects and prototyping, which are super useful, but it’s a full-time job to keep up with developments in research. Most people don’t have to care to learn something until it’s stable and published.

                                                                          1. 2

                                                                            Sorry, I shouldn’t have used the emotionally charged name “committee”. It was not the point.

                                                                            The point is that language features need iteration to be good, but for a language with a strong stability guarantee the first iteration must be the final one.

                                                                            So the way around this impossible requirement is to release only the obvious core parts, so that libraries can iterate on the rest. And the rest gets blessed as official only after it proves useful.

                                                                            Rust has experience here: the first API of Futures turned out to have flaws. Some interfaces caused unfixable inefficiencies. Built-in fallibility turned out to be more annoying than helpful. These things came to light only after the design was “done” and people used it for real and built large projects around it. If Rust had held that back and waited for the full async/await to be feature-complete, it’d be a worse design, and it wouldn’t have been released yet.

                                                                          2. 3

                                                                            Releasing “final copy” creates design-by-committee.

                                                                            I’m not convinced that design-by-crowd is substantively different from design-by-committee.

                                                                            1. 1

                                                                              Releasing “final copy” creates design-by-committee. Features have to get real-world use to prove they’re useful.

                                                                              There’s a chicken-egg problem here. Even though Rust has nightlies and betas, features are adopted and used in production only after they’re declared stable. But without using a feature for real, you can’t be sure it’s right.

                                                                              I deny the notion that features must be stabilized early so that they get widespread or “production” use. It may well be the case that some features don’t receive enough testing on nightly/beta, and that in order to get more users a feature must hit stable, but limited testing on nightly or beta is not a reason to stabilize a feature. Either A) wait longer until it’s been more thoroughly tested on nightly/beta, or B) find a manner to get more testers of features on nightly/beta.

                                                                              I’m not necessarily saying that’s what happened with Rust, per se, but it’s close; I’ve seen the sentiment expressed several times over my time with Rust (since the 0.9 days).

                                                                          3. 10

                                                                            It’s not a red herring. There might be bigger annual releases if there weren’t 6-week releases, but you’re ignoring the main point: Rust changes frequently enough to make the 6-week release cycle meaningful. The author isn’t suggesting the same frequency of changes delivered less often, but a lower frequency of changes: low enough, perhaps, that releasing every 6 weeks would see a few “releases” go by with no changes at all.

                                                                            No one is trying to make fewer slices out of the pizza. They’re asking for a smaller pizza.

                                                                            1. 7

                                                                              How is adding map_or() as a shorthand for map().unwrap_or() a meaningful language change? That’s the scale of changes for the majority of the 6-week releases. For all but a handful of releases, the changes are details that you can safely ignore.
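
                                                                              (For readers who don’t write Rust, a sketch of the two interchangeable spellings under discussion; the values are made up:)

                                                                                fn main() {
                                                                                    let opt: Option<i32> = Some(2);

                                                                                    // Before map_or(): chain map() and unwrap_or().
                                                                                    let a = opt.map(|n| n * 2).unwrap_or(0);

                                                                                    // With map_or(): the same thing in one call (default value first).
                                                                                    let b = opt.map_or(0, |n| n * 2);

                                                                                    assert_eq!(a, b);
                                                                                }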

                                                                              Rust is very diligent about documenting every tiny detail in release notes, so if you don’t pay attention and just gloss over them, only counting the number of headings, you’re likely to get the wrong impression of what is actually happening.

                                                                              1. 3

                                                                                How is adding map_or() as a shorthand for map().unwrap_or() a meaningful language change?

                                                                                I think that’s @ddevault’s point: the pizza just got bigger but it didn’t really get better. It’s a minor thing that doesn’t really matter, but it happens often and it’s something you may need to keep track of when you’re working with other people.

                                                                                1. 9

                                                                                  Rust also gets criticised for having too small a standard library that needs dependencies for the most basic things. And when it finally adds these basic things, that’s bad too…

                                                                                  But the thing is — and it’s hard to explain to non-users of the language — that additions of things like map_or() are not burdensome at all. From the inside, they’re usually received as “finally! What took you so long!?”.

                                                                                  • First, it follows a naming pattern already used elsewhere. It’s something you’d expect to exist already, not really a new thing. It’s more like a bugfix for “wtf? why is this missing?”.

                                                                                    Back-filling of outrageously missing features is still a common thing in Rust. 1.0 was an MVP rather than a finished language. For example, Rust waited 32 releases before adding big-endian/little-endian swapping (see the sketch after this list).

                                                                                  • There’s cargo clippy, which will flag unidiomatic code, so you don’t really need to keep track of it.

                                                                                  • It’s OK to totally ignore this. If your code worked without some new stdlib function, it doesn’t have to care. And these changes are minor, so it’s not like you’ll need to read a book on a new method you notice. You’ll know what it does from its name, because Rust is still at the stage of adding baby things.
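
                                                                                  (The sketch promised above, assuming the endianness additions in question are the to_be_bytes()/from_be_bytes() family stabilized in Rust 1.32:)

                                                                                    fn main() {
                                                                                        let n: u32 = 0x1234_5678;

                                                                                        // Integer -> big-endian bytes (stabilized in Rust 1.32)...
                                                                                        assert_eq!(n.to_be_bytes(), [0x12, 0x34, 0x56, 0x78]);

                                                                                        // ...and back again.
                                                                                        assert_eq!(u32::from_be_bytes([0x12, 0x34, 0x56, 0x78]), n);
                                                                                    }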

                                                                                  1. 7

                                                                                    In the Haskell world, there’s a piece of folklore called the Fairbairn Threshold, in part because we have very clean syntax for composing small combinators:

                                                                                    The Fairbairn threshold is the point at which the effort of looking up or keeping track of the definition is outweighed by the effort of rederiving it or inlining it.

                                                                                    The term was in much more common use several years ago.

                                                                                    Adding every variant on every operation to the Prelude is certainly possible given infinite time, but this of course imposes a sort of indexing overhead mentally.

                                                                                    The primary use of the Fairbairn threshold is as a litmus test to avoid giving names to trivial compositions, as there are a potentially explosive number of them. In particular any method whose definition isn’t much longer than its name (e.g. fooBar = foo . bar) falls below the threshold.

                                                                                    There are reasonable exceptions for especially common idioms, but it does provide a good rule of thumb.

                                                                                    The effect is to encourage simple combinators that can be used in multiple situations, while avoiding naming the explosive number of combinations of those combinators.

                                                                                    Given n combinators I can probably combine two of them in something like O(n^2) ways, so without the threshold as a rule of thumb you wind up with a much larger library, but no real greater utility and much higher cognitive overhead to track all the combinations.

                                                                                    Further, the existence of some combinations tends to drive you to look for other ever larger combinations rather than learn how to compose combinators or spot the more general usage patterns yourself, so from a POSIWID perspective, the threshold encourages better use of the functional programming style as well.

                                                                                2. 1

                                                                                  Agreed. It has substantially reduced my happiness all around:

                                                                                  • It’s tiring to deal with people who (sincerely) think adding features improves a language.
                                                                                  • It’s disappointing that some people act like having no deprecation policy is something that makes a language “stable”/“reliable”/good for business use.
                                                                                  • It’s mind-boggling to me that the potential cost of removing a feature is never factored into the cost of adding it in the first place.

                                                                                  Mainstream language design is basically living with a flatmate that is slowly succumbing to his hoarding tendencies and simply doesn’t realize it.

                                                                                  What I have done to keep my sanity is to …

                                                                                  • freeze the version of Rust I’m targeting to Rust 1.13 (I’m not using ?, but some dependencies need support for it), and
                                                                                  • play with a different approach to language design that makes me happier than just watching the constant mess of more-features-are-better.
                                                                                  1. 2

                                                                                    Mainstream language design is basically living with a flatmate that is slowly succumbing to his hoarding tendencies and simply doesn’t realize it.

                                                                                    I like that analogy, but it omits something crucial: it equates “change” with “additional features/complexity” – but many of the changes to Rust are about removing special cases and reducing complexity.

                                                                                    For example, it used to be the case that, when implementing a method on an item, you could refer to the item with Self – but only if the item was a struct, not if it was an enum. Rust 1.37 eliminated that restriction, removing one thing for me to remember.
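
                                                                                    (A sketch of the restriction that 1.37 removed; the enum itself is invented for illustration:)

                                                                                      enum Shape {
                                                                                          Circle(f64),
                                                                                          Square(f64),
                                                                                      }

                                                                                      impl Shape {
                                                                                          fn unit_circle() -> Self {
                                                                                              // Before Rust 1.37 this had to be written Shape::Circle(1.0);
                                                                                              // now Self refers to the enum just as it would to a struct.
                                                                                              Self::Circle(1.0)
                                                                                          }
                                                                                      }

                                                                                      fn main() {
                                                                                          let _c = Shape::unit_circle();
                                                                                      }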

                                                                                    Other changes have made standard library APIs more consistent, again reducing complexity. For example, the Option type has long had a map_or method that calls a function on the contained Some value or, if the Option contains None, uses a default value. However, until Rust 1.41, you had to remember that Results didn’t have a map_or method (even though they have nearly all the other Option methods). Now, Results have that method too, making the standard library more consistent and simpler.
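
                                                                                    (The same call on a Result, as of 1.41; a minimal sketch:)

                                                                                      fn main() {
                                                                                          let ok: Result<i32, &str> = Ok(2);
                                                                                          let err: Result<i32, &str> = Err("nope");

                                                                                          // Since Rust 1.41, Result has the map_or that Option has long had:
                                                                                          assert_eq!(ok.map_or(0, |n| n * 2), 4);
                                                                                          assert_eq!(err.map_or(0, |n| n * 2), 0);
                                                                                      }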

                                                                                    I’m not claiming that every change has been a simplification; certainly some have not. (For example, did we really need todo!() as a shorter way to write unimplemented!() when they have exactly the same effect?).

                                                                                    But some changes have been simplifications. If Rust is a flatmate that is slowly buying more stuff, it’s also a flatmate that’s throwing things out in an effort to maintain a tidy space. Which effect dominates? As a pretty heavy Rust user, my personal feeling is that the language is getting simpler over time, but I don’t have any hard evidence to back that up.

                                                                                    1. 3

                                                                                      But some changes have been simplifications.

                                                                                      I think what you are describing is a language that keeps filling in gaps and oversights; they are probably not the worst kind of additions, but they are additions.

                                                                                      If Rust is a flatmate that is slowly buying more stuff, it’s also a flatmate that’s throwing things out in an effort to maintain a tidy space.

                                                                                      What has Rust thrown out? I have trouble coming up with even a single example.

                                                                                      As a pretty heavy Rust user, my personal feeling is that the language is getting simpler over time, but I don’t have any hard evidence to back that up.

                                                                                      How would you distinguish between the language getting simpler and you becoming more familiar with the language?

                                                                                      I think this is the reason why many additions are “small, simple, obvious fixes” to expert users, but for new/occasional users they present a mountain of hundreds of additional things that have to be learned.

                                                                                      1. 1

                                                                                        How would you distinguish between the language getting simpler and you becoming more familiar with the language?

                                                                                        That’s a fair question, and is part of the reason I added the qualification that I can only provide my personal impression – without data, it’s entirely possible that I’m mistaking my own familiarity for language simplification. But I don’t believe that’s the case, for a few reasons.

                                                                                        I think this is the reason why many additions are “small, simple, obvious fixes” to expert users, but for new/occasional users they present a mountain of hundreds of additional things that have to be learned.

                                                                                        I’d like to focus on the “additional things” part of what you said, because I think it’s key: if a feature is revised so that it’s consistent with several other features, then that’s one fewer thing for a new user to learn, not one more. For example, match used to treat & a bit differently and require as_ref() method calls to get the same effect, which frequently confused people learning Rust. Now, & works the same with match as it does with the rest of the language. Similarly, the 2015 Edition module system required users to format their paths differently in use statements than elsewhere. Again, that confused new users (and annoyed pretty much everyone) and, again, it’s been replaced with a simpler, more consistent, and easier-to-learn system.
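
                                                                                        (A sketch of the match change, for concreteness; the helper function is invented, and the old spelling appears in the comment:)

                                                                                          // Matching on a borrowed Option. Before "match ergonomics"
                                                                                          // (Rust 1.26), this required opt.as_ref() plus explicit
                                                                                          // & patterns / ref bindings; now it just works, with `s`
                                                                                          // automatically bound as a &String.
                                                                                          fn len_or_zero(opt: &Option<String>) -> usize {
                                                                                              match opt {
                                                                                                  Some(s) => s.len(),
                                                                                                  None => 0,
                                                                                              }
                                                                                          }

                                                                                          fn main() {
                                                                                              assert_eq!(len_or_zero(&Some(String::from("hi"))), 2);
                                                                                              assert_eq!(len_or_zero(&None), 0);
                                                                                          }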

                                                                                        On the other hand, you might have a point about occasional Rust users – if a user understood the old module system, then switching to the 2018 Edition involves learning something new. For the occasional user, it doesn’t matter that the new system is simpler – it’s still one more thing for them to learn.

                                                                                        But for a new user, those simplifications really do make the language simpler to pick up. I firmly believe that the current edition of the Rust Book describes a language that is simpler and more approachable – and that has fewer special cases you have to “just remember” – than the version of the language described in the first edition.

                                                                                        1. 1

                                                                                          A lot of effort is spent “simplifying” things that “simply” shouldn’t have been added in the first place:

                                                                                          • do we really need two different kind of use paths (relative and absolute)?
                                                                                          • do we really need both if expressions and pattern matching?
                                                                                          • do we really need ? for control flow?
                                                                                          • do we really need to have two different ways of “invoking” things, (...) for methods (no support for named parameters) and {...} for structs (support for named parameters)?
                                                                                          • do we really need the ability to write foo for foo: foo in struct initializers?

                                                                                          Most often the answer is “no”, but we have it anyway because people keep conflating familiarity with simplicity.

                                                                                          1. 2

                                                                                            You’re describing redundancy as if it were some fault, but languages without any redundancy are a Turing tarpit. Not only do we not need two kinds of paths, the whole use statement is unnecessary. We don’t even need if; Smalltalk could live without it. We don’t really need anything more than a lambda and a Y combinator, or one instruction.

                                                                                            I’ve used Rust v0.5, before it had if let and before there was try!(). It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.
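
                                                                                            (For a feel of what that was like, a sketch on a Result rather than an Option: the first function is the only style early Rust allowed; the second uses ?, the successor to try!().)

                                                                                              use std::num::ParseIntError;

                                                                                              // The old way: every fallible step spelled out with a full match.
                                                                                              fn double_old(s: &str) -> Result<i32, ParseIntError> {
                                                                                                  match s.parse::<i32>() {
                                                                                                      Ok(n) => Ok(n * 2),
                                                                                                      Err(e) => Err(e),
                                                                                                  }
                                                                                              }

                                                                                              // With ? the same logic flattens out.
                                                                                              fn double_new(s: &str) -> Result<i32, ParseIntError> {
                                                                                                  Ok(s.parse::<i32>()? * 2)
                                                                                              }

                                                                                              fn main() {
                                                                                                  assert_eq!(double_old("21"), Ok(42));
                                                                                                  assert_eq!(double_new("21"), Ok(42));
                                                                                              }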

                                                                                            So yes, we need these things, because convenience is also important.

                                                                                            1. 2

                                                                                              You’re describing redundancy as if it was some fault, but languages without any redundancy are a turing tarpit.

                                                                                              I’m very aware of the turing tarpit, and it simply doesn’t apply here. A lack of redundancy is not the problem – it’s the lack of structure.

                                                                                              Not only do we not need two kinds of paths, the whole use statement is unnecessary. We don’t even need if; Smalltalk could live without it. We don’t really need anything more than a lambda and a Y combinator, or one instruction.

                                                                                              Reductio ad absurdum? If you think it’s silly to question why we have both if-then-else and match, why not add ternary operators, too?

                                                                                              It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                                                              Pattern matching on options is pretty much always wrong, regardless of the minimalism of the design. I think the only reason Rust users use it is that it makes the borrow checker happy more easily.

                                                                                              I’ve used Rust v0.5, before it had if let and before there was try!(). It required a full match on every single Option. It was a pure minimal design, and I can tell you it was awful.

                                                                                              In my experience, the difference in convenience between Rust 5 years ago (which I use for my own projects) and Rust nightly (which is used by some projects I contribute to) just isn’t there.

                                                                                              There is no real point in upgrading to a newer version – the only thing I get is a bigger language and I’m not really interested in that.

                                                                                3. 1

                                                                                  This discussion suffers from “Monday-morning quarterbacking” to an extent. We now (after the fact) know which releases of Rust contained more churn than others, “churn” being defined as a change that either introduced a different (usually better, IMO) way of doing something already possible in Rust, or a fundamental change that permeated the ecosystem, either due to being the new idiomatic way or due to being the Next Big Thing that many crates jumped on early. Either way, my code needs to change, due to new warnings (and the ecosystem doesn’t care for warnings) or because many crates are open source and I’ll inevitably get a PR to switch to the new hotness.

                                                                                  With that stated, my actual point is that Rust releases every 6 weeks. I don’t know whether the next release (1.43 at the time of this writing) will contain something that produces churn without closely following upcoming releases. I don’t know if the release after that will contain big changes. So I’m left with either having to follow all releases (every 6 weeks) or closely following upcoming releases. Either way I’m forced to stay in tune with Rust development. For many this is fine. However, in my industry (government), where dependencies must go through audit and the like, it’s really hard to keep up. If Rust had “major” (read: churn-inducing) releases every year, or say every 3 years (at new editions), that would be far, far easier to keep up with. Then I wouldn’t need to check every 6 weeks; I could check every year, or three years, whatever it may be. Minor changes (stdlib additions, etc.) could still happen every 6 weeks, almost as Z releases (in semver X.Y.Z speak), but churn-inducing changes (Y changes) would happen on a set, much slower schedule.

                                                                                  1. 2

                                                                                    When your deps updated to ?, you didn’t need to change anything. When your deps started using SIMD, you didn’t need to change anything. When your deps switched to Edition 2018, you didn’t need to change anything because of that.

                                                                                    Warnings from libraries are not displayed (cap-lints), so even if you use deprecated stuff, nobody will notice. You could sleep through years of Rust changes and not adopt any of them.

                                                                                    AFAIK async/await was the first and only language change after Rust 1.0 that massively changed interfaces between crates, causing a necessary ecosystem-wide churn. It was one change in 5 years.

                                                                                    Releases are backwards compatible, so you really don’t need to pay attention to them. You need to update the compiler to update dependencies, but this doesn’t mean you need to adopt any language changes yourself.

                                                                                    The pain of going through dependency churn is real. But apart from async, it’s not caused by the compiler release cycle. Dependencies won’t stop changing just because the language doesn’t change. Look at JS for example: Node has slow releases with long LTS, the language settled down after ES2016, and IE and Safari put brakes on the speed of language evolution. And yet, everything churns all the time! People invent new frameworks weekly on the same language version.

                                                                                1. 1

                                                                                  Have you considered using reltime()? Then you could just do

                                                                                    let g:time_stamp_start = reltime()

                                                                                    " ...

                                                                                    function! TimeStamp()
                                                                                        " reltime(start) returns the time elapsed since start;
                                                                                        " reltimefloat() converts that to a Float number of seconds.
                                                                                        let l:second_offset = reltimefloat(reltime(g:time_stamp_start))
                                                                                    endfunction
                                                                                  
                                                                                  1. 2

                                                                                    I actually didn’t know about reltime(), so thanks for pointing that out.

                                                                                      However, I think it would add complexity here, rather than remove it. It looks like it would give me better-than-second precision, so I’d need some code to deal with that extra precision (in my case, just throwing it away), and I think that extra code would more than cancel out the savings from computing the relative time.

                                                                                    Still glad I learned about reltime(), though—vim is full of surprises.

                                                                                  1. 1

                                                                                      Assigning to a:current_time is kind of a strange thing to do. The a: space is intended for function arguments. I’m actually surprised Vim allows writing to it at all! It doesn’t work when you’re actually using it as an argument:

                                                                                    :fun! Test(arg)
                                                                                    :  let a:arg = 'zzzz'
                                                                                    :endfun
                                                                                    
                                                                                    :call Test('zxc')
                                                                                    E46: Cannot change read-only variable "a:arg"
                                                                                    

                                                                                    In :help a:var it’s mentioned that “The a: scope and the variables in it cannot be changed, they are fixed”, so this actually sounds like a bug(?)

                                                                                      Using the local scope l:current_time is much more standard (you don’t actually need a variable there; you can insert \<C-r>=strftime(..) directly).

                                                                                    You may also want to consider making g:time_stamp_enabled a buffer-local variable, instead of a global one. That way enabling the timestamp insertion will only work on the current buffer, and not that other Python buffer you have or whatnot.

                                                                                    1. 2

                                                                                      Assigning to a:current_time is kind of a strange thing to do. The a: space is intended for function arguments.

                                                                                      Thanks—I didn’t realize that (I’m definitely not a vimscript expert!). I’ll update the code accordingly.

                                                                                    1. 2

                                                                                      Has the author used Linux on their primary computer?

                                                                                      If you choose a fully fledged, GUI friendly distro, SO MUCH can go wrong. I have spent a long time battling with bloated distributions when stuff breaks. I realise that in the end, I don’t care about most of the system I am using, just a few core apps. If I can fit everything on my system in my head (I admit I don’t know most of what goes on in the kernel), I will be able to fix anything that goes wrong and keep it running smoothly.

                                                                                      Pick a minimal distro with a decent package manager (arch, gentoo, even ubuntu mini). The software used is usually something like:

                                                                                      xorg (maybe wayland soon?), i3, rxvt-unicode, vi, openssh, nvidia, git, mpd, darktable, firefox, networkmanager

                                                                                      A few config files and I’m good to go. So much bloat can be removed when you learn how to use your terminal emulator and a tiling window manager.

                                                                                      1. 1

                                                                                        Has the author used Linux on their primary computer?

                                                                                        I’m not sure if this was intended as a genuine question or as a disguised insult. Treating it as a genuine question, yes, I can assure you that the author has many years of experience running Linux as his daily driver.

                                                                                        Pick a minimal distro with a decent package manager (arch, gentoo, even ubuntu mini). The software used is usually something like: xorg (maybe wayland soon?), i3, rxvt-unicode, vi, openssh, nvidia, git, mpd, darktable, firefox, networkmanager

                                                                                          I agree with this (though I go with dwm and simple-terminal instead of i3 and rxvt-unicode).

                                                                                          That said, other people have different styles that work for them—and, despite how much I love living in the terminal—I’m not going to assume that anyone who prefers a different workflow doesn’t know how to use the terminal. And I’m especially not going to make that assumption when they’ve been using Linux far longer than I have.

                                                                                      1. 4

                                                                                          I don’t think ‘lightweight Linux’ is only about low resource use. Another reason people use ‘lightweight’ systems such as Slackware, Arch, or the BSDs is that such systems follow a KISS approach where it is possible to understand and know the whole system (to some extent). Also, in general, such systems are easier to debug than more complex distributions.

                                                                                        At any rate, I don’t see why the author cares. It is nice that there are open UNIX systems that cater to different audiences.

                                                                                        1. 2

                                                                                          I actually made a similar point to the author in a previous discussion (https://fosstodon.org/@kev/100425413313343410) and he had a fairly nuanced reply.

                                                                                          He argued that “lightweight” isn’t the same as “minimal/simple”. Some distros might be both, but the two concepts are distinct (at least in his usage), and his claim was that people focus too much on a distro being lightweight—perhaps at the expense of focusing on how minimal it might be.

                                                                                            I don’t agree that this is common usage (as I said in the previous thread), but it does cast the article in a different light—the author wasn’t arguing against systems like Slackware or Arch that might be easier to debug; he was just arguing against focusing on resource usage as a key criterion for evaluating distros (as distinct from simplicity, which he would agree is an important factor).

                                                                                        1. 3

                                                                                          Although this site is built and maintained by Netlify, it seems to use publicly available/verifiable data and not have a particular bias toward generators that work well with Netlify’s software.

                                                                                          1. 2

                                                                                              It’s refreshing to see, isn’t it? I know that from Netlify’s point of view it provides an avenue of contact with their target market; but by also being open source, it provides a good public service whose neutrality can be verified.