1. 43
    1. 20

      I’ve kinda done something like this over the years, only less purposefully. And I thought I’d settled on Nim, but I rage-quit it a few months ago after a particularly egregious “it’s my language and I can put in any footguns I want*” decree by the Benevolent Dictator-For-Life on the forum.

      Anyway for now I’ve ended up at Swift. (And yet I keep coding in C++ because Reasons. Sigh.)

      I wonder why the Swift version is so slow. Arithmetic is overflow-checked by default, and arrays bounds-checked, but the OP said they turned that off. Maybe using (ref-counted) classes where structs would have sufficed? Or passing arrays around in a way that defeats the COW optimizations?
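
      For illustration, here’s the kind of difference I mean. This is just a sketch with made-up types, not the actual Yahtzeebot code:

      ```swift
      // Hypothetical example, not from the repo. A class instance is
      // heap-allocated and ref-counted: every copy or pass touches the
      // retain/release machinery.
      final class ScoreBox { var value = 0 }

      // A struct of plain values lives inline (on the stack or in array
      // storage): no ref-counting, no per-instance allocation.
      struct Score { var value = 0 }

      // Passing an array by value is cheap thanks to COW… until the callee
      // mutates its copy, which forces a full element-by-element duplication:
      func bumped(_ scores: [Score]) -> [Score] {
          var copy = scores    // storage still shared here
          copy[0].value += 1   // now the whole array is duplicated
          return copy
      }

      // inout mutates in place and avoids the copy, as long as nothing else
      // holds a reference to the same storage:
      func bumpInPlace(_ scores: inout [Score]) {
          scores[0].value += 1   // unique storage: no copy
      }
      ```

      Do the bumped-style pattern in a hot loop and you pay for an allocation and a memcpy on every iteration.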

      * BEGIN RANT There’s a newish Nim feature that lets you declare references as non-null, and uses control-flow analysis to prevent you from assigning/passing a null value, at compile time. Awesome! Only, the analysis was failing in lots of my code, complaining that a variable might be null when I had just checked it right above. After I made some reduced test cases and reported the problem, the BDFL told me (brusquely) that the control-flow analysis ignores “return” statements. And that this is not a bug and will not be fixed … because you should be assigning to “result” instead of early-returning (praise Wirth!). This despite “return” being a perfectly cromulent part of the language that’s been around for ages. At this point I decided a single BDFL, esp. a cranky German one [I’m from a German family myself, I know the type], is probably a misfeature and I should look elsewhere. END RANT
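
      (For contrast, this check-then-early-return shape is exactly what Swift’s guard statement gives you, with no complaints from the compiler. Names below are made up, just to illustrate the pattern:)

      ```swift
      final class Node { var label = "" }

      func use(_ node: Node) { print(node.label) }

      func process(_ node: Node?) {
          guard let node = node else { return }  // early return when nil
          use(node)  // node is a non-optional Node from here on
      }
      ```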

      1. 9

        Do you have a link to the forum discussion?

        1. 9

          This must be what he means – but it comes up all the time. There must be a dozen GitHub issues/RFCs/forum threads where it is referenced. Araq is so strongly opinionated about this one that it is surely a “backing” example for his line in the Zen of Nim: if it’s hard for the compiler to reason about, then it’s hard for people to reason about, and so you should not do it.

          While I love Nim, I disagree on this line item. I think people can be bad at reasoning in situations where computers can be good, and this can matter to program notation - like closing parentheses in Lisp vs. indentation, where humans are just bad at “base-1/unary” counting after about 3..4 things (hence the fifth diagonal slash across ||||, as another example). Even adamant Lispers will say “Let your editor indent/do that for you!” - a paraphrase of “a program is better at counting” - they just recommend a different program than the compiler. ISTM early return is a similar case (but more in the technical weeds).

      2. 7

        That is really discouraging for a language I’ve had a lot of faith in. Thanks for sharing.

      3. 5

        I’m sorry to hear that you fell out of love with Nim. I always enjoyed hearing your perspective on the language.

      4. 2

        I wonder why the Swift version is so slow.

        If you want to tinker, this looks to be the Swift source. I reproduced the 1.5-hour running-time estimate by commenting out line 364’s call to the verbose printing function output_state_choice. Commit history suggests this was left out during the test. Despite some references to the number of cores in the code, I found it used just one core, though I don’t know how the other implementations behaved. Memory grows steadily for at least the first minute or two, so you could be onto something with copy-on-write behavior.
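
        If anyone wants to experiment further, a low-effort way to fan this kind of brute-force work across all cores in Swift looks something like the sketch below. It’s only a shape; evaluateChunk is a hypothetical stand-in for whatever the solver does per chunk:

        ```swift
        import Dispatch

        // Hypothetical stand-in for the solver's per-chunk work.
        func evaluateChunk(_ i: Int) -> Double {
            (0..<1_000_000).reduce(0.0) { $0 + Double(($1 &+ i) % 7) }
        }

        let chunks = 64  // finer-grained than the core count, to even out load
        var results = [Double](repeating: 0, count: chunks)

        // Each index is written by exactly one task, and the buffer pointer's
        // subscript is non-mutating, so these concurrent writes don't race.
        results.withUnsafeMutableBufferPointer { buf in
            DispatchQueue.concurrentPerform(iterations: chunks) { i in
                buf[i] = evaluateChunk(i)
            }
        }

        print(results.reduce(0, +))
        ```

        That would at least show whether the algorithm parallelizes before anyone blames the language.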

        1. 2

          For me, the Nim version hits ~590% CPU utilization (if 1 core = 100%). I boosted NUM_THREADS from 6 to 16 on a 16-core machine and that utilization didn’t change. So making the work partitioning more fine-grained could maybe yield a 10x better time on a 64-core AMD 3990X – depending upon contention, of course. { Famous last words, I know! :-) }

      5. 2

        The beauty of open source is that Nim can be forked.

        1. 9

          Having and maintaining your own private language seems like a bad idea. And unless you have a LOT of free time and some very good ideas, trying to attract supporters away from an already-niche language seems like a bad idea, too.

          1. 3

            I disagree. If the one or two central people who maintain an open source project are not easy to cooperate with, then it can be very fruitful over time if someone forks it.

            Also, forking a project does not necessarily mean that a single maintainer needs to do all the work. Receiving pull requests does not need to be that time-consuming.

            In addition, some forks can be maintenance/“LTS” projects; they don’t have to keep the same pace of development to be useful. Sometimes a few selected patches going in can mean more to users than a continuous stream of features.

      6. 2

        You’re welcome to try D. The language is awesome. However, such BDFLs have teratons of focused complaints to answer, so it’s not necessarily a good idea to escalate the problem on the internet instead of being patient and helping to fix it.

      7. 1

        You’re dropping the whole language because of one extremely niche feature that 99.9% of developers would never stumble on? You know it’s not the language that looks bad in this story, right?

        1. 5

          I don’t think you understand the feature he’s complaining about correctly, because it seems to me to be very common, as attested by @cblake’s comment that “it comes up all the time”.

          1. 5

            There’s cross-talk here. The specific issue, A) strictNotNil supporting early return, is (likely) a small niche, while B) early return/structure in general is much bigger (& what I meant by occurring regularly, e.g. here). Lest quick/casual readers be confused, early return/break are absolutely accepted {“cromulent” :-) } parts of Nim - not going away. Araq pushed back on hard work for A) that he feels other PLs skip, while being overwhelmed getting Nim 2 into shape (& according to git he did not write the misleading doc in dispute).

            @snej’s earlier comment (& that Forum thread) indicate he was ok with incompleteness & improving docs. Dropping Nim was more related to a “cranky single German BDFL” - a feature of a community, not a programming language. (I agree “completely and objectively wrong” was needlessly inflammatory rhetoric, but “put in footguns” is also.) Anyway, like @agent281 I am also sorry to see him leave!

            These disagreements are often about division of labor & perspective-taking, not “objectivity” (esp. given the current state of human psychology as a science). To connect to my prior example, Lispers also complain that offside rules make macro writing “objectively” harder, at the cost of “objectively less readable” syntax. Both compiler writers & language users understandably guard their time, driving much emotion.

            1. 3

              I honestly never went back to look at that thread after my last reply. I probably won’t.

              Maybe I’ll try Nim again sometime. I turned to Swift on the rebound and wrote a few thousand lines over the holidays (not my first waltz with that language) and largely enjoyed it except for the incredibly awkward APIs for working with unsafe constructs. (I wasn’t bridging to C, just marshaling data for a binary network protocol.) Which is to say I’m not totally happy with Swift. I’m still waiting for the perfect language.

        2. 1

          It was the straw that broke the camel’s back. And where did you get the idea that null checking or the “return” statement are extremely niche features? Ok, null checking isn’t on by default in Nim, but it’s basically table stakes in most new languages ever since Hoare’s “billion dollar mistake” went viral.

    2. 13

      Note that while the article only links to the Nim one, the implementations for all the various languages he tried can be seen via: https://github.com/mode80?tab=repositories&q=Yahtzee&type=&language=&sort=

    3. 12

      These sorts of comparative analyses are always useful and valuable contributions, even if they tell us more about the author than the languages under study. I was a little disappointed to find that this Yahtzee task isn’t listed on Rosetta Code or elsewhere; if we could see exactly how it’s specified, then we might be able to find a better-than-brute-force approach. That said, I do appreciate that the author shared their Nim version.

    4. 12

      I wonder how Zig & Fortran would have compared in this experiment. Yeah, Fortran is a thing, and it’s modern(ized) and fast. https://lobste.rs/t/fortran

      1. 6

        I’d be interested in seeing how Zig fares as well. Speed-wise I’d expect it to be the same as C (give or take a few seconds), but I wouldn’t expect the author to like it due to the need for manual memory management (even with defer, automatic memory leak checks, arena/fixed-size allocators, and other niceties).

    5. 12

      Excellent overview, but I will argue with the thesis of finding a perfect programming language. I used to spend a lot of time and effort looking for perfect programming languages, and never ever found one. Then I made a friend who is far smarter and a better programmer than I will ever be, and they told me “use the right tool for the right job”. It sounds like the job he wants a tool for is exploring ML algorithms and playing around with data. He makes this pretty clear in the article but doesn’t say so up front.

      But while re-making Yahtzeebot in C, it kinda grew on me? It’s so fundamental and pure. There are only 32 keywords. You’re intimately aware at every step that everything is just bytes of data somewhere in memory. How to interpret that is up to you. And you must. It’s utterly painful to do “simple things”, but I learned to appreciate all the sugar in other languages.

      He got The Zen Of C! I’m honestly impressed.

    6. 5

      How about Ada/Spark?

      1. 2

        Would be a bit challenging, given that the author wished Rust’s safety features were optional.

    7. 5

      A fun read! Hope he does Go, Zig, Haskell, Crystal, OCaml and Kotlin next.

    8. 5

      The LongAssDictionaryWordFunctionNames().

      I really don’t understand this as an objection to a language / standard library. It was listed here for C#, but I’ve also seen it levelled at Objective-C. In both cases, these are languages where I find I can generally open a random function in the middle of a codebase I’m unfamiliar with and have a reasonable chance of understanding what’s going on. Short identifiers optimise for typing code. Long identifiers optimise for reading code. If typing is a significant bottleneck in your process then there are larger problems than identifier names. If writing happens more often than reading for a codebase (unless it’s single-use throw-away scripts) then you have bigger problems in your process.

      1. 2

        The “fun” part is when long-ass names collide with coding standards that mandate 80-character lines. :-P

        I normally keep my lines to 100 chars, but when writing in Swift I’m tempted to use 120. Maybe I’ll buy a bigger monitor.

      2. 2

        I think it can depend on the person. I genuinely have problems reading code with names like these. Idk why exactly, but my eyes just stop parsing halfway through, and I end up confusing variable and function names when I’m trying to work with code in this style. I don’t have this problem with reading in general, nor do I have trouble with English (it is my native language). I only noticed I had problems reading this style over the last few years, when I started having to work with code written in it.

        imo it can be a valid concern for readability, not just writability (where I agree with you — people shouldn’t artificially shorten variable names to type less/avoid autocomplete).

    9. 4

      7 languages and none of them functional :’(

    10. 3

      I wonder if we will see the next wave of dynamic languages soon. We got Rust, Swift, and Go in the last decade. I feel like there could be a great new Perl/Python/Ruby out there with some modern flavor.

      1. 2

        It took Rust almost ten years to gain its momentum, so chances are good that the next Perl/Python/Ruby already exists.

      2. 2

        What “modern flavour” would you add to Python?

        1. 3

          The main reason to switch to a new language is to add new inabilities.

        2. 1

          I dunno, probably a lot of “historical baggage cleanup”. I’m reading through Crafting Interpreters so maybe I’ll just try to make it.

      3. 1

        I hope so! Julia is pretty great but there’s plenty more space out there to explore.

    11. 3

      But the big dealbreaker with Julia is it only compiles on the fly. That means you can’t just hand a compiled executable to someone. You must give them instructions to install Julia and all your dependencies first. That’s no good for anything you want to ship like a desktop app or a game.

      Can’t a language just be good at what it’s designed for without being a cure-all for every possible problem you could have? Can’t Julia just be good at being Julia?

      I’ve never used Julia, but it’s frustrating to read this; I don’t get why this has to be framed as a deal-breaker. The post opened specifically with the use case of understanding machine learning algorithms; this seems like a perfect fit. How is the inability to ship a desktop app relevant?

      1. 1

        Even machine learning code has to be shipped at some point, no? Or do data scientists call it a day once they have it working in a Jupyter notebook? I’m half-kidding, of course.

        1. 2

          I’ve never worked in machine learning, but my entire professional career has consisted of writing programs that get deployed to a server where you assume the runtime has already been installed.

          I think it’s good to know some languages that don’t require this, but it has not been a deal-breaker by any means.

        2. 1

          Or do data scientists call it a day once they have it working in a Jupyter notebook? I’m half-kidding, of course.

          In some companies, yes.

    12. 1

      Is there a plain English explanation of this algorithm? I can’t figure it out from looking at the code, since there are a lot of helper functions and caching logic.

    13. -1

      It’s hard to feel happy grinding out a contrived LINQ expression for the same result as a “practically English” list comprehension in Python.

      OP admits to being an inexperienced programmer, and sentences like this one reinforce that.

      Experienced programmers need not read here.