Threads for washort

  1. 1

    If the industry has realized encapsulation has caused more harm than good, why did private fields come to ECMAScript, and why is the standards body putting so much effort into the ergonomics (getters, setters, now private fields)?

    1. 2

      IMO it’s the rot from developers’ general rejection of prototypal inheritance and the implementation of ES6 features. JavaScript’s bastion of hope for holding on to market share is giving you the experience of writing like you do in other languages. JS has been generally feature complete for 90% of use cases since ES5.

      1. 2

        I would agree, but then TCO was proposed and major browsers didn’t implement it. The only thing I’ve been rooting for in recent times is the Records+Tuples proposal, as it’s immutable and has fast comparison.

      2. 1

        I can’t provide documentary evidence for this but I bet this is more of MarkM’s work to make JS more suitable for safe interactions between mutually suspicious objects. Being able to tightly control which clients can invoke a method is useful for capability-security style access control.

      1. 1

        It’s weird that the first thing you criticize in a Critical Retrospective is something syntax-related that you yourself call superficial. It makes it hard to take the rest of the post seriously.

        1. 29

          If syntax impacts understandability, is it actually superficial?

          1. 19

            Because I don’t think that’s the fault of the syntax. A huge part of the criticism is expectations/preferences and a lack of understanding of the trade-offs that made it the way it is. When Rust is different from whatever other language someone is used to, they compare the familiar with the unfamiliar (see Stroustrup’s Rule). But that’s like saying the Korean alphabet is unreadable because you can’t read any of it.

            People who don’t like Rust’s syntax usually can’t propose anything better than a bikeshed-level tweak that has other downsides that someone else would equally strongly dislike.

            For example, <> for generics is an eyesore. But if Rust used [] for generics, it’d make array syntax either ambiguous (objectively a big problem) or seem pointlessly weird to anyone used to C-family languages. Whatever else you pick is either ambiguous, clashes with meaning in other languages, or isn’t available in all keyboard layouts.

            The closure syntax || expr may seem like line noise, but in practice it’s important for closures to be easy to write and to keep the focus on their body. JS went from function () { return expr } to () => expr. Double-arrow closures aren’t objectively better, and JS users criticize them too. A real, serious failure of Rust regarding closures is that they have lifetime elision rules surprisingly different from standalone functions’, and that is a problem deeper than the syntax.

            Rust initially didn’t have the ? shortcut for the if err != nil { return nil, err } pattern, and it had a low signal-to-noise ratio. Rust then tried removing boilerplate with the try!() macro, but it worked poorly with chains of fallible function calls (you’d have a line starting with try!(try!(try!(… and then have to figure out where each one’s closing paren goes). Syntax has lots of trade-offs, and even if the current one isn’t ideal in all aspects, that doesn’t mean alternatives would be better.
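
            A rough sketch of the contrast (the config-file name and function are made up); the commented-out line is roughly what the old macro forced you to turn inside out:

            use std::fs::File;
            use std::io::{self, Read};

            fn read_config() -> io::Result<String> {
                let mut s = String::new();
                // Pre-`?` style: the same chain had to be nested as
                //     try!(try!(File::open("config.toml")).read_to_string(&mut s));
                // With `?`, each fallible step stays in reading order:
                File::open("config.toml")?.read_to_string(&mut s)?;
                Ok(s)
            }

            fn main() {
                match read_config() {
                    Ok(cfg) => println!("read {} bytes", cfg.len()),
                    Err(e) => println!("could not read config: {e}"),
                }
            }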

            And there are lots of things that Rust got right about the syntax. if doesn’t have a “goto fail” problem. Function definitions are greppable. Syntax of nested types is easy to follow, especially compared to C’s “spiral rule” types.
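
            A tiny, made-up illustration of that left-to-right reading order for nested types:

            // “An optional, growable list of callbacks from &str to i64” reads
            // left to right, outermost type first.
            type Callbacks = Option<Vec<fn(&str) -> i64>>;

            fn length(s: &str) -> i64 {
                s.len() as i64
            }

            fn main() {
                let mut fns: Vec<fn(&str) -> i64> = Vec::new();
                fns.push(length); // the fn item coerces to a plain fn pointer here
                let cbs: Callbacks = Some(fns);
                if let Some(list) = &cbs {
                    assert_eq!(list[0]("abc"), 3);
                }
            }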

            1. 14

              I think a lot of criticism about syntax is oblique. People complain about “syntax” because it’s just… the most convenient way to express “I find it hard to learn how to write correct programs, and I find it hard to interpret written programs, even after substantial practice”.

              Lots of people complain that Common Lisp syntax is hard. Lisp syntax is so easy that you can write a parser in a few dozen lines. Common Lisp has a few extra things but, realistically, the syntax is absolutely trivial. But reading programs written in it is not, even after substantial practice, and I get that (as in, I like Common Lisp, and I have the practice, and I get that).

              Same thing here. A lot of thought went into Rust’s syntax, probably more than in, say, C’s syntax, if only because there was a lot more prior art for Rust to consider. So there’s probably not much that can be done to improve Rust’s syntax without basically inventing another language. That doesn’t take away from the fact that the language is huge, so it has a syntax that’s unambiguous and efficient but also huge, so there’s just a whole lot of it to learn and keep in your head at once. I get it, I’ve been writing Rust on and off but pretty much weekly for more than a year now and I still regularly need to go back to the book when reading existing code. Hell, I still need it when reading existing code that I wrote. You pay a cognitive price for that.

              1. 3

                “I find it hard to learn how to write correct programs . . .

                Do you believe “correctness” is a boolean property of a program?

                1. 1

                  I do, as in, I think you can always procure a “correctness oracle” that will tell you if a program’s output is the correct one and which, given a log of the program’s operations, can even tell you if the method through which it achieved the result is the correct one (so it can distinguish between correct code and buggy code that happens to produce correct output). That oracle can be the person writing the program or, in commercial settings, a product manager or even a collective – a focus group, for example. However:

                  • That oracle works by decree. Not everyone may agree with its edicts, especially with user-facing software. IMHO that’s inherent to producing things according to man-made specs. There’s always an “objective” test to the correctness of physics simulation programs, for example, but the correctness of a billing program is obviously tied to whatever the person in charge of billings thinks is correct.
                  • The oracle’s answers may not be immediately comprehensible, and they are not necessarily repeatable (as with the Oracle at Delphi, it’s probably best to consider the fact that its answers do come from someone who’s high as a kite). IMHO that’s because not all the factors that determine a program’s correctness are inherent to the program’s source code, and presumably, some of them may even escape our quantitative grasp (e.g. “that warm fuzzy feeling” in games). Consequently, not all the knowledge that determines if a program is correct may reside with the programmer at the time of writing the code.

                  More to the point, I think it’s always possible to say if something is a bug or a feature, yes :-D.

                  1. 1

                    Wow! I guess I can just say that I wish I worked in your domain! 😉 I can’t think of more than a handful of programs I’ve written in my entire life which have a well-defined notion of correct, even in part. Almost all of my programs have been approximate models of under-specified concepts that can change at the whims of their stakeholders. Or, as you say it,

                    the correctness of a billing program is obviously tied to whatever the person in charge of billings thinks is correct.

                    Exactly!

                    not all the knowledge that determines if a program is correct may reside with the programmer at the time of writing the code.

                    In my experience it rarely exists anywhere! Not in one person, or many, or even conceptually.

                    1. 1

                      I can’t think of more than a handful of programs I’ve written in my entire life which have a well-defined notion of correct, even in part.

                      Oh, don’t get me wrong – that describes most of the code I wrote, too, even some of the code for embedded systems :-D. It may well be the case that, for many programs, the “correct” way to do it currently escapes everyone (heh, web browsers, for example…) But I am content with a more restricted definition of correctness that embraces all this arbitrariness.

                2. 2

                  Well, there was a lot of prior art even when C was created, and they actively chose to disregard it. They also chose to disregard discoveries in C itself in the 70s and 80s, freezing the language far too early considering the impact it would have in the following decades.

                3. 4

                  it’s like saying the Korean alphabet is unreadable, because you can’t read any of it.

                  But there is a reasonably objective language-difficulty ranking (from the perspective of native English speakers), and Korean is squarely in the most-difficult tranche, I guess in no small part due to its complex symbology, at least in comparison to the Roman alphabet. Are you saying that this dimension of complexity is, net, irrelevant?

                  1. 11

                    Korean uses the Hangul alphabet, which is very easy to learn. It’s much simpler than our alphabet. You can learn Hangul in a day or two. You’re thinking of Japanese, which is a nightmare based on people writing Chinese characters in cursive and italics while drinking a bottle of sake.

                    1. 1

                      some simplified hanzi does look like kanji, but I would appreciate an example of a japanese character looking like a cursive or italic version of a chinese glyph before I go on to tell your analogy to everyone at parties.

                      1. 1

                        It’s not an analogy. It’s the historical truth of kana: https://en.wikipedia.org/wiki/Kana. Japanese kanji and hanzi are mostly the same modulo some font changes and simplification in the 20th c.

                        1. 1

                          I meant the drunken japanese people part.

                          1. 3

                            We can’t prove they weren’t drunk. :-)

                    2. 11

                      from the perspective of English-native speakers

                      I think that’s what they were getting at; there’s nothing inherently difficult about it, but your background as an English speaker makes it look hard to read, when objectively speaking it’s dramatically simpler than English due to its regularity and internal logic.

                      1. 2

                        I guess I would say that there is no “objectively speaking” in this domain? Like, there is no superhuman who can look at things invariant of a language background.

                        1. 3

                          If you’re talking about “easy to learn” then I agree.

                          If you’re talking about simplicity, then I disagree. The number of rules, consistency, and prevalence of exceptions can be measured without reference to your background.

                      2. 7

                        I specifically mentioned the Hangul alphabet (strictly speaking, an alphabet arranged into syllable blocks), not the language. The Korean language (vocabulary, grammar, spoken communication) may be hard to learn, but the alphabet itself is actually very simple and logical. It’s modern, and it has been specifically designed to be easy to learn and a good fit for the Korean language, rather than being a millennia-old historical borrowed mash-up like many other writing systems.

                        I think it’s a very fitting analogy to having an excellent simple syntax for a complex programming language. You may not understand the syntax/alphabet at all, but it doesn’t mean it’s bad. And the syntax/alphabet may be great, but the language it expresses may still be difficult to learn for other reasons.

                        With Rust I think people complaining about the syntax are shooting the messenger. For example, T: for<'a> Fn(&'a str) makes lifetime subtyping contravariant for the loan in the argument of a function item trait in a generic trait bound. Is it really hard because of the syntax? No. Even when it’s expressed in plain English (with no computer language syntax at all) it’s unintelligible techno-babble you wouldn’t know how to use unless you understand several language features it touches. That for<'a> syntax is obscure even by Rust’s standards, but syntactically it’s not hard. What’s hard is knowing when it needs to be used.
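
                        To make that concrete, here’s a minimal, made-up example of what such a higher-ranked bound expresses (with_local and trim are invented names); the syntax is the easy part, and knowing that you need it is the hard part:

                        // The bound says: `f` must accept a borrow of *any* lifetime and
                        // return a borrow tied to that same lifetime, because we hand it
                        // a reference to our own local data.
                        fn with_local<F>(f: F) -> String
                        where
                            F: for<'a> Fn(&'a str) -> &'a str,
                        {
                            let local = String::from("  hello  ");
                            f(&local).to_string()
                        }

                        // A plain function satisfies the bound for every lifetime.
                        fn trim(s: &str) -> &str {
                            s.trim()
                        }

                        fn main() {
                            assert_eq!(with_local(trim), "hello");
                        }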

                      3. 4

                        People who don’t like Rust’s syntax usually can’t propose anything better than a bikeshed-level tweak that has other downsides that someone else would equally strongly dislike.

                        The problem with Rust’s syntax isn’t that they made this or that wrong choice for expressing certain features; it’s that there’s simply far too much of it. “Too many notes,” as Joseph II supposedly said.

                        1. 4

                          I agree with this, which is why I object to blaming the syntax for it. For a language that needs to express so many features, Rust’s syntax is doing well.

                          Rust chose to be a language that aims to have strong compile-time safety, low-level control, and nearly zero run-time overhead, while still having higher-level abstractions. Rust could drop a ton of features if it offered less control and/or moved checks to run-time or relaxed safety guarantees, but there are already plenty of languages that do that. The novelty of Rust is in not compromising on any of these, and that came at the cost of having lots of features to control all of these aspects.

                          1. 4

                            You can have many features without a lot of syntax. See Lisp.

                            1. 3

                              I think Lisp gets away here only on a technicality. It can still have plenty of obscure constructs to remember, like CL’s (loop).

                              The example from the article isn’t really any simpler or more readable if you lispify it:

                              (try (map-result (static-call (def-lifetime a-heavy (:Trying :to-read a-heavy)) 
                                  syntax (lambda (like) (can_be this maddening))) ()))
                              

                              It could be made nicer if it was formatted in multiple lines, but so would the Rust example.

                              1. 2

                                That works if you pick the feature set for simplicity. Rust had other goals.

                                1. 4

                                  I literally just said that simple syntax doesn’t necessitate simple features.

                              2. 1

                                I don’t know. I strongly suspect that in the coming years, we will see new languages that offer the same safety guarantees as Rust, also with no runtime, but with syntax that is simpler than Rust’s. Lately I’ve seen both Vale and Koko exploring this space.

                          2. 12

                            The syntax complexity of Rust is actually a big factor in why I abandoned my effort to learn it. I was only learning on my own time, and came to the realization I had a long way to go before I’d be able to pick apart a line like the author’s example.

                            So for me, it wasn’t just superficial.

                            1. 5

                              The syntax complexity of Rust is actually a big factor in why I abandoned my effort to learn it.

                              Same.

                            2. 4

                              If syntax impacts understandability, is it actually superficial?

                              I’d say so.

                              The problem is that “this syntax is ugly” is a completely subjective judgement largely influenced by the peculiarities of one’s own background. Coming from Perl and Ruby, I happen to find Rust pleasant to look at and easy to read, whereas I find both Python and Go (which many other people prefer) unreasonably frustrating to read and just generally odd-looking. It’s not that Python and Go are doing anything objectively less understandable, per se, but they certainly have an unfamiliar look, and people react to unfamiliarity as if it were objectively incorrect rather than just, well, making unfamiliar choices with unfamiliar tradeoffs.

                              It’s pure personal preference, and framing one’s personal preferences as something that has objective reality outside oneself and which some other party is doing “wrong” is, to me, the definition of a superficial complaint.

                              1. 8

                                It’s pure personal preference

                                Is it pure personal preference? I dunno. Personal preference is a part of it, but I don’t think it’s controversial to say that Python is in general easier to understand than the q language, for example. Human cognition and coherence actually abide by pretty well-defined rules, at the macro scale. Sigils are harder to grok than words. And so on.

                                1. 12

                                  Personal preference is a part of it, but I don’t think it’s controversial to say that Python is in general easier to understand than the q language, for example.

                                  Maybe, maybe not. What I do think is that if we’re going to try to make objective claims, we need some real objective measures and measurements. These conversations tend to be nothing but pseudoscience-y assertions and anecdata masquerading as irrefutable facts.

                                  Human cognition and coherence actually abides pretty well-defined rules, at the macro scale. Sigils are harder to grok than words.

                                  (In no way am I trying to pick on you, but) Case in point: “Sigils are harder to grok than words” feels like a strong objective claim but… is this actually in any way true? 馬 is a much more complicated symbol than $ or @ or ->, but we have something like 1.5 billion people in the world happily reading and writing in languages that require a knowledge of thousands of such symbols to achieve literacy, and they turn out to somehow show lower rates of dyslexia than users of alphabet-based languages while doing so!

                                  Sigil-y writing systems are indeed actually quite common throughout history, so again we have this thing where what feels like a perfectly simple fact actually looks a heck of a lot like a simple case of familiarity when you scratch it just a little bit. The dominance of a few alphabetic writing systems outside of Asia could simply be a historical accident for all we know – there are no strong results from cognitive science supporting any claim that it’s objectively more fit for “human cognition”. We really don’t have any idea whether words are simpler or more efficient than symbols, or whether Python is a global maximum of readability, a local minimum, or anything in between. There are almost no good studies proving out any of this, just a lot of handwaving and poorly supported claims based on what people happen to like or be most familiar with.

                                  1. 2

                                    馬 is a word. It happens to be written as a single character, but that doesn’t make it punctuation.

                                    1. 2

                                      I’m aware. I speak Japanese.

                                      “Sigil” does not mean “punctuation”. It actually means something like “symbol with occult powers”, but in a programming context I think we can understand it as “symbol that conveys an important functional meaning”, like -> being the symbol meaning “returns a value of the following type”. The point being that OP was being pretty silly when they wrote that it’s a “rule of the human mind” that negation is easier to understand written out as “not” rather than as !, when the existence of a billion-plus people using languages with things like “不” at least weakly implies that a single symbol for “not” is no more mentally taxing to understand.

                                      (that in many programming languages most sigils are punctuation is mostly just an artifact of what’s easy to type on a western keyboard, but it’s by no means the rule. See: APL, which can be chockfull of non-punctuation sigils)

                                      1. 1

                                        The point is that the symbol has a natural pronunciation, which makes it easy to read for a Japanese speaker. In contrast, when I see !foo or &foo or $foo, my mind just makes an unintelligible noise followed by “foo”, so I have to concentrate on what the symbol means.

                                        1. 1

                                          But these symbols all do have actual pronunciations that are generally specified in the language or are established conventionally, e.g. !foo is read “not foo”, &foo is “address-of foo” (at least in C) or “ref foo” in Rust, etc. Good learning resources almost always provide a reading when they introduce the symbol (Blandy et al.’s Programming Rust is very good about this, for instance).

                                          Also fwiw not everyone “vocalizes” what they’re reading in their head, that’s actually not a universal thing.

                                    2. 1

                                      When I speak about “understandability” or whatever I’m not making a claim against an abstract Ur-human raised in a vacuum, I’m speaking about humans as they exist today, including cultural and historical influences, and measured on a demographic (macro) scale, rather than an individual (micro) scale. That is, I’m making a descriptive argument, not a normative one. In this context, “familiarity” is I guess a totally reasonable thing to account for! People understand better the things they are familiar with. Right?

                                      1. 3

                                        That is, I’m making a descriptive argument, not a normative one.

                                        It’s not a very good descriptive argument, though, insofar as you’re really failing to describe a lot of things in order to make your argument fit the conclusion that “sigils are harder to grok than words”.

                                        Even if we confine ourselves to Western English speakers… what about mathematics? Why does almost everyone prefer y = x+1 to Cobol’s ADD 1 TO X GIVING Y? It’s more familiar, right? There doesn’t seem to be any long-term push to make Mathematics more wordy over time (most of the established symbols have hung around for hundreds of years and had ample opportunity to get out-competed by more grokkable approaches, if word-based approaches were found by people to be any more grokkable), so if we’re describing the long-term pressures on artificial languages I don’t think “sigils are harder to grok than words” is an accurate descriptive statement.

                                        In this context, “familiarity” is I guess a totally reasonable thing to account for! People understand better the things they are familiar with. Right?

                                        Well, sure. But “in some contexts words are more familiar than sigils to western audiences” is a much different claim than “sigils are harder to grok than words” in any sense, and it leaves a lot more room to talk about sigils in programming languages in a rational way. Things like “dereferencing pointers” aren’t really familiar to anyone in words or sigils, so it’s not obvious to me that x = valueat y is any more or less “correct”/“intuitive”/“grokable” than x = *y.

                                        If anything, given the relative unpopularity of the Pascal/Ada & Cobol language families, a certain amount of “unfamiliar concepts compressed into sigils” seems to be appreciated by programmers at large. But other people disagree, which seems to point at this mostly being a superficial argument over tastes and perhaps how much maths background one has, rather than some kind of concrete and objective variation in any measurable metric of “understandability”.

                                        1. 2

                                          what about mathematics?

                                          Well, I think this substantiates my point? In the sense that way more people can read prose than can understand nontrivial math. Right?

                                          “in some contexts words are more familiar than sigils to western audiences” is a much different claim than “sigils are harder to grok than words”

                                          Not some but most or even almost all, depending on just how many sigils we’re talking about.

                                          Authors generally don’t invent new languages in order to express their literary works; they take the language(s) they already know, with all their capabilities and constraints, and work within those rules. They do this because their goal is generally not to produce the most precise representation of their vision, but instead to produce something which can be effectively consumed by other humans. The same is true of programming.

                                          1. 2

                                            Well, I think this substantiates my point? In the sense that way more people can read prose than can understand nontrivial math. Right?

                                            More people can read prose (in general) than the prose portions of an advanced Mathematics text (in specific). It’s not the orthography of mathematics that’s the limiting factor here.

                                            Authors generally don’t invent new languages in order to express their literary works; they take the language(s) they already know, with all their capabilities and constraints, and work within those rules. They do this because their goal is generally not to produce the most precise representation of their vision, but instead to produce something which can be effectively consumed by other humans. The same is true of programming.

                                            Which speaks to my point. Programming uses “sigils” because in many cases these sigils are already familiar to the audience, or are at least no less familiar to the audience for the concepts involved than anything else would be, and audiences seem to show a marked preference for sigils like { … } vs begin … end, and y = x + 1 seems pretty definitely preferred for effective consumption by audiences over ADD 1 TO X GIVING Y, etc.

                                            At any rate, we seem to have wandered totally away from “sigils are objectively less readable” and fully into “it’s all about familiarity”, which was my original point.

                                            1. 2

                                              I’m not claiming that sigils are objectively less readable than prose. I’m objecting to the notion that syntax is a superficial aspect of comprehension.

                                              1. 1

                                                You’ve made claims that terse syntax impedes comprehension (“Sigils are harder to grok than words”), where the reality is in the “it depends” territory.

                                                For novices, mathematical notation is cryptic, so they understand prose better. But experts often prefer mathematical notation over prose, because its precision and terseness makes it easier for them to process and manipulate it. This is despite the fact that the notation is objectively terrible in some cases due to its ad-hoc evolution — even where the direction is right, we tend to get details wrong.

                                                Forms of “compression” for common concepts keep appearing everywhere in human communication (e.g. in spoken languages we have contractions & abbreviations, and keep inventing new words for things instead of describing them using whole phrases), so I don’t think it’s an easy case of “terse bad verbose good”, but a trade-off between unfamiliarity and efficiency of communication.

                                                1. 1

                                                  I agree with all of your claims here.

                                                2. 0

                                                  I’m objecting to the notion that syntax is a superficial aspect of comprehension.

                                                  It’s not fully superficial, but the “the * operator should be spelled valueat / {} should be spelled begin end” stuff is a superficial complaint unless and until we have objective, measurable reasons to favor one syntactical presentation over the other. Otherwise it’s just bikeshedding preferences.

                                                  But I’m sorry, let’s not continue this. I’m not buying the goalpost move here. You wrote that human cognition obeys “well-defined rules. Sigils are harder to grok than words”. That’s pretty obviously a claim that “sigils are objectively less readable than prose” due to these “well defined rules of cognition”. That’s the kind of handwavey, pseudoscience-as-fact discourse I was objecting to and pointing out these discussions are always full of.

                                                  I’ve pointed out that this is, in several ways, basically just a load of hot air inconsistent with any number of things true of humans in general (symbol-based writing systems) and Western readers in particular.

                                                  Now your “well-defined rules of human cognition which include that sigils are less readable than words” weren’t trying to be an objective claim about readability?

                                                  Sure. I’m done. Have a good one.

                                2. 24

                                  I would warmly suggest making an effort to hit Page Down twice to get past the syntax bit and read the rest of the post, though, because it’s a pretty good and pragmatic take, based on the author’s experience writing and maintaining a kernel. Xous is a pretty cool microkernel which runs on actual hardware, and it’s a pretty good test of Rust’s promises in terms of safety and security.

                                  1. 10

                                    It’s interesting but also has the weird dichotomy that the only two choices for systems programming are C or Rust. C++ also has a lot of the strengths that the author likes about Rust (easy to write rich generic data structures, for example), and has a bunch of other things that are useful in a kernel, such as support in the standard library for pluggable memory allocators, mechanisms for handling allocation failure, a stable standard library API, and so on.

                                    1. 5

                                      I had exactly the same thought. C++ burnt through a lot of good will in the C++98 era, when it was admittedly a hot mess (and all the compilers were buggy dumpster fires). Now on one hand we have people who publicly and loudly swore off touching C++ ever again based on that experience (and even more people parroting the “C++ is a mess” statement without any experience), and on the other the excitement of Rust, with all the hype making people invest a large amount of effort into learning it. But the result, as this article shows, is often not all roses. I believe oftentimes the result would have been better if people had invested the same amount of time into learning modern C++. Oh well.

                                      1. 5

                                        Writing C++ is like writing Rust but with your whole program wrapped in unsafe{}. You have to manage your memory and hope you did it right.

                                        1. 4

                                          As I hope this article clearly demonstrates, there is a lot more to a language choice than memory safety. Also, FWIW, I write fairly large programs and I don’t find memory management particularly challenging in modern C++. At the same time, I highly doubt that these programs can be rewritten in Rust with the result having comparable performance, compilation times, and portability properties.

                                          1. 1

                                            What would hinder Rust from having comparable performance, compilation times, and portability properties, in your opinion?

                                            1. 1

                                              In summary:

                                              Performance: having to resort to dynamic memory allocations to satisfy the borrow checker.

                                              Compilation: in Rust almost everything is a template (parameterized over lifetimes).

                                              Portability: a C/C++ toolchain is available out of the box. I also always have an alternative compiler for each platform.

                                        2. 4

                                          string_view of temporaries makes dangling pointers instead of compilation errors. optional allows unchecked dereferencing without warnings, adding more UB to modern C++. I haven’t met a C++ user who agrees these are fatal design errors. Sorry, but this is not up to safe Rust’s standards. From a Rust perspective, modern C++ continues to add footguns that Rust was designed to prevent.

                                          1. 1

                                            I haven’t met a C++ user who agrees these are fatal design errors.

                                            I haven’t used string_view much so I can’t categorically say it’s not a design error (it very well may be). But for optional I can certainly say it is a trade-off: you have the choice of checked access (optional::value()) or unchecked access, and you decide which to use. I personally always use unchecked and have never had any problems. Probably because I pay attention to what I am writing.

                                            1. 5

                                              This is the difference in approaches of the two languages. In C++ if the code is vulnerable, the blame is on the programmer. In Rust if the code is vulnerable, Rust considers it a failure of the language, and takes responsibility to stop even “bad” programmers from writing vulnerable code. I can’t stress enough how awesome it is that I can be a careless fool, and still write perfectly robust highly multi-threaded code that never crashes.

                                              In terms of capabilities, Rust’s Option is identical, but the default behavior is safe, and there’s a lot of syntactic sugar (match, if let, tons of helper methods) to make safe usage the preferred option even for “lazy” programmers. The UB-causing version is written unsafe { o.unwrap_unchecked() }, which is deliberately verbose and clunky, so that the dangerous version stands out in code reviews, unlike subtle * or -> that are commonly used everywhere.
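
                                              A minimal, made-up sketch of that contrast (describe is an invented helper):

                                              fn describe(len: Option<usize>) -> String {
                                                  // The default, safe ways to get at the value:
                                                  match len {
                                                      Some(n) => format!("{n} bytes"),
                                                      None => "unknown".to_string(),
                                                  }
                                              }

                                              fn main() {
                                                  let maybe = "hello".find('l'); // Option<usize>, Some(2) here
                                                  println!("{}", describe(maybe));

                                                  if let Some(i) = maybe {
                                                      println!("first 'l' at index {i}");
                                                  }

                                                  // The UB-capable version has to be spelled out loudly, so it
                                                  // stands out in review:
                                                  let i = unsafe { maybe.unwrap_unchecked() };
                                                  assert_eq!(i, 2);
                                              }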

                                              Rust’s equivalent of string_view is &str, and it’s practically impossible to use the language without embracing it, and it’s borrow-checked, so it won’t compile if you misuse it.

                                        3. 2

                                          Eh, maybe the author just didn’t write that much low-level/kernel code in C++. I try not to read too much into these things. If I were to start learning F# tomorrow, then tried to write a similar piece two years from now, I’d probably end up with something that would have the weird dichotomy that the only two choices for functional programming are Scheme and F#.

                                          1. 2

                                            Scheme is honestly so hard to do functional programming in. It’s shockingly imperative by nature, given its reputation.

                                        4. 3

                                          I did read the entire post, but I wanted to voice that leading with the wrong thing makes people not take you seriously, especially when the author says it doesn’t matter but still decided to put it first?

                                          1. 3

                                            I may not be interpreting this correctly but I didn’t take the author qualifying it as a superficial complaint to mean that it doesn’t matter. Based on the issues he mentions regarding the readability of Rust macros, for example, I think it’s superficial as in “superficial velocity”, i.e. occurring or characterising something that occurs at the surface.

                                            (But note that I may be reading too much into it because reviewing and auditing Rust code that uses macros is really not fun so maybe I’m projecting here…)

                                        5. 20

                                          The final sentence of that section said, in summary, “Rust just has a steep learning curve in terms of syntax”. A critical retrospective that does not mention the horrendous syntax or its learning curve would lack credibility.

                                          1. 5

                                            I find Rust’s syntax perfectly clear and sensible. I am not the only one.

                                          2. 9

                                            I liked that it starts with that TBH. Rust’s dense syntax is probably the first impression of the language for many people—it was for me at least. And putting the author’s first impression first in the article makes it read more like a person telling a story, rather than a list of technical observations sorted by importance.

                                            I like to read stories by humans; I find it easier to connect with the author and therefore to retain some of what they say. YMMV of course.

                                            1. 2

                                              And if they think rust is hard to read, wait until they discover lisp!

                                              (I know this author probably is already familiar with lisp and many other things, but the comparison stands.)

                                              1. 6

                                                I find it the other way around. If you temporarily put aside the issues of special forms and macros, the syntax of Lisp is extremely minimal and regular (it’s almost all lists and atoms). So Lisp stands at kind of an opposite extreme from Rust, with more familiar languages somewhere in between.

                                                1. 5

                                                  Nim still has a dumpLisp routine to show you the shape of an AST you may want to manipulate.

                                                  Syntax can be very personal, but I strongly prefer Nim’s to Rust’s and see no compelling language feature of Rust to tempt me away, though Nim is not without its own issues.

                                                  1. 2

                                                    Nim isn’t really comparable is it? More like Go with a GC etc?

                                                    1. 2

                                                      “How comparable” mostly depends upon what you mean by “a GC etc”. Nim’s (AutomaticRC/OptimizedRC) memory management seems fairly similar to Rust, but I am no Rust expert and most PLs have quite a few choices either directly or ecosystem-wide. (Even C has Boehm.) There is no “GC thread” like Java/Go. The ORC part is for cycle collection. You can statically specify {.acyclic.}, sink, lent, etc. in Nim to help run-time perf. Some links that go into more detail are: https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html https://nim-lang.org/blog/2020/12/08/introducing-orc.html

                                                      1. 0

                                                        “Go with a GC” is Go.

                                                        1. 1

                                                          Yes, that’s why I said it

                                                    2. 2

                                                      The complaint in the article is about syntax that’s noisy and hard to read, though, and Lisp is definitely that: even if it is simple and regular, that simplicity leads to everything looking the same.

                                                      1. 3

                                                        I always wondered why indentation-based reader macros (SRFI-49 is a simple one) never became popular. I can see “whys” for big macro writer types since they often want to pick apart parse trees and this adds friction there. Most programmers are not that (in any language). My best guess is a kind of community dynamic where tastes of village elders make beginners adapt more. Or wishful promises/hopes for beginners to become elders? Or a bit of both/etc.?

                                                        Of course, part of the regularity is “prefix notation” which can remain a complaint.

                                                  2. 1

                                                    It makes it hard to take the rest of the post seriously

                                                    As x64k said, the post is pretty well made, I think, and offers some honest criticism. If anything, you could criticize the bad blog layout, which has big white bars on both mobile and desktop, making it hard to read from any device.

                                                  1. 1

                                                    A previous effort in this direction was OneOnOneEmacs.

                                                    1. 6

                                                      A rationale for this project’s development.

                                                      1. 3

                                                        I can’t imagine that writing a pure Nix parser for .cabal files would be harder than writing a Nix backend for the PureScript compiler, but I guess that was more of an excuse to start a cool project than anything.

                                                        1. 3

                                                          Hi, Cabal dev here: I can imagine this very well. :)

                                                          1. 1

                                                            I had to go look at the package description documentation for Cabal after reading this, and after seeing the “Using explicit braces rather than indentation for layout” section I think I understand, lol

                                                      1. 3

                                                        The general form of Conway’s Law is that form of organization determines what software it can produce.

                                                        What form of organization will produce the user-empowering tools we desire?

                                                        1. 4

                                                          This is very interesting. I’m currently building an ocaps version of the OCaml standard library (as part of moving everything to use effects - https://github.com/ocaml-multicore/eio). I’ll see what techniques I can copy from this!

                                                          1. 2

                                                            Are you familiar with this previous effort to adopt ocap ideas to OCaml? https://www.hpl.hp.com/techreports/2006/HPL-2006-116.html

                                                            It used a source code verifier to limit programs to an ocap-safe subset.

                                                            1. 1

                                                              Yes; that paper is linked from the README ;-)

                                                              I’ve never actually used Emily though. I got the impression from the paper that it was very much a prototype.

                                                            2. 1

                                                              Oooh, very cool that you are using effects and handlers for this! This is something I think they’d be really great at, and would love to see more languages take advantage of! See also: Why Is Crochet Object Oriented?

                                                              One question about how effects work in Multicore OCaml: I don’t seem to see the effects appearing in the type. Eg. in lib_eunix/eunix.mli we have the following type given for alloc:

                                                              val alloc : unit -> Uring.Region.chunk
                                                              

                                                              But in lib_eunix/eunix.ml there is this implementation:

                                                              effect Alloc : Uring.Region.chunk
                                                              let alloc () = perform Alloc
                                                              

                                                              Shouldn’t Alloc appear somewhere in the type of alloc, or is that not tracked by Multicore OCaml?

                                                              1. 2

                                                                Yes, eventually. There’s a separate effort to get effects into the type system (see https://www.janestreet.com/tech-talks/effective-programming/). But that isn’t ready yet, so we’re using a version of the compiler with only the runtime effects for now.

                                                                1. 1

                                                                  Yeah, that’s the video I was thinking of! And yeah this explains why I’ve been confused reading the Multicore OCaml examples, so thanks for filling in the gaps in my understanding! Cool that the work can be decoupled at least (the runtime and library stuff alone seems like massive effort).