Threads for idrougge

  1. 5

    This seems like it’s probably misleading, because it measures the overhead of the high-level data structure without accounting for how the in-memory size varies with what you put in there. If you only ever store (Python) ints and short strings containing code points from the latin-1 range, it’ll look very different than if you store, say, longer strings containing code points outside latin-1 (which will at least double the storage for the string) or more complex types like lists or dicts.

    The string storage varies because internally Python always uses a fixed-width encoding, but chooses it per-string as the narrowest one capable of encoding the widest code point in the string — 1, 2, or 4 bytes.
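
    A rough probe of that per-string choice using `sys.getsizeof` (exact object overheads vary by CPython version, but the marginal cost per character tracks the chosen width):

    ```python
    import sys

    def per_char(ch, n=1000):
        # Marginal storage per code point: grow the string and divide the
        # size difference by the number of added characters.
        return (sys.getsizeof(ch * 2 * n) - sys.getsizeof(ch * n)) / n

    print(per_char('a'))   # ASCII          -> 1.0 byte per code point
    print(per_char('ĉ'))   # U+0109         -> 2.0 bytes
    print(per_char('😀'))  # U+1F600 emoji  -> 4.0 bytes
    ```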

    1. 2

      The string storage varies because internally Python always uses a fixed-width encoding, but chooses it per-string as the narrowest one capable of encoding the widest code point in the string — 1, 2, or 4 bytes.

      I thought that Python used UTF-8 internally. TIL, I guess.

      https://peps.python.org/pep-0393/

      1. 2

        As the PEP says, Python used to do either 2-byte (with surrogates) or 4-byte internal Unicode, and the choice between them was a compile-time flag baked into the interpreter. Then Python 3.3 switched to the dynamic per-string encoding choice used ever since.

        The advantages of this are that strings are always fixed-width storage — which is convenient for a lot of interactions with them, both at the Python and underlying C level — but without the inefficiency of UTF-32. There are cases where this makes better use of memory than “UTF-8 always” (since Python can get the first 256 code points in 1-byte encoding while UTF-8 can only get the first 128), and cases where it doesn’t (since a single code point past 256 switches the whole string to a wider encoding).
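
        The widening effect is easy to demonstrate (a sketch using `sys.getsizeof`; exact byte counts vary by CPython version, so only the ratio is shown):

        ```python
        import sys

        ascii_s = 'a' * 1000           # fits the 1-byte-per-char representation
        mixed_s = 'a' * 999 + '😀'     # one emoji forces 4 bytes for *every* char
        print(sys.getsizeof(ascii_s))
        print(sys.getsizeof(mixed_s))  # roughly 4x the ASCII string's size
        ```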

        1. 3

          What’s an example of an algorithm where fixed width code points is actually useful?

          1. 1

            Iterating is a lot easier with fixed width. Indexing is a lot easier with fixed width. Calculating length is a lot easier with fixed width.

            I know a lot of people really like to say that you shouldn’t be allowed to do those things to Unicode, but there are enough real-world use cases which require them that there’s no reason not to have them be that little bit nicer.

            Also it avoids the issue of how to expose variable-width storage to the programmer; in the old days before PEP 393, a “narrow” (2-byte Unicode) build of Python would leak lone surrogates up to the programmer. And if you’re going to normalize those away, you have a complex API-design problem of how to achieve that. Currently Python strings are iterables of code points, not of bytes or code units, which is a cleaner abstraction all around.
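
            For instance, with fixed-width storage all of these operate on code points in the obvious way, no matter which characters the string contains (illustrative Python):

            ```python
            s = 'héllo😀'                  # ASCII, Latin-1, and astral code points mixed
            print(len(s))                  # 6 -- counts code points, not bytes
            print(s[5])                    # O(1) indexing lands on the emoji, never mid-character
            print(len(s.encode('utf-8')))  # 10 -- the UTF-8 byte length differs
            ```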

        2. 1

          Does any language use UTF-8 internally? That seems like a very slow choice if you are doing a lot of string manipulation.

          1.  

            Swift does for most purposes. If that is too slow, you are probably not dealing with proper strings.

            1.  

              It doesn’t seem like it’s actually storing anything as UTF-8 in memory, but rather a collection of Character objects, which, from reading the documentation, seems to me to be a subclass of an integer. That is pretty much the OOP way of implementing strings, as far as I can tell.

              1.  

                Storage is UTF-8 according to https://www.swift.org/blog/utf8-string/

                A Character is a Unicode grapheme cluster and is realised on a much higher level than pure storage.

        3. 2

          Also, I don’t know if it’s still the case, but it at least used to be true that Python’s dict aggressively resizes to keep the hash table sparse, so even if you use the lowest-memory-overhead container you can find, putting even a single dict inside that container is likely to wipe out all your careful micro-optimizing choices.
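
          This is easy to watch with `sys.getsizeof` (exact byte counts vary across CPython versions; the jumpy, over-allocating growth pattern is the point):

          ```python
          import sys

          d = {}
          prev = sys.getsizeof(d)
          print(f'empty dict: {prev} bytes')
          for i in range(12):
              d[i] = i
              size = sys.getsizeof(d)
              if size != prev:  # the table just resized to stay sparse
                  print(f'after {len(d)} items: {size} bytes')
                  prev = size

          # A tuple of the same 12 items is far more compact.
          print(f'12-tuple: {sys.getsizeof(tuple(range(12)))} bytes')
          ```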

        1. 1

          It’s weird that the first thing you criticize in a Critical Retrospective is something syntax-related that you yourself call superficial. It makes it hard to take the rest of the post seriously.

          1. 29

            If syntax impacts understandability, is it actually superficial?

            1. 19

              I don’t think that’s the fault of the syntax. A huge part of the criticism is expectations/preferences and a lack of understanding of the trade-offs that made it the way it is. When Rust is different from whatever other language someone is used to, they compare the familiar with the unfamiliar (see Stroustrup’s Rule). But it’s like saying the Korean alphabet is unreadable because you can’t read any of it.

              People who don’t like Rust’s syntax usually can’t propose anything better than a bikeshed-level tweak that has other downsides that someone else would equally strongly dislike.

              For example, <> for generics is an eyesore. But if Rust used [] for generics, it’d make array syntax either ambiguous (objectively a big problem) or seem pointlessly weird to anyone used to C-family languages. Whatever else you pick is either ambiguous, clashes with meaning in other languages, or isn’t available in all keyboard layouts.

              The closure syntax || expr may seem like line noise, but in practice it’s important for closures to be easy to write and for the syntax to keep the focus on their body. JS went from function () { return expr } to () => expr. Double-arrow closures aren’t objectively better, and JS users criticize them too. A real, serious failure of Rust regarding closures is that they have lifetime-elision rules surprisingly different from those of standalone functions, and that is a problem deeper than the syntax.

              Rust initially didn’t have the ? shortcut for the if err != nil { return nil, err } pattern, and it had a problem of a low signal-to-noise ratio. Rust then tried removing the boilerplate with a try!() macro, but it worked poorly with chains of fallible function calls (you’d have a line starting with try!(try!(try!(… and then have to figure out where each of them has its closing paren). Syntax has lots of trade-offs, and even if the current one isn’t ideal in all aspects, that doesn’t mean alternatives would be better.

              And there are lots of things that Rust got right about the syntax. if doesn’t have a “goto fail” problem. Function definitions are greppable. The syntax of nested types is easy to follow, especially compared to C’s “spiral rule” types.

              1. 14

                I think a lot of criticism about syntax is oblique. People complain about “syntax” because it’s just… the most convenient way to express “I find it hard to learn how to write correct programs, and I find it hard to interpret written programs, even after substantial practice”.

                Lots of people complain that Common Lisp syntax is hard. Lisp syntax is so easy that you can write a parser in a few dozen lines. Common Lisp has a few extra things but, realistically, the syntax is absolutely trivial. But reading programs written in it is not, even after substantial practice, and I get that (as in, I like Common Lisp, and I have the practice, and I get that).
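
                To the point about how small such a parser really is, here’s a rough sketch in Python of a bare-bones S-expression reader (atoms kept as plain strings; no quoting, number parsing, or error handling):

                ```python
                def tokenize(src):
                    # Pad parens with spaces, then split on whitespace.
                    return src.replace('(', ' ( ').replace(')', ' ) ').split()

                def parse(tokens):
                    tok = tokens.pop(0)
                    if tok == '(':
                        lst = []
                        while tokens[0] != ')':
                            lst.append(parse(tokens))
                        tokens.pop(0)  # drop the closing ')'
                        return lst
                    return tok  # atom

                print(parse(tokenize('(+ 1 (* 2 3))')))  # ['+', '1', ['*', '2', '3']]
                ```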

                Same thing here. A lot of thought went into Rust’s syntax, probably more than into, say, C’s syntax, if only because there was a lot more prior art for Rust to consider. So there’s probably not much that can be done to improve Rust’s syntax without basically inventing another language. That doesn’t take away from the fact that the language is huge, so it has a syntax that’s unambiguous and efficient but also huge; there’s just a whole lot of it to learn and keep in your head at once. I get it: I’ve been writing Rust on and off, but pretty much weekly, for more than a year now, and I still regularly need to go back to the book when reading existing code. Hell, I still need it when reading existing code that I wrote. You pay a cognitive price for that.

                1. 3

                  “I find it hard to learn how to write correct programs…”

                  Do you believe “correctness” is a boolean property of a program?

                  1. 1

                    I do, as in, I think you can always procure a “correctness oracle” that will tell you if a program’s output is the correct one and which, given a log of the program’s operations, can even tell you if the method through which it achieved the result is the correct one (so it can distinguish between correct code and buggy code that happens to produce correct output). That oracle can be the person writing the program or, in commercial settings, a product manager or even a collective – a focus group, for example. However:

                    • That oracle works by decree. Not everyone may agree with its edicts, especially with user-facing software. IMHO that’s inherent to producing things according to man-made specs. There’s always an “objective” test to the correctness of physics simulation programs, for example, but the correctness of a billing program is obviously tied to whatever the person in charge of billings thinks is correct.
                    • The oracle’s answers may not be immediately comprehensible, and they are not necessarily repeatable (like the Oracle at Delphi, it’s probably best to consider the fact that its answers do come from someone who’s high as a kite). IMHO that’s because not all the factors that determine a program’s correctness are inherent to the program’s source code, and presumably, some of them may even escape our quantitative grasp (e.g. “that warm fuzzy feeling” in games). Consequently, not all the knowledge that determines if a program is correct may reside with the programmer at the time of writing the code.

                    More to the point, I think it’s always possible to say if something is a bug or a feature, yes :-D.

                    1. 1

                      Wow! I guess I can just say that I wish I worked in your domain! 😉 I can’t think of more than a handful of programs I’ve written in my entire life which have a well-defined notion of correct, even in part. Almost all of my programs have been approximate models of under-specified concepts that can change at the whims of their stakeholders. Or, as you say it,

                      the correctness of a billing program is obviously tied to whatever the person in charge of billings thinks is correct.

                      Exactly!

                      not all the knowledge that determines if a program is correct may reside with the programmer at the time of writing the code.

                      In my experience it rarely exists anywhere! Not in one person, or many, or even conceptually.

                      1. 1

                        I can’t think of more than a handful of programs I’ve written in my entire life which have a well-defined notion of correct, even in part.

                        Oh, don’t get me wrong – that describes most of the code I wrote, too, even some of the code for embedded systems :-D. It may well be the case that, for many programs, the “correct” way to do it currently escapes everyone (heh, web browsers, for example…) But I am content with a more restricted definition of correctness that embraces all this arbitrariness.

                  2. 2

                    Well, there was a lot of prior art even when C was created, and they actively chose to disregard it. They also chose to disregard discoveries in C itself in the 70s and 80s, freezing the language far too early considering the impact it would have in the following decades.

                  3. 4

                    it’s like saying the Korean alphabet is unreadable because you can’t read any of it.

                    But like there is a reasonably objective language difficulty ranking index (from the perspective of English-native speakers) and Korean is squarely in the most-difficult tranche, I guess in no small part due to its complex symbology, at least in comparison to Roman alphabets. Are you saying that this dimension of complexity is, net, irrelevant?

                    1. 11

                      Korean uses the Hangul alphabet, which is very easy to learn. It’s much simpler than our alphabet. You can learn Hangul in a day or two. You’re thinking of Japanese, which is a nightmare based on people writing Chinese characters in cursive and italics while drinking a bottle of sake.

                      1. 1

                        Some simplified hanzi does look like kanji, but I would appreciate an example of a Japanese character looking like a cursive or italic version of a Chinese glyph before I go on to tell your analogy to everyone at parties.

                        1. 1

                          It’s not an analogy. It’s the historical truth of kana: https://en.wikipedia.org/wiki/Kana. Japanese kanji and hanzi are mostly the same modulo some font changes and simplification in the 20th c.

                          1. 1

                            I meant the drunken Japanese people part.

                            1. 3

                              We can’t prove they weren’t drunk. :-)

                      2. 11

                        from the perspective of English-native speakers

                          I think that’s what they were getting at; there’s nothing inherently difficult about it, but your background as an English speaker makes it look hard to read, when objectively speaking it’s dramatically simpler than English due to its regularity and internal logic.

                        1. 2

                          I guess I would say that there is no “objectively speaking” in this domain? Like, there is no superhuman who can look at things invariant of a language background.

                          1. 3

                            If you’re talking about “easy to learn” then I agree.

                            If you’re talking about simplicity, then I disagree. The number of rules, consistency, and prevalence of exceptions can be measured without reference to your background.

                        2. 7

                          I’ve specifically mentioned the Hangul alphabet (strictly speaking, an alphabet whose letters are written in syllable blocks), not the language. The Korean language (vocabulary, grammar, spoken communication) may be hard to learn, but the alphabet itself is actually very simple and logical. It’s modern, and it has been specifically designed to be easy to learn and a good fit for the Korean language, rather than being a millennia-old historical borrowed mash-up like many other writing systems.

                          I think it’s a very fitting analogy to having an excellent simple syntax for a complex programming language. You may not understand the syntax/alphabet at all, but it doesn’t mean it’s bad. And the syntax/alphabet may be great, but the language it expresses may still be difficult to learn for other reasons.

                          With Rust I think people complaining about the syntax are shooting the messenger. For example, T: for<'a> Fn(&'a str) makes lifetime subtyping contravariant for the loan in the argument of a function trait in a generic trait bound. Is it really hard because of the syntax? No. Even when it’s expressed in plain English (with no computer-language syntax at all) it’s unintelligible techno-babble you wouldn’t know how to use unless you understand the several language features it touches. That for<'a> syntax is obscure even by Rust’s standards, but syntactically it’s not hard. What’s hard is knowing when it needs to be used.

                        3. 4

                          People who don’t like Rust’s syntax usually can’t propose anything better than a bikeshed-level tweak that has other downsides that someone else would equally strongly dislike.

                          The problem with Rust’s syntax isn’t that they made this or that wrong choice for expressing certain features; it’s that there’s simply far too much of it. “Too many notes,” as Joseph II supposedly said.

                          1. 3

                            I agree with this, which is why I object to blaming the syntax for it. For a language that needs to express so many features, Rust’s syntax is doing well.

                            Rust chose to be a language that aims for strong compile-time safety, low-level control, and nearly zero run-time overhead, while still having higher-level abstractions. Rust could drop a ton of features if it offered less control, moved checks to run time, or relaxed safety guarantees, but there are already plenty of languages that do that. The novelty of Rust is in not compromising on any of these, and that came at the cost of having lots of features to control all of these aspects.

                            1. 4

                              You can have many features without a lot of syntax. See Lisp.

                              1. 2

                                If you pick the feature set for simplicity. Rust had other goals.

                                1. 4

                                  I literally just said that simple syntax doesn’t necessitate simple features.

                                2. 2

                                  I think Lisp gets away here only on a technicality. It can still have plenty of obscure constructs to remember, like CL’s (loop).

                                  The example from the article isn’t really any simpler or more readable if you lispify it:

                                  (try (map-result (static-call (def-lifetime a-heavy (:Trying :to-read a-heavy)) 
                                      syntax (lambda (like) (can_be this maddening))) ()))
                                  

                                  It could be made nicer if it was formatted in multiple lines, but so would the Rust example.

                                3.  

                                  I don’t know. I strongly suspect that in the coming years we will see new languages that offer the same safety guarantees as Rust, also with no runtime, but with simpler syntax than Rust’s. Lately I’ve seen both Vale and Koko exploring this space.

                            2. 12

                              The syntax complexity of Rust is actually a big factor in why I abandoned my effort to learn it. I was only learning on my own time, and came to the realization I had a long way to go before I’d be able to pick apart a line like the author’s example.

                              So for me, it wasn’t just superficial.

                              1. 5

                                The syntax complexity of Rust is actually a big factor in why I abandoned my effort to learn it.

                                Same.

                              2. 3

                                If syntax impacts understandability, is it actually superficial?

                                I’d say so.

                                The problem is that “this syntax is ugly” is a completely subjective judgement largely influenced by the peculiarities of one’s own background. Coming from Perl and Ruby, I happen to find Rust pleasant to look at and easy to read, whereas I find both Python and Go (which many other people prefer) unreasonably frustrating to read and just generally odd-looking. It’s not that Python and Go are doing anything objectively less understandable per se, but they certainly have an unfamiliar look, and people react to unfamiliarity as if it were objectively incorrect rather than just, well, making unfamiliar choices with unfamiliar tradeoffs.

                                It’s pure personal preference, and framing one’s personal preferences as something that has objective reality outside oneself, and which some other party is doing “wrong”, is, to me, the definition of a superficial complaint.

                                1. 8

                                  It’s pure personal preference

                                  Is it pure personal preference? I dunno. Personal preference is part of it, but I don’t think it’s controversial to say that Python is in general easier to understand than the q language, for example. Human cognition and coherence actually abide by pretty well-defined rules, at the macro scale. Sigils are harder to grok than words. And so on.

                                  1. 12

                                    Personal preference is a part of it, but I don’t think it’s controversial to say that Python is in general easier to understand than the q language, for example.

                                    Maybe, maybe not. What I do think is that if we’re going to try to make objective claims, we need some real objective measures and measurements. These conversations tend to be nothing but pseudoscience-y assertions and anecdata masquerading as irrefutable facts.

                                    Human cognition and coherence actually abide by pretty well-defined rules, at the macro scale. Sigils are harder to grok than words.

                                    (In no way am I trying to pick on you, but) Case in point: “Sigils are harder to grok than words” feels like a strong objective claim but… is this actually in any way true? 馬 is a much more complicated symbol than $ or @ or ->, but we have something like 1.5 billion people in the world happily reading and writing in languages that require a knowledge of thousands of such symbols to achieve literacy, and they turn out to somehow show lower rates of dyslexia than in alphabet based languages while doing so!

                                    Sigil-y writing systems are indeed quite common throughout history, so again we have this thing where what feels like a perfectly simple fact actually looks a heck of a lot like a simple case of familiarity when you scratch it just a little bit. The dominance of a few alphabetic writing systems outside of Asia could simply be a historical accident for all we know – there are no strong results from cognitive science supporting any claim that it’s objectively more fit to “human cognition”. We really don’t have any idea whether words are simpler or more efficient than symbols, or whether Python is a global maximum of readability, a local minimum, or anything in between. There are almost no good studies proving out any of this, just a lot of handwaving and poorly supported claims based on what people happen to like or be most familiar with.

                                    1. 2

                                      馬 is a word. It happens to be written as a single character, but that doesn’t make it punctuation.

                                      1. 2

                                        I’m aware. I speak Japanese.

                                        “Sigil” does not mean “punctuation”. It actually means something like “symbol with occult powers”, but in a programming context I think we can understand it as “a symbol that conveys an important functional meaning”, like -> being the symbol meaning “returns a value of the following type”. The point being that OP was being pretty silly when they wrote that it’s a “rule of the human mind” that negation written out as “not” is easier to understand than !, when the existence of a billion-plus people using languages with things like “不” at least weakly implies that a single symbol for “not” is no more mentally taxing to understand.

                                        (that in many programming languages most sigils are punctuation is mostly just an artifact of what’s easy to type on a Western keyboard, but it’s by no means the rule. See: APL, which can be chock-full of non-punctuation sigils)

                                        1. 1

                                          The point is that the symbol has a natural pronunciation, which makes it easy to read for a Japanese speaker. In contrast, when I see !foo or &foo or $foo, my mind just makes an unintelligible noise followed by “foo”, so I have to concentrate on what the symbol means.

                                          1.  

                                            But these symbols all do have actual pronunciations that are generally specified in the language or established conventionally, e.g. !foo is read “not foo”, &foo is “address-of foo” (at least in C) or “ref foo” in Rust, etc. Good learning resources almost always provide a reading when they introduce a symbol (Blandy et al.’s Programming Rust is very good about this, for instance).

                                            Also fwiw not everyone “vocalizes” what they’re reading in their head, that’s actually not a universal thing.

                                      2. 1

                                        When I speak about “understandability” or whatever I’m not making a claim against an abstract Ur-human raised in a vacuum, I’m speaking about humans as they exist today, including cultural and historical influences, and measured on a demographic (macro) scale, rather than an individual (micro) scale. That is, I’m making a descriptive argument, not a normative one. In this context, “familiarity” is I guess a totally reasonable thing to account for! People understand better the things they are familiar with. Right?

                                        1. 3

                                          That is, I’m making a descriptive argument, not a normative one.

                                          It’s not a very good descriptive argument, though, insofar as you’re really failing to describe a lot of things in order to make your argument fit the conclusion that “sigils are harder to grok than words”.

                                          Even if we confine ourselves to Western English speakers… what about mathematics? Why does almost everyone prefer y = x+1 to Cobol’s ADD 1 TO X GIVING Y? It’s more familiar, right? There doesn’t seem to be any long-term push to make Mathematics more wordy over time (most of the established symbols have hung around for hundreds of years and had ample opportunity to get out-competed by more grokkable approaches, if word-based approaches were found by people to be any more grokkable), so if we’re describing the long-term pressures on artificial languages I don’t think “sigils are harder to grok than words” is an accurate descriptive statement.

                                          In this context, “familiarity” is I guess a totally reasonable thing to account for! People understand better the things they are familiar with. Right?

                                          Well, sure. But “in some contexts words are more familiar than sigils to western audiences” is a much different claim than “sigils are harder to grok than words” in any sense, and it leaves a lot more room to talk about sigils in programming languages in a rational way. Things like “dereferencing pointers” aren’t really familiar to anyone in words or sigils, so it’s not obvious to me that x = valueat y is any more or less “correct”/“intuitive”/“grokable” than x = *y.

                                          If anything, given the relative unpopularity of the Pascal/Ada & Cobol language families, a certain amount of “unfamiliar concepts compressed into sigils” seems to be appreciated by programmers at large. But other people disagree, which seems to point at this mostly being a superficial argument over tastes and perhaps how much maths background one has, rather than some kind of concrete and objective variation in any measurable metric of “understandability”.

                                          1. 2

                                            what about mathematics?

                                            Well, I think this substantiates my point? In the sense that way more people can read prose than can understand nontrivial math. Right?

                                            “in some contexts words are more familiar than sigils to western audiences” is a much different claim than “sigils are harder to grok than words”

                                            Not some but most or even almost all, depending on just how many sigils we’re talking about.

                                            Authors generally don’t invent new languages in order to express their literary works; they take the language(s) they already know, with all their capabilities and constraints, and work within those rules. They do this because their goal is generally not to produce the most precise representation of their vision, but instead to produce something which can be effectively consumed by other humans. The same is true of programming.

                                            1. 2

                                              Well, I think this substantiates my point? In the sense that way more people can read prose than can understand nontrivial math. Right?

                                              More people can read prose (in general) than the prose portions of an advanced Mathematics text (in specific). It’s not the orthography of mathematics that’s the limiting factor here.

                                              Authors generally don’t invent new languages in order to express their literary works; they take the language(s) they already know, with all their capabilities and constraints, and work within those rules. They do this because their goal is generally not to produce the most precise representation of their vision, but instead to produce something which can be effectively consumed by other humans. The same is true of programming.

                                              Which speaks to my point. Programming uses “sigils” because in many cases those sigils are already familiar to the audience, or are at least no less familiar for the concepts involved than anything else would be. And audiences seem to show some marked preference for sigils: { … } over begin … end, and y = x + 1 pretty clearly preferred for effective consumption over ADD 1 TO X GIVING Y, etc.

                                              At any rate, we seem to have wandered totally away from “sigils are objectively less readable” and fully into “it’s all about familiarity”, which was my original point.

                                              1. 2

                                                I’m not claiming that sigils are objectively less readable than prose. I’m objecting to the notion that syntax is a superficial aspect of comprehension.

                                                1.  

                                                  You’ve made claims that terse syntax impedes comprehension (“Sigils are harder to grok than words”), where the reality is in the “it depends” territory.

                                                  For novices, mathematical notation is cryptic, so they understand prose better. But experts often prefer mathematical notation over prose, because its precision and terseness makes it easier for them to process and manipulate it. This is despite the fact that the notation is objectively terrible in some cases due to its ad-hoc evolution — even where the direction is right, we tend to get details wrong.

                                                  Forms of “compression” for common concepts keep appearing everywhere in human communication (e.g. in spoken languages we have contractions & abbreviations, and keep inventing new words for things instead of describing them using whole phrases), so I don’t think it’s an easy case of “terse bad verbose good”, but a trade-off between unfamiliarity and efficiency of communication.

                                                  1.  

                                                    I agree with all of your claims here.

                                                  2. 0

                                                    I’m objecting to the notion that syntax is a superficial aspect of comprehension.

                                                    It’s not fully, but “the * operator should be spelled valueat/ {} should be spelled begin end” stuff is a superficial complaint unless and until we have objective, measurable reasons to favor one syntactical presentation over the other. Otherwise it’s just bikeshedding preferences.

                                                    But I’m sorry, let’s not continue this. I’m not buying the goalpost move here. You wrote that human cognition obeys “well-defined rules. Sigils are harder to grok than words”. That’s pretty obviously a claim that “sigils are objectively less readable than prose” due to these “well defined rules of cognition”. That’s the kind of handwavey, pseudoscience-as-fact discourse I was objecting to and pointing out these discussions are always full of.

                                                    I’ve pointed out that this is, in several ways, basically just a load of hot air inconsistent with any number of things true of humans in general (symbol based writing systems) and western readers in specific.

                                                    Now your “well-defined rules of human cognition which include that sigils are less readable than words” weren’t trying to be an objective claim about readability?

                                                    Sure. I’m done. Have a good one.

                                  2. 24

                                    I would warmly suggest making an effort to hit Page Down twice to get past the syntax bit and read the rest of the post though, because it’s a pretty good and pragmatic take, based on the author’s experience writing and maintaining a kernel. Xous is a pretty cool microkernel which runs on actual hardware, it’s actually a pretty good test of Rust’s promises in terms of safety and security.

                                    1. 10

                                      It’s interesting but also has the weird dichotomy that the only two choices for systems programming are C or Rust. C++ also has a lot of the strengths that the author likes about Rust (easy to write rich generic data structures, for example), and has a bunch of other things that are useful in a kernel, such as support in the standard library for pluggable memory allocators, mechanisms for handling allocation failure, a stable standard library API, and so on.

                                      1. 5

                                        I had exactly the same thought. C++ burnt through a lot of good will in the C++98 era when it was admittedly a hot mess (and all the compilers were buggy dumpster fires). Now on one hand we have people who publicly and loudly swore off touching C++ ever again based on this experience (and even more people parroting the “C++ is a mess” statement without any experience), and on the other the excitement of Rust, with all the hype making people invest a large amount of effort into learning it. But the result, as this article shows, is often not all roses. I believe oftentimes the result would have been better if people had invested the same amount of time into learning modern C++. Oh well.

                                        1. 5

                                          Writing C++ is like writing Rust but with your whole program wrapped in unsafe{}. You have to manage your memory and hope you did it right.

                                          1. 4

                                            As I hope this article clearly demonstrates, there is a lot more to a language choice than memory safety. Also, FWIW, I write fairly large programs and I don’t find memory management particularly challenging in modern C++. At the same time, I highly doubt that these programs could be rewritten in Rust with the result having comparable performance, compilation times, and portability properties.

                                            1. 1

                                              What would hinder Rust from having comparable performance, compilation times, and portability properties, in your opinion?

                                              1.  

                                                In summary:

                                                Performance: having to resort to dynamic memory allocations to satisfy the borrow checker.

                                                Compilation: in Rust almost everything is a template (parameterized over lifetimes).

                                                Portability: C/C++ toolchain is available out of the box. I also always have an alternative compiler for each platform.

                                          2. 4

                                            string_view of temporaries makes dangling pointers instead of compilation errors. optional allows unchecked dereferencing without warnings, adding more UB to modern C++. I haven’t met a C++ user who agrees these are fatal design errors. Sorry, but this is not up to safe Rust’s standards. From the Rust perspective, modern C++ continues to add footguns that Rust was designed to prevent.
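                                            A minimal sketch of the string_view hazard (with a hypothetical make() helper; the dangling line is left commented out, since actually reading the dead view would be undefined behavior):

                                            ```cpp
                                            #include <cassert>
                                            #include <string>
                                            #include <string_view>

                                            std::string make() { return "hello"; }

                                            int main() {
                                                // Dangerous: this compiles without warnings, but the view binds to a
                                                // temporary std::string that is destroyed at the end of the statement,
                                                // so reading `dangling` afterwards would be undefined behavior.
                                                // std::string_view dangling = make();

                                                // Safe: the viewed string outlives the view.
                                                std::string owner = make();
                                                std::string_view sv = owner;
                                                assert(sv == "hello");
                                                return 0;
                                            }
                                            ```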

                                            1. 1

                                              I haven’t met a C++ user who agrees these are fatal design errors.

                                              I haven’t used string_view much so can’t categorically say it’s not a design error (it very well may be). But for optional I can certainly say it is a trade-off: you have the choice of checked access (optional::value()) or unchecked and you decide what to use. I personally always use unchecked and never had any problems. Probably because I pay attention to what I am writing.
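                                              To illustrate the trade-off, a minimal sketch of the two access styles (the checked path is observed via std::bad_optional_access; the unchecked path is shown only on a non-empty value, since dereferencing an empty optional is undefined behavior):

                                              ```cpp
                                              #include <cassert>
                                              #include <optional>

                                              int main() {
                                                  std::optional<int> empty; // holds no value

                                                  // Checked access: value() throws std::bad_optional_access when empty.
                                                  bool threw = false;
                                                  try {
                                                      (void)empty.value();
                                                  } catch (const std::bad_optional_access&) {
                                                      threw = true;
                                                  }
                                                  assert(threw);

                                                  // Unchecked access: *opt on an empty optional is undefined behavior,
                                                  // with no warning or exception. Here it is fine only because we know
                                                  // the optional holds a value.
                                                  std::optional<int> filled = 42;
                                                  assert(*filled == 42);
                                                  return 0;
                                              }
                                              ```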

                                              1. 5

                                                This is the difference in approaches of the two languages. In C++ if the code is vulnerable, the blame is on the programmer. In Rust if the code is vulnerable, Rust considers it a failure of the language, and takes responsibility to stop even “bad” programmers from writing vulnerable code. I can’t stress enough how awesome it is that I can be a careless fool, and still write perfectly robust highly multi-threaded code that never crashes.

                                                In terms of capabilities, Rust’s Option is identical, but the default behavior is safe, and there’s a lot of syntax sugar (match, if let, tons of helper methods) to make the safe usage the preferred option even for “lazy” programmers. The UB-causing version is written unsafe { o.unwrap_unchecked() }, which is deliberately verbose and clunky, so that the dangerous version stands out in code reviews, unlike the subtle * or -> that are commonly used everywhere.

                                                Rust’s equivalent of string_view is &str, and it’s practically impossible to use the language without embracing it, and it’s borrow-checked, so it won’t compile if you misuse it.

                                          3. 2

                                            Eh, maybe the author just didn’t write that much low-level/kernel code in C++. I try not to read too much into these things. If I were to start learning F# tomorrow, then tried to write a similar piece two years from now, I’d probably end up with something that would have the weird dichotomy that the only two choices for functional programming are Scheme and F#.

                                            1. 1

                                            Scheme is honestly so hard to do functional programming in. It’s shockingly imperative by nature, given its reputation.

                                          4. 3

                                            I did read the entire post, but I wanted to voice that focusing on the wrong thing first makes people not take you seriously, especially when the author admits it doesn’t matter yet still decided to put it first.

                                            1. 3

                                              I may not be interpreting this correctly but I didn’t take the author qualifying it as a superficial complaint to mean that it doesn’t matter. Based on the issues he mentions regarding the readability of Rust macros, for example, I think it’s superficial as in “superficial velocity”, i.e. occurring or characterising something that occurs at the surface.

                                              (But note that I may be reading too much into it because reviewing and auditing Rust code that uses macros is really not fun so maybe I’m projecting here…)

                                          5. 20

                                            The final sentence of that section said, in summary, “Rust just has a steep learning curve in terms of syntax”. A critical retrospective that does not mention the horrendous syntax or its learning curve would lack credibility.

                                            1. 4

                                              I find Rust’s syntax perfectly clear and sensible. I am not the only one.

                                            2. 9

                                              I liked that it starts with that, TBH. Rust’s dense syntax is probably the first impression of the language for many people—it was for me at least. And putting the author’s first impression first in the article makes it read more like a person telling a story, rather than a list of technical observations sorted by importance.

                                              I like to read stories by humans; I find it easier to connect with the author and therefore to retain some of what they say. YMMV of course.

                                              1. 2

                                                And if they think rust is hard to read, wait until they discover lisp!

                                                (I know this author probably is already familiar with lisp and many other things, but the comparison stands.)

                                                1. 6

                                                  I find it the other way around. If you temporarily put aside the issues of special forms and macros, the syntax of Lisp is extremely minimal and regular (it’s almost all lists and atoms). So Lisp stands at kind of an opposite extreme from Rust, with more familiar languages somewhere in between.

                                                  1. 5

                                                    Nim still has a dumpLisp routine to show you the shape of an AST you may want to manipulate.

                                                    Syntax can be very personal, but I strongly prefer Nim’s to Rust’s and see no compelling language feature of Rust to tempt me away, though Nim is not without its own issues.

                                                    1. 2

                                                      Nim isn’t really comparable is it? More like Go with a GC etc?

                                                      1. 2

                                                        “How comparable” mostly depends upon what you mean by “a GC etc”. Nim’s (AutomaticRC/OptimizedRC) memory management seems fairly similar to Rust, but I am no Rust expert and most PLs have quite a few choices either directly or ecosystem-wide. (Even C has Boehm.) There is no “GC thread” like Java/Go. The ORC part is for cycle collection. You can statically specify {.acyclic.}, sink, lent, etc. in Nim to help run-time perf. Some links that go into more detail are: https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html https://nim-lang.org/blog/2020/12/08/introducing-orc.html

                                                        1. 0

                                                          “Go with a GC” is Go.

                                                          1. 1

                                                            Yes, that’s why I said it

                                                      2. 2

                                                        The complaint in the article is about syntax that’s noisy and hard to read, though, and Lisp is definitely that: even if it is simple and regular, that simplicity leads everything to look the same.

                                                        1. 3

                                                          I always wondered why indentation-based reader macros (SRFI-49 is a simple one) never became popular. I can see “whys” for big macro writer types since they often want to pick apart parse trees and this adds friction there. Most programmers are not that (in any language). My best guess is a kind of community dynamic where tastes of village elders make beginners adapt more. Or wishful promises/hopes for beginners to become elders? Or a bit of both/etc.?

                                                          Of course, part of the regularity is “prefix notation” which can remain a complaint.

                                                    2. 1

                                                      It makes it hard to take the rest of the post seriously

                                                      As x64k said the post is pretty well made I think and some honest criticism. If anything you can criticize the bad blog layout, which has big white bars on mobile and desktop, giving you a hard time reading it from any device.

                                                    1. -1

                                                      Jakt: immutable but object-oriented

                                                      2 steps forward and 1 step back.

                                                      1. 15

                                                        I don’t think anyone has successfully developed a large scale UI system in anything but OO. SerenityOS leans heavily on OO widgets for the UI, so I don’t think they have much choice.

                                                        1. 1

                                                          In what way?

                                                        1. 4

                                                          This article was written so that focus can be brought to the accessibility of the Linux desktop. As Raspberry Pi computers become more prevalent in schools, I want blind students to be able to enjoy learning to code, manage systems, and explore computing.

                                                          It saddens me that the children born in this century, whether sighted or not, must grow up with the notion that computers, in some intrinsic way, must be coded, managed and explored as a microscopic version of a 1970s minicomputer.

                                                          This was not how I expected computers to look when I grew up.

                                                          1. 2

                                                            What would you prefer? Why are the ways that a Raspberry Pi is like a small 70s minicomputer more significant than the ways in which it is not?

                                                          1. 24

                                                            Yeah yeah, mention Rust. Rust is too complicated to implement by one person.

                                                            I’m not sure that’s a practical metric by which to judge a tool. The C compilers that provide a practical foundation for modern software development were not implemented by one person either.

                                                            In general Turing completeness is necessary but not sufficient: it’s just one facet of what makes a language practically useful. There are many other properties that end up resulting in costs someone has to pay to use a language; e.g., is it memory safe, or will engineers and users alike be on the hook for an unknown number of egregious memory safety bugs?

                                                            1. 12

                                                              Also mrustc has been implemented mostly by one person.

                                                              1. 2

                                                                I knew this would be brought up; do you know how much effort it took to achieve this? An incredible amount.

                                                                1. 8

                                                                  It’s 100K lines of code, and the majority of it was developed over a 2-3 year period (with ongoing development to catch up with the evolution of Rust). The number of commits and lines of code happens to be close to TCC’s.

                                                                  It does take a couple of shortcuts: it’s a Rust-to-C compiler (no machine code generation) and it doesn’t perform borrow checking (the Rust language is carefully designed to make borrow checking optional: lifetimes are purely a compile-time check, and don’t affect generated code or its behavior).

                                                                  I think overall in terms of implementation difficulty Rust is somewhere between C and C++. Parsing of Rust is much simpler than C++, and Rust has fewer crufty language features than C++ (there’s one way to initialize a variable), but some features are big-ish (borrow checker, type inference).

                                                                  How hard it is to implement mainly depends on how good quality of implementation you want to have. For example, LLVM is 85× larger than mrustc and tcc, with over 130× more commits. It’s a 20-year collaborative effort, likely not possible to do by a single person. The main rustc project is also maximalist like that, because it isn’t merely an effort to get Rust working, but to make it fast, efficient, user-friendly, well-documented, reliable, portable, etc., so much much more work went into it beyond just the language implementation.

                                                                  1. 2

                                                                    I cannot speak for mrustc, but 100k loc for tcc is bullshit. Just counting sources and headers in the top level, I get 55k loc (the remainder is taken up by tests and win32 headers). Close to 40k is taken up by target-specific code. The core compiler is about 10k loc.

                                                                    1. 1

                                                                      openhub stats I’ve quoted are for the whole repo, and I see 57K .c and 38K .h in there. This includes tests, so it’s indeed more than just the compiler.

                                                                      1. 2

                                                                        If I run a word count on everything in the ‘src’ directory of mrustc, I get about 130k loc. I therefore conclude that mrustc’s rust compiler is approximately 10x larger and more complex than tcc’s c compiler. Recall that tcc also includes assemblers and linkers, and supports many targets.

                                                                    2. 0

                                                                      I mean if 3 years is not a lot of effort then cheers to you! You must be an absolute coding beast.

                                                                      1. 15

                                                                        I feel like this is a fairly disingenuous and dismissive argument - your original post stated that “Rust is too complicated to implement by one person.” The comment you were responding to was making the point that not only is there an implementation of Rust by primarily one person, but a single-contributor C implementation is a comparable size and would theoretically take a similar amount of effort to implement. People here aren’t trying say it’s not a lot of effort, but that it does exist and you may be trivializing the amount of effort needed for a C implementation.

                                                                        1. 3

                                                                          Sorry, I didn’t mean to dismiss anything! Isn’t the statement still true, given that it’s been mentioned they did get some help?… Regardless, the general sentiment is right. I should have said instead that it’s not reasonable!

                                                                          I may very well be trivializing the effort for a C implementation. In my mind, C’s simpler type system, lack of a borrow checker, and other features make its implementation maybe an order of magnitude easier. I could be completely wrong though, and please elaborate if that’s the case!

                                                                          1. 4

                                                                            A non-optimizing C89 or C90 compiler is relatively simple to implement, with only minor inconveniences from the messy preprocessor, bitfields, and the parsing ambiguities of dangling else and typedef (did you know typedef can be scoped and nested, and that this affects the syntax around it!?). There aren’t any things that are hard per se, mostly just tedious and laborious, because there are a lot of small quirks underneath the surface (e.g. arrays don’t always decay to pointers, sizeof evaluates things differently, there are rules around “sequence points”).

                                                                            There are corners of C that most users don’t use, but that a compiler in theory needs to support, e.g. case doesn’t have to be at the top level of switch, but can be nested inside other arbitrary code. C can generate “irreducible” control flow, which is hard to reason about and hard to optimize. In fact, a lot of optimization is pretty hard due to aliasing, broken const, and the labyrinth of what is and isn’t UB described in the spec.
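                                                                            As a small illustration (using a contrived count_from helper, in the spirit of Duff’s device), a case label can sit inside a loop nested within the switch body, and the switch jumps straight into the middle of that loop:

                                                                            ```cpp
                                                                            #include <cassert>

                                                                            // `case` labels don't have to sit at the top level of the switch body:
                                                                            // here they live inside a while loop, and the switch jumps directly
                                                                            // into the middle of that loop.
                                                                            int count_from(int start) {
                                                                                int n = 0;
                                                                                switch (start) {
                                                                                    while (true) {
                                                                                case 0:
                                                                                        ++n; // fall through
                                                                                case 1:
                                                                                        ++n; // fall through
                                                                                case 2:
                                                                                        ++n;
                                                                                        break; // breaks out of the while loop
                                                                                    }
                                                                                }
                                                                                return n;
                                                                            }

                                                                            int main() {
                                                                                assert(count_from(0) == 3);
                                                                                assert(count_from(1) == 2);
                                                                                assert(count_from(2) == 1);
                                                                                assert(count_from(9) == 0); // no matching label: switch body skipped
                                                                                return 0;
                                                                            }
                                                                            ```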

                                                                            1. 3

                                                                              There are corners of C that most users don’t use, but compiler in theory needs to support, e.g. case doesn’t have to be at the top level of switch, but can be nested inside other arbitrary code

                                                                              It’s worth noting that, since you said ‘non-optimising’ these things are generally very easy in a non-optimising compiler. You can compile C more or less one statement at a time, including case statements, as long as you are able to insert labels after you insert a jump to them (which you can with most assembly languages). Similarly, sequence points matter only if you’re doing more than just evaluating expressions as you parse them.

                                                                              The original C compiler ran on a computer that didn’t have enough memory for a full parsed AST and so the language had to support incremental code generation from a single-pass compiler.

                                                                2. 9

                                                                  LLVM was originally just Chris Lattner. I think the question isn’t “Can one person build it?” It’s “Can one person build it to the point where it has enough value for other people to work on it too?”

                                                                  1. 5

                                                                    LLVM was originally just Chris Lattner

                                                                    Several of the folks in / formerly in Vikram Adve’s group at UIUC would be quite surprised to learn that.

                                                                    1. 1

                                                                      I actually looked at Wikipedia first before my comment, but that made it seem like it was Lattner’s project under Adve’s mentorship. I’ll take your word for it that it was a group effort from the start.

                                                                  2. 3

                                                                    This was my first thought as well. There are a lot of very useful things that are too complicated to be implemented by one person - the current state of Linux probably falls into that category, and I know that at least I wouldn’t want to go back to even a version from 5 years ago, much less back to a version that could have been implemented by a single person.

                                                                    1. 2

                                                                      …And there are a lot of useful things that are simple enough for one person to implement! :D

                                                                      1. 3

                                                                        Ha, I agree with that, was mostly just highlighting that I don’t feel like “too complicated to implement by one person” is a good reason to dismiss Rust’s potential usefulness.

                                                                        For myself, I originally got frustrated with Rust not allowing me to do things; eventually, I realized that it was statically removing bad habits that I’d built in the past. Now I love when it yells at me :)

                                                                    2. 1

                                                                      [Tool] is too complicated to implement by one person.

                                                                      I’m not sure that’s a practical metric by which to judge a tool

                                                                      I am. Short term, that means the tool will cost much less: less time to make, fewer bugs, more opportunities for improvement. Long term it means other people will be able to rebuild it from scratch if they need to. At a lower cost.

                                                                      1. 3

                                                                        The flip side of this is that the tool will do much less. A wooden hammer is a tool that a single person can make. A hammer with a steel head that can drive in nails requires a lot more infrastructure (smelting the ore and casting the head are probably large enough tasks that you’ll need multiple people before you even get to adding a wooden handle). An electric screwdriver requires many different parts made in different factories. If I want to fix two pieces of wood together than a screw driven by an electric screwdriver is both easier to use and produces a much better result than a nail driven by a wooden hammer.

                                                                        1. 1

                                                                          Obviously I was limiting my analysis to software tools, where the ability of a single person to make it is directly tied to its complexity.

                                                                          One fair point you do have is how much infrastructure the tool sits upon. Something written in Forth needs almost nothing besides the hardware itself. Something written in Haskell is a very different story. Then you need to choose what pieces of infrastructure you want to depend on. For instance, when I wrote my crypto library I chose C because of its ubiquity. It’s also a guarantee of fairly extreme stability. There’s a good chance that the code I write now will still work several decades from now. If I wanted to maximise safety instead, I would probably have picked Rust.

                                                                          1. 4

                                                                            Obviously I was limiting my analysis to software tools, where the ability of a single person to make it is directly tied to its complexity.

                                                                            My point still applies. A complex software tool allows me to do more. In the case of a programming language, a more complex compiler allows me to write fewer bugs or more features. The number of bugs in the compiler may be lower for a compiler written by a single person but I would be willing to bet that the number of bugs in the ecosystem is significantly higher.

                                                                            The compiler and standard library are among the best places for complexity in an ecosystem because the cost is amortised across a great many users and the benefits are shared similarly. If physical tools were, like software, zero marginal cost goods, then nail guns, pillar drills, band saws, and so on would all be ubiquitous. If you tried to make the argument that you prefer a manual screwdriver to an electric one because you could build one yourself if you needed then you’d be laughed at.

                                                                            For instance, when I wrote my crypto library I chose C because of its ubiquity. It’s also a guarantee of fairly extreme stability

                                                                            It also gives you absolutely no help in writing constant-time code, whereas a language such as Low* allows you to prove constant-time properties at the source level. The low* compiler probably depends on at least a hundred person-years of engineering but I’d consider it very likely that the EverCrypt implementations of the same algorithms would be safer to use than your C versions.

                                                                            1. 2

                                                                              I reckon amortized cost is a strong argument. In a world where something is built once and used a gazillion times, the cost analysis is very different from something that has only a couple of users. Which is why, by the way, I have very different outlooks for Oberon and Go: the former was used in a single system, and the cost of a more powerful compiler could easily outweigh the benefits across the rest of the system; while Go set out to be used by a gazillion semi-competent programmers, and the benefit of some conspicuously absent features would be multiplied accordingly.

                                                                              Honestly, I’m not sure where I stand. For the things I make, I like to keep it very very simple. On the other hand, If I’m being honest with myself I have little qualms sitting on a mountain of complexity, provided such foundation is solid enough.

                                                                              Do you have a link to Low*? My search engine is failing me.

                                                                              1. 2

                                                                                Do you have a link to Low*? My search engine is failing me.

                                                                                This paper is probably the best place to start

                                                                      2. 1

                                                                        The C compilers that provide a practical foundation for modern software development were not implemented by one person either.

                                                                        Right but there are many C compilers which were written by one person and still work. To me, that’s the important part. Thank you for your thoughts!

                                                                        1. 2

                                                                          Why is that important?

                                                                          1. 1

It’s important because, fast forward 300 years, no one uses your language anymore. It must be reasonable for future humans to write a compiler on their own if they want to run your program.

                                                                            I’m really trying to encourage people thinking beyond their lives in the software realm lately, just as we need to do the same for the ecosystem.

                                                                            1. 3

                                                                              trying to build software to last 300 years seems like it would limit hardware development
                                                                              and everyone implements C compatibility in their new hardware so that people will use it
                                                                              if people can figure out quantum computers and computers not based on binary, they’ll probably need to figure out what the next C will be for that new architecture
if you want your software to last 300 years, write it in the most readable and easy-to-understand manner, and preserve its source so people can port it in the future

                                                                              1. 3

And this is why C is not good for longevity, but more abstracted languages are. Thank you for that! Completely agree with what you’re thinking here.

                                                                                1. 3

i don’t think the biggest blockers to software longevity are language choices or even hardware, it’s the economy/politics of it… long lasting anything doesn’t fit in well with our throw-away society, and since it can’t be monetized, the capitalist society snubs its nose at it

                                                                                  1. 1

                                                                                    Hehe, an interesting thread of thought we could travel down here. I’ll just say I agree to a degree.

                                                                              2. 3

                                                                                It’s important because fast forward 300 years and no one uses your language anymore. It must be reasonable the future humans can write a compiler on their own if they want to run your program.

                                                                                If you’re considering a person 300 years in the future then you should also consider that they will have tools 300 years more advanced than ours. 30 years ago, writing a simple game like space invaders was weeks worth of programming, now it’s something that you can do in an afternoon, with significantly better graphics. In the same time, parser generators have improved hugely, reusable back ends are common, and so on. In 300 years, it seems entirely feasible that you’d be able to generate a formal specification for a language from a spoken description and synthesise an optimising compiler directly from the operational semantics.

                                                                                1. 1

                                                                                  You’re right, I haven’t considered this! I don’t know what to say immediately other than I think this is very important to think about. I’d like to see what others have to comment on this aspect too…!

                                                                                  1. 1

                                                                                    you should also consider that they will have tools 300 years more advanced than ours.

                                                                                    Unless there has been a collapse in between. With climate change and peak oil, we have some serious trouble ahead of us.

                                                                                    1. 4

                                                                                      In which case, implementing the compiler is one of the easiest parts of the problem. I could build a simple mechanical computer that could execute one instruction every few seconds out of the kind of materials that a society with a Victorian level of technology could produce, but that society existed only because coal was readily accessible. I’ve seen one assessment that said that if the Victorians had needed to use wood instead of coal to power their technology they’d have completely deforested Britain in a year. You can smelt metals with charcoal, but the total cost is significantly higher than with coal (ignoring all of the climate-related externalities).

                                                                                      Going from there to a transistor is pretty hard. A thermionic valve is easier, but it requires a lot of glass blowing (which, in turn, requires an energy-dense fuel source such as coal to reach the right temperatures) and the rest of a ‘50s-era computer required fairly pure copper, which has similar requirements. Maybe a post-collapse civilisation would be lucky here because there’s likely to be fairly pure copper lying around in various places.

                                                                                      Doping silicon to produce integrated circuits requires a lot of chemical infrastructure. Once you can do that, the step up to something on the complexity of a 4004 is pretty easy but getting lithography to the point where you can produce an IC powerful enough to run even a fairly simple C program is nontrivial. Remember that C has a separate preprocessor, compiler (which traditionally had a separate assembler), and linker because it was designed for computers that couldn’t fit more than one of those in RAM at a time. Even those computers were the result of many billions of dollars of investment from a society that already had mass production, mass mining, and large-scale chemistry infrastructure.

                                                                                      C code today tends to assume megabytes of RAM, at a minimum. Magnetic core storage could do something like 1 KiB in something the size of a wardrobe. Scaling up production to the point where 1 MiB is readily available requires ICs, so any non-trivial C program is going to have a dependency on at least ’80s-era computing hardware.

                                                                                      TL;DR: If a society has collapsed and recovered to the point where it’s rediscovering computers, writing a compiler for a fairly complex language is going to be very low cost in comparison to building the hardware that the compiler can target.

                                                                                      1. 1

                                                                                        Well, I wasn’t anticipating such a hard collapse. I was imagining a situation where salvage is still a thing, or where technology doesn’t regress that far. Still, you’re making a good point.

                                                                                        1. 4

                                                                                          That’s an interesting middle ground. It’s hard for me to imagine a scenario in which computers are salvageable but storage is all lost to the point where a working compiler is impossible to find. At the moment, flash loses its ability to hold charge if not powered for a few years but spinning rust is still fine, as is magnetic tape, for a much longer period, so you’d need something else to be responsible for destroying them. Cheap optical storage degrades quite quickly but there are archive-quality disks that are rated for decades. If anything, processors and storage are more fragile.

In the event of a collapse of society, I think it’s a lot more likely that copies of V8 would survive longer than any computer capable of running them. The implicit assumption in the idea that the compiler would be a bottleneck in recovering from a collapse of society is that information is more easily destroyed than physical artefacts. This ignores the fact that information is infinitely copyable, whereas the physical artefacts in question are incredibly complex and have very tight manufacturing tolerances.

                                                                                          Of course, this is assuming known threats. It’s possible that someone might create a worm that understands a sufficiently broad range of vulnerabilities that it propagates into all computers and erases all online data. If it also propagates into the control systems for data warehouses then it may successfully destroy a large proportion of backups. Maybe this could be combined with a mutated bacterium that ate something in optical disks and prevented recovering from backup DVDs or whatever. Possibly offline storage will completely go out of fashion and we’ll end up with all storage being some form of RAM that is susceptible to EMP and have all data erased by a solar flare.

                                                                                          1. 1

                                                                                            It really depends on what we can salvage, and what chips can withstand salvage operations. In a world where we stop manufacturing computers (or at least high-end chips), I’d expect chips to fail over the years, and the most complex ones will likely go first. And those that don’t will be harder to salvage for various reasons: how thin their connection pins are, ball arrays, multi-layer boards requirements, and the stupidly fast rise times that are sure to cause cross-talk and EMI problems with the hand made boards of a somewhat collapsed future.

In the end, many of us may be stuck with fairly low-end microcontrollers and very limited static memory chips (forget about controlling DRAM; it’s impossible to do even now without a whole company behind you). In that environment, physical salvage is not that horrible, but we’d have lost enough computing power that we’ll need custom software for it. Systems that optimise for simplicity, like Oberon, might be much more survivable in this environment.

                                                                                            C code today tends to assume megabytes of RAM, at a minimum.

In this hypothetical future, that is relevant indeed. Also, I believe you. But then the first serious project I wrote in C, Monocypher, requires only a couple of KB of stack memory (no heap allocation) for everything save password hashing. The compiled code itself fits in less than 40 KB of memory. Thing is, I optimised it for simplicity and speed, not for memory usage (well, I did curb memory use a bit when I heard I had embedded users).

                                                                                            I suspect that when we optimise for simplicity, we also tend to use less resources as a side effect.


                                                                                            Now sure, those simple systems will take no time to rebuild from scratch… if we have the skills. In our world of bigger and faster computers with a tower of abstraction taller than the Everest, I feel most of us simply don’t have those skills.

                                                                                            1. 4

                                                                                              Now sure, those simple systems will take no time to rebuild from scratch… if we have the skills. In our world of bigger and faster computers with a tower of abstraction taller than the Everest, I feel most of us simply don’t have those skills.

It’s an interesting thought exercise, but I think this really is the key point. The effort in salvaging a working compiler to be able to run some tuned C code in a post-apocalyptic future may be significantly higher than just rewriting it in assembly for whatever system you were able to salvage (and, if you can’t salvage an assembler, you can even assemble it by hand after writing it out on some paper. Assuming cheap paper survives - it was very expensive until a couple of hundred years ago).

                                                                                              Most of us probably don’t have the skills to reproduce the massive towers of abstraction that we use today from scratch but my experience teaching children and young adults to program suggests that learning to write simple assembly routines is a skill that a large proportion of the population could pick up fairly easily if necessary. If anything, it’s easier to teach people to write assembly for microcontrollers than JavaScript for the web because they can easily build a mostly correct mental model of how everything works in the microcontroller.

                                                                                              Perhaps more importantly, it’s unlikely that any software that you write now will solve an actual need for a subsistence level post-apocalyptic community. They’re likely to want computers for automating things like irrigation systems or monitoring intrusion sensors. Monocypher is a crypto library that implements cryptosystems that assume an adversary who had access to thousands of dedicated ASICs trying to crack your communications. A realistic adversary in this scenario would struggle to crack a five-wheel Enigma code and that would be something that you could implement in assembly in a few hours and then send the resulting messages in Morse code with an AM radio.

                                                                                              1. 1

                                                                                                Most of us probably don’t have the skills to reproduce the massive towers of abstraction that we use today from scratch but my experience teaching children and young adults to program suggests that learning to write simple assembly routines is a skill that a large proportion of the population could pick up fairly easily if necessary.

                                                                                                I feel a little silly for not having thought of that. Feels obvious in retrospect. If people who have never programmed can play Human Resource Machine, they can probably learn enough assembly to be useful.

                                                                                                Perhaps more importantly, it’s unlikely that any software that you write now will solve an actual need for a subsistence level post-apocalyptic community.

                                                                                                Yeah, I have to agree there.

                                                                                  2. 2

                                                                                    Today’s humans were able to create Rust, so I don’t see why future humans wouldn’t. Future humans will probably just ask GPT-3000 to generate the compiler for them.

If you’re thinking about some post-apocalyptic scenario with a lone survivor rebuilding the civilisation, then our computing is already way beyond that. In the 1960s you were able to hand-stitch RAM, but to even hold the source code of modern software, let alone compile and run it, you need more technology than a single person can figure out.

                                                                                    C may be your point of reference, because it’s simple by contemporary standards, but it wasn’t a simple language back when the hardware was possible to comprehend by a single person. K&R C and single-pass C compilers for PDP-11 are unusable for any contemporary C programs, and C is too complex and bloated for 8-bit era computers.

                                                                                    1. 1

                                                                                      If GPT can do that for us then hey, I will gladly gladly welcome it. I’m not thinking about a post-apocalyptic scenario but I can see the relationship to it.

                                                                                    2. 1

                                                                                      I would also be careful about timespans here. computers haven’t been around for a century yet, so who knows what things will be like 100 years from now? I don’t even know if it’s possible to emulate an ENIAC and run old punch card code on modern hardware, that’s the sort of change we’ve seen in just 75y. maybe multicore x86 machines running windows/*nix/BSD will seem similarly arcane 300y from now.

                                                                                      1. 1

But why one person? I think we’ll still write software in teams in 2322, if we write software at all by that point instead of flying spaceships and/or farming turnips in radioactive wastelands. Software is written by teams today, and I think, if it needs to be rewritten, it will be rewritten by teams in the future.

                                                                                    3. 1

Wouldn’t a published standard be more important to future programmers? Go might be a wonderful language, but is there a standards document I can read from which an implementation could be written?

                                                                                  1. 8

                                                                                    I wish people wouldn’t refer to things like this as problems with Linux. The problems in the article have almost nothing to do with Linux:

                                                                                    • GNOME and KDE are both cross-platform desktop environments and the same problems exist whether you use them on Linux, *BSD, Solaris, or anything else.
                                                                                    • Android, which is (more or less) exclusively a Linux platform, and the most widely deployed Linux client platform by a few orders of magnitude, does not have these problems.
                                                                                    1. 9

                                                                                      Well, if distributions call themselves “Linux distros”, then there is no real visibility of it for casual users. Calling this problem out as “Every major Linux UI framework Accessibility: an unmaintained Mess” does not make things any better.

                                                                                      1. 4

Except that it’s not ‘Every major Linux UI framework’. The most widely deployed Linux UI framework is on Android, and it does not have these issues. There are over a billion Android devices in circulation. The combined number of KDE and GNOME installs is less than 1% of this. If you scope your complaint to Linux, you’re talking about an obscure niche. If you scope it to open source desktop environments, nothing you’re saying is specific to Linux.

                                                                                        1. 5

I think calling Android’s amalgamation of a kernel Linux is doing Linux a disservice. Nor can you really call Android a UI framework; it’s an OS that is based on Linux at most. But in reality, this doesn’t matter anyway, since the title conveys the message it needs to.

                                                                                          1. 5

                                                                                            To be honest, this is why I consider “GNU/Linux” an actually useful distinction and not GNU/Pedantry.

                                                                                            1. 4

                                                                                              I think calling Android’s amalgamation of a kernel Linux is doing Linux a disservice.

                                                                                              Linux is a trademark filed by Linus Torvalds and currently held by the Linux Foundation. It refers, specifically, to the kernel originally created by Linus Torvalds. Android is as much Linux as Ubuntu is Linux (i.e. a small amount, in both cases).

                                                                                              Nor can you really call Android a UI framework. It’s an OS that is based on Linux at most.

I am not sure what the UI framework that Android ships is called, or even if it has a separate name from ‘Android’. So far, most attempts to run Android’s userspace on non-Linux operating systems have failed. Android’s app model is far more closely tied to Linux than GNOME or KDE. It is fair to call Android’s UI framework a Linux UI framework because it is intimately tied to Linux-specific APIs and is not portable. It is far less fair to call GNOME or KDE Linux UIs because they are both portable codebases that have run on *BSD, Solaris, HURD (GNOME, at least) and even Windows (KDE natively, GNOME via X11).

                                                                                              But in reality, this doesn’t matter anyways, since the title brings the message it has to anyways.

                                                                                              The title is a lazy conflation, which both dilutes the Linux trademark by treating the word ‘Linux’ as a generic term and which erases non-Linux open-source operating systems that run exactly the same GUI stacks.

                                                                                        2. 5

                                                                                          I think that is quite a disingenuous take.

                                                                                          Do you get a more accessible Linux by not installing Gnome or KDE?

                                                                                          What is the Linux accessibility story if it is separate from Gnome or KDE?

                                                                                          If Android is a »Linux platform«, is iOS a BSD platform and does that mean that BSD has the industry’s best accessibility?

                                                                                          I would say that any accessibility afforded by Android is despite Linux, or totally unrelated to any work made by Linux’s accessibility experts.

                                                                                          1. 3

                                                                                            Do you get a more accessible Linux by not installing Gnome or KDE?

                                                                                            Yes, Linux’s pseudoterminal support works very well with braille terminals, for example.

                                                                                            What is the Linux accessibility story if it is separate from Gnome or KDE?

                                                                                            Braille or screen readers for the terminal, magnification in X compositing window managers, X11 screen reader interfaces.

                                                                                            If Android is a »Linux platform«, is iOS a BSD platform and does that mean that BSD has the industry’s best accessibility?

                                                                                            iOS is a BSD platform, yes, as is macOS. I’m not sure what the relevance is, because no one is writing articles about *BSD usability and talking only about iOS or only about GNOME/KDE on FreeBSD.

                                                                                            I would say that any accessibility afforded by Android is despite Linux, or totally unrelated to any work made by Linux’s accessibility experts.

                                                                                            Who are these ‘Linux’ accessibility experts? Linux is a kernel. Accessibility has almost nothing at all to do with the kernel.

                                                                                          2. 3

                                                                                            I don’t believe anyone who read the article in good faith would have come away with the confused belief that it was about poor accessibility in Linux-the-operating-system-kernel, or with the idea that somehow GNOME and KDE are somehow Linux-only.

                                                                                          1. 8

                                                                                            Reminds me of some IRC daemons doing case-insensitive comparisons for special characters and therefore treating nicknames like abcde|{} to be the same as ABCDE\[]. It’s indeed a property of the ASCII table and an XOR with 0x20 will flip those characters from one case to the other.

                                                                                            1. 18

                                                                                              Back before we had standardised 8-bit character sets with ASCII in the bottom half, we had standardised 7-bit character sets based on ASCII with “lesser used” characters replaced as needed. IRC was invented in Finland, and it turns out the ASCII variant used by Sweden and Finland replaces [\] with ÄÖÅ and {|} with äöå. So the IRC protocol defines those bytes as case-insensitively equal, and conforming implementations must do the same even you’re using an encoding that treats them as punctuation instead of letters.
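The XOR trick can be sketched in Go. The helper name `rfc1459Lower` is hypothetical, but the folding rule is the one described above: besides A–Z, the bytes `[ \ ]` fold to `{ | }` because the Finnish/Swedish ISO 646 variants place Ä Ö Å / ä ö å at those positions.

```go
package main

import "fmt"

// rfc1459Lower folds a nickname using IRC's "rfc1459" casemapping:
// A-Z fold to a-z, and [ \ ] fold to { | }, since those bytes are
// letters (ÄÖÅ/äöå) in the Finnish/Swedish ISO 646 variants.
func rfc1459Lower(s string) string {
	b := []byte(s)
	for i, c := range b {
		if c >= 'A' && c <= ']' { // 0x41-0x5D: A-Z plus [ \ ]
			// Bit 5 is clear for all of these bytes, so OR-ing it in
			// is the same as the XOR-with-0x20 trick.
			b[i] = c | 0x20
		}
	}
	return string(b)
}

func main() {
	fmt.Println(rfc1459Lower(`ABCDE\[]`)) // abcde|{}
}
```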

                                                                                              1. 2

                                                                                                Thanks. I didn’t know that!

                                                                                                1. 1

                                                                                                  I think that applies to most ISO-646 variants.

                                                                                                  1. 1

                                                                                                    The encoding used by Microsoft in Japan replaced \ with ¥, so Japanese users came to expect paths to look like C:¥Windows¥system32¥ etc.

                                                                                                1. 1

                                                                                                  It’s so easy to forget to close a file or a connection

                                                                                                  That’s a design choice of the http.Client. I wrote an HTTP library that makes it impossible to forget to close the connection. It’s just a matter of accepting a callback. There’s no need for any special language features here.

                                                                                                  Unreleased mutexes

                                                                                                  Ditto.

                                                                                                  Missing switch cases

                                                                                                  I’ve never seen this error in production. Is it really a common mistake, or is it an easy thing for the compiler to enforce? There are Go linters for it, for example.

                                                                                                  Invalid pointer dereference

                                                                                                  This happens, but .unwrap() also panics. I think the main thing here is to force the programmer to think about it.

                                                                                                  Uninitialized variables

                                                                                                  This is a weird one to complain about Go for. They’re all initialized to zero.
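For illustration, the zero-value rule in a nutshell (`config` is a made-up type): every declared variable starts at its type’s zero value — 0, "", false, or nil.

```go
package main

import "fmt"

// config is a made-up struct to show that zeroing applies recursively
// to every field, not just to top-level variables.
type config struct {
	Retries int
	Name    string
	Debug   bool
	Tags    []string
}

func main() {
	var n int    // 0
	var s string // ""
	var c config // all fields zeroed
	fmt.Println(n, s == "", c.Retries, c.Debug, c.Tags == nil)
}
```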

                                                                                                  Data races

                                                                                                  There’s the Go race detector to detect this in tests, but yes, it’s nice to catch it sooner. Is it worth the cost in development time though?

                                                                                                  Hidden Streams

                                                                                                  I don’t understand this one. If you use a type wrong, it does the wrong thing. The io types are one of Go’s best features.

                                                                                                  1. 1

                                                                                                    I’ve never seen this error in production. Is it really a common mistake, or is it an easy thing for the compiler to enforce? There are Go linters for it, for example.

                                                                                                    It definitely is, or there wouldn’t be linters for it. Even a catch-all default case has caused errors in code for me. Not crashes, but logic errors or UX errors which can only be found through manual testing.

                                                                                                    but .unwrap() also panics

                                                                                                    That’s the entire point. I wish it were spelt !! as in Kotlin, since that stands out more.

                                                                                                    They’re all initialized to zero

                                                                                                    This is nonsensical, and dangerous too. Just like a broad default case, it causes hard-to-detect logic errors.

                                                                                                    1. 1

                                                                                                      If you add a new field to a type, do you want to break all downstream libraries or not? The argument here seems to be, yes, break everyone dependent on your package because you want a new field. That’s a hard pill to swallow, IMO. So, if you don’t do that, what then? Go’s answer is, there’s a simple rule that all structs everywhere have the same zero value defaults, so if you add a new field, make it work with that default. Sometimes that ends up being awkward because you really want “BeSecure” to be true by default, so you name it “BeInsecure” so that it’s false by default, but for the most part it works out. If you really want to break your downstream, rename an existing field to WhateverV2 or only accept a V2Options struct. There are tons of ways of breaking code if that’s the goal.

                                                                                                      1. 1

                                                                                                        If you are adding a new field to a struct you’re vending to others, it is your responsibility to either communicate that it’s a breaking change, or vend a new struct, or add logic to handle a missing value. But you can’t conflate missing values with »zero« values. There is a difference between nil and an empty string, or nil and 0. For a particular type (dates, times, telephone numbers, addresses, music notes), a »zero« value will be pure nonsense or an error. This is something a sum type like Optional or Result can express, and it is an important distinction.

                                                                                                        1. 1

                                                                                                          Go has generics now, so nothing stops you from making a field Optional[time.Time] or whatever.

                                                                                                          1. 1

                                                                                                            How does that look without sum types?

                                                                                                            1. 1

                                                                                                              Optional doesn’t need a sum type. You have a value and a bool for validity. type Optional[T any] struct { value T; valid bool }.

                                                                                                              1. 1

                                                                                                                Does Result need a sum type?

                                                                                                                1. 1

                                                                                                                  No, because Result is just (T, error). An arbitrary sum type of T1, T2, T3… would require real sum types, which Go does not currently have.

                                                                                                  1. 4

                                                                                                    “As a user, you can force allow zooming”

                                                                                                    Isn’t this problem solved, then?

                                                                                                    1. 21

                                                                                                      No. Just because there’s an option to enable it, that doesn’t mean disabling it should be encouraged. Not everyone knows about the option, for one thing.

                                                                                                      1. 10

                                                                                                        You’ve identified a web browser UI design problem, which can be solved by the probably-less-than-double-digits number of teams developing popular web browsers, rather than by asking tens of millions of web content creators to change their behavior.

                                                                                                        1. 5

                                                                                                          Perhaps browser makers can treat it like a potentially undesirable thing. Similar to “(site) wants to know your location. Allow/Block” or “(site) tried to open a pop up. [Open it]”

                                                                                                          So: “(site) is trying to disable zooming. [Agree to Disable] [Never agree]” or similar.

                                                                                                        2. 8

                                                                                                          I think the better question is why can you disable this in the first place. It shouldn’t be possible to disable accessibility features, as website authors have time and time again proven to make the wrong decisions when given such capabilities.

                                                                                                          1. 3

                                                                                                            I mean, what’s an accessibility feature? Everything, roughly, is an accessibility feature for someone. CSS lets you set a font for your document. People with dyslexia may prefer to use a system font that is set as Dyslexie. Should it not be ok to provide a stylesheet that will override system preferences (unless the proper settings are chosen on the client)?

                                                                                                            1. 3

                                                                                                              Slippery slope fallacies aren’t really productive. There’s a pretty clear definition of the usual accessibility features, such as being able to zoom in, or metadata to aid screen readers. Developers should only be able to aid such features, not outright disable them.

                                                                                                              1. 6

                                                                                                                I think this is a misunderstanding of what “accessibility” means. It’s not about making things usable for a specific set of abilities and disabilities. It’s about making things usable for ALL users. Color, font, size, audio or visual modality, language, whatever. It’s all accessibility.

                                                                                                              2. 1

                                                                                                                https://xkcd.com/1172/

                                                                                                                (That said, I don’t understand why browsers let sites disable zoom at all.)

                                                                                                            2. 6

                                                                                                              Hi. Partially blind user here - I, for one, can’t figure out how to do this in Safari on iOS.

                                                                                                              1. 3

                                                                                                                “Based on some quick tests by me and friendly people on Twitter, Safari seems to ignore maximum-scale=1 and user-scalable=no, which is great”

                                                                                                                I think what the author is asking for is already accomplished on Safari. If it isn’t, then the author has not made a clear ask to the millions of people they are speaking to.

                                                                                                                1. 4

                                                                                                                  I am a web dev dilettante / newbie, so I will take your word for it. I just know that more and more web pages are becoming nearly impossible to view on mobile with my crazy-pants busted eyes, or wildly difficult enough to be equivalent to impossible in any case :)

                                                                                                                  1. 4

                                                                                                                    And that is a huge accessibility problem. This zoom setting is a huge accessibility problem.

                                                                                                                    My point is that the solution to this accessibility problem (and almost all accessibility problems) is to make the browser ignore this setting, not to ask tens of millions of fallible humans to update literally trillions of web pages.

                                                                                                                    1. 4

                                                                                                                      As another partially blind person, I fully agree with you. Expecting millions of developers and designers to be fully responsible for accessibility is just unrealistic; the platforms and development tools should be doing more to automatically take care of this. Maybe if the web wasn’t such a “wild west” environment where lots of developers roll their own implementations of things that should be standardized, then this wouldn’t be such a problem.

                                                                                                                      1. 2

                                                                                                                        Agreed. Front end development is only 50% coding. The rest is design, encompassing UX, encompassing human factors, encompassing accessibility. You can’t apply an “I’m just a programmer” or “works on my machine” mindset when your code is running on someone else’s computer.

                                                                                                                        1. 2

                                                                                                                          Developers and designers do have to be responsible for accessibility. I’m not suggesting that we aren’t.

                                                                                                                          But very often, the accessibility ask is either “Hey, Millions of people, don’t do this” or “Hey, three people, let me ignore it when millions of people do this”. And you’re much better off lobbying the three people that control the web browsers to either always, or via setting, ignore the problem.

                                                                                                              1. 1

                                                                                                                Didn’t Gates once say that / was the stupidest idea after the other stupid idea of using \ for folder paths?

                                                                                                                1. 1

                                                                                                                  Stupid in what way?

                                                                                                                    1. 3

                                                                                                                      You are generally unkind and this kind of behavior is not welcome, appreciated, invited or tolerated on this site.

                                                                                                                      Take the sass elsewhere.

                                                                                                                1. 8

                                                                                                                  C does not provide maps, and when I really need one I can implement it in less than 200 lines (it won’t be generic, though). Were I to design a language like Hare or Zig, providing an actual (hash) map implementation would be way down my list of priorities. Even if it belongs in the standard library, my first order of business would be to make sure we can implement that kind of thing.

                                                                                                                  In fact, Go made a mistake when it provided maps directly without providing general-purpose generics. That alone hinted at a severe lack of orthogonality. If maps have to be part of the core language, that means users can’t write one themselves, which means they probably can’t write many other useful data structures either. As Go’s authors originally did, you can fail to see that when the most common ones (arrays, hash tables…) are already part of the core language.

                                                                                                                  The most important question is not whether your language has maps; it’s whether users can add maps themselves. Because if they can’t, there’s almost certainly a much more serious root cause, such as the lack of generics.

                                                                                                                  1. 11

                                                                                                                    I think this is too one-sided a debate. Generics have benefits and drawbacks (to argue from authority, see https://nitter.net/graydon_pub/status/1036279571341967360).

                                                                                                                    Go’s original approach of providing just three fundamental generic data structures (vec, map, and chan) definitely was a worthwhile experiment in language design, and I have the feeling that it almost worked.

                                                                                                                    1. 10

                                                                                                                      At this point I’d argue that the benefits of even the simplest version of generics (not bounded, template-style or ML-functor-style, whatever) are so huge compared to the downsides, that it’s just poor design to create a new statically typed language without them. It’s almost like creating a language without function calls.

                                                                                                                      Go finally fixed that — which doesn’t fix all the other design issues like zero values or lack of sum types — but their initial set of baked-in generic structures was necessary to make the language not unbearable to use. If they hadn’t baked these in, who would use Go at all?

                                                                                                                      1. 3

                                                                                                                        other design issues like zero values

                                                                                                                        Could you share more here? I agree about Go generics, but its zero values are one thing I miss when using other imperative languages. They’re less helpful in functional languages, but I even miss zero values when using OCaml in an imperative style.

                                                                                                                        1. 5

                                                                                                                          Zero values are:

                                                                                                                          • not always something that makes sense (what’s the 0 value for a file descriptor? an invalid file descriptor, is what. For a mutex? same thing.) The criticism in a recent fasterthanlime article points this out well: Go makes up some weird rules about nil channels because it has to, instead of just… preventing channels from being nil ever.
                                                                                                                          • error prone: you add a field to a struct type, and suddenly you need to remember to update all the places you create this struct
                                                                                                                          • encouraging bad programming by not forcing definition to go with declaration. This is particularly true in OCaml, say: there are no 0 values, so you always have to initialize your variables. Good imperative languages might allow var x = undefined; (or something like that) but should still warn you if a path tries to read before writing to the field.
                                                                                                                          1. 3

                                                                                                                            nitpick: Go’s sync.Mutex has a perfectly valid and actually useful zero value: an unlocked mutex.

                                                                                                                            That said, I broadly agree with you; some types simply do not have a good default, and the best solution is not to fudge it and require explicit initialization.

                                                                                                                            @mndrix, note that there is a middle ground that gives you the best of both worlds: Haskell and Rust both have a Default type class/trait that can be defined for types for which it does make sense. Then you can just write

                                                                                                                            (in haskell):

                                                                                                                            let foo = def
                                                                                                                              in ...
                                                                                                                            

                                                                                                                            (or rust):

                                                                                                                            let foo = Default::default();
                                                                                                                            

                                                                                                                            Note you can even write this in Go, it just applies to more types than it should:

                                                                                                                            func Zero[T any]() T {
                                                                                                                               var ret T
                                                                                                                               return ret
                                                                                                                            }
                                                                                                                            
                                                                                                                            // Use:
                                                                                                                            foo := Zero[T]()
                                                                                                                            

                                                                                                                            You could well define some mechanism for restricting this to certain types, rather than just any. Unfortunately, it’s hard for me to see how you could retrofit this.

                                                                                                                            1. 1

                                                                                                                              Thank you for the correction!

                                                                                                                            2. 2

                                                                                                                              not always something that makes sense (what’s the 0 value for a file descriptor? an invalid file descriptor, is what. For a mutex? same thing.)

                                                                                                                              Partially agreed on a mutex (though on at least some platforms, a 0 value for a pthread mutex is an uninitialised, unlocked, mutex and will be lazily initialised on the first lock operation). If you bias your fd numbers by one then a 0 value corresponds to -1, which is always invalid and is a useful placeholder, but your example highlights something very important: the not-present value may be defined externally.

                                                                                                                              I saw a vulnerability last year that was a direct result of zero initialisation of a UID field. A zero value on *NIX means root. If you hit the code path that accidentally skipped initialising the field properly, then the untrusted thing would run as root. Similarly, on most *NIX systems (all that I know of, though POSIX doesn’t actually mandate this), fd 0 is stdin, which is (as you point out) a terrible default.

                                                                                                                              Any time you’re dealing with an externally defined interface, there’s a chance that either there is no placeholder value or there is a placeholder value and it isn’t 0.

                                                                                                                              1. 2

                                                                                                                                not always something that makes sense

                                                                                                                                Agreed. However, my experience is that zero values are sensible for roughly 90% of types, and Go’s designers made the right Huffman coding decision here.

                                                                                                                                The criticism in a recent fasterthanlime article points this out well: Go makes up some weird rules about nil channels because it has to, instead of just… preventing channels from being nil ever.

                                                                                                                                For anyone who comes along later, I think this is the relevant fasterthanlime article. Anyway, the behavior of nil and closed channels is well-grounded in the semantics of select with message passing, and quite powerful in practice. For me, this argument ends up favoring zero values, for channels at least.

                                                                                                                                you add a field to a struct type, and suddenly you need to remember to update all the places you create this struct

                                                                                                                                My experience has been that they’d all be foo: 0 anyway. Although in practice I rarely use struct literals outside of a constructor function in Go (same with records in OCaml) because I inevitably want to enforce invariants and centralize how my values are created. In both languages, I only have to change one place after adding a field.

                                                                                                                                by not forcing definition to go with declaration

                                                                                                                                The definition is there, but it’s implicit. I guess I don’t see much gained by having a repetitive = 0 on each declaration, like I often encounter in C.

                                                                                                                                1. 1

                                                                                                                                  what’s the 0 value for a file descriptor?

                                                                                                                                  standard input

                                                                                                                                  1. 2

                                                                                                                                    I hope you’re not serious. I mean, sure, but it makes absolutely no sense whatsoever that leaving a variable uninitialized just means “use stdin” (if it’s still open).

                                                                                                                                    1. 1

                                                                                                                                      File descriptor 0 is standard input on unix systems. (Unless you close it and it gets reused, of course, leading to fun bugs when code expects it to be standard input.)

                                                                                                                                      1. 1

                                                                                                                                        As ludicrous as it would be, it would be a natural default value to have for careless language implementers, and before you know it users come to expect it. Even in C, static variables are all zero initialised and using one on read(2) would indeed read from standard input.

                                                                                                                                        I’m sure we can point out various language quirks or weird idioms that started out that way.

                                                                                                                                    2. 1

                                                                                                                                      A zero-value file descriptor is invalid, sure, but a zero-value mutex is just an unlocked mutex. Why would that be invalid?

                                                                                                                                    3. 1

                                                                                                                                      There is no “zero” postal code, telephone number or user input.

                                                                                                                                  2. 7

                                                                                                                                    Thing is, Go not only is statically typed, it is garbage collected.

                                                                                                                                    As such, it is quite natural for it to use heap allocation for (almost) everything, and compensate for this with a generational GC. Now things get a little more complicated if they want to support natively sized integers (OCaml uses 31/62-bit integers to keep one bit to distinguish them from pointers, so the GC isn’t confused), but the crux of the issue is that when you do it this way, generics become dead simple: everything is a pointer, and that’s it. The size of objects is often just irrelevant. It may sometimes be a problem when you want to copy mutable values (so one might want an implicit size field), but for mere access, since everything is a pointer, size does not affect the layout of your containing objects.

                                                                                                                                    This is quite different from C++ and Rust, whose manual memory management and performance goals kinda force them to favour the stack and avoid pointers. So any kind of generic mechanism there will have to take into account the fact that every type might have a different size, forcing them to go to a specialization based template, which may be more complex to implement (especially if they want to be clever and specialise by size instead of by type).

                                                                                                                                    What’s clear to me is that Go’s designers didn’t read Pierce’s Types and Programming Languages, and the resulting ignorance caused them to fool themselves into thinking generics were complicated. No, they aren’t. It would have taken a couple of additional weeks to implement them at most, and that would have saved time elsewhere (for instance, they wouldn’t have needed to make maps a built-in type, and could have pushed them out to the standard library).

                                                                                                                                    I have personally implemented a small scripting language to pilot a test environment. Plus it had to handle all C numeric types, because the things it tests were low level. I went for static typing for better error reporting, local type inference to make things easier on the user, and added an x.f() syntax and a simple type based static dispatch over the first argument to get an OO feel. I quickly realised that some of the functions I needed required generics, so I added generics. It wasn’t perfect, but it took me like 1 week. I know that generics are simple.

                                                                                                                                    The reason the debate there is so one-sided is that Go should have had generics from the start. The benefits are enormous, the drawbacks very few. It’s not more complex for users who don’t use generics, generic data structures can still be used as if they were built in, it hardly complicates the implementation, and it improves orthogonality across the board.

                                                                                                                                    “LoL no generics” was the correct way to react, really.

                                                                                                                                    1. 7

                                                                                                                                      It still seems to me that you are overconfident in this position. That’s fanboyism from my side, but Graydon certainly read TAPL, and if Graydon says “there’s a tradeoff between expressiveness and cognitive load” in the context of Go’s generics, it does seem likely that there’s some kind of tradeoff there. Which still might mean “LoL no generics” is the truth, but not via a one-sided debate.

                                                                                                                                      Having covered meta issues, let me respond to specific points, which are all reasonable, but also are debatable :)

                                                                                                                                      First, I don’t think the GC/no-GC line of argument holds for Go, at least in its simple form. Go deliberately distinguishes value types from pointer types (up to having dedicated syntax for pointers), so the “generics are easy ‘cause everything is a pointer” argument doesn’t work. You might have said that in Go everything should have been a pointer, but that’s a more complex argument (especially in light of Java trying to move away from that).

                                                                                                                                      Second, “It’s not more complex for users who don’t use generics” – this I think is in general an invalid line of argument. It holds in a specific context: when you own the transitive closure of the code you are working with (handmade-style projects, or working on specific things at the base of the stack, like crypto libraries, alone or in a very small and tightly-knit team). For “industrial” projects (and that’s the niche for Go), users simply don’t have the luxury of ignoring parts of the language. If you work on an average codebase with >10 programmers and >100k lines of code, the codebase will use everything that is accepted by the compiler without warnings.

                                                                                                                                      Third, I personally am not aware of languages which solve the generics problem in a low-cognitive-load way. A survey:

                                                                                                                                      C++ is obviously pretty bad in terms of complexity – instantiation-time errors, up-until-recently a separate “weird machine” for compile-time computations, etc.

                                                                                                                                      Rust – it does solve inscrutable instantiation-time errors, but at the cost of a far more complex system, which is incomplete (waiting for GATs), doesn’t compose with other language features (async traits, const traits), and still includes a “weird machine” for compile-time evaluation.

                                                                                                                                      Zig. Zig is exciting – it fully and satisfactorily solves the “weird machine” problem by using the same language for compile-time (including parametric polymorphism) and run-time computation. It’s also curious in that, as far as I understand, it essentially “ignores TAPL” – there are no generics in the type system. It does, however, hit “instantiation-time errors” at full speed. It seems to me that building tooling for such a language would be pretty hard (it would be anti-Go in this sense). But yeah, Zig is so far one of the languages which might have solved generics.

                                                                                                                                      Haskell – it’s a bit hard to discuss what even is the generics implementation in Haskell, as it’s unclear which subset of pragmas we are talking about, but I think any subset generally leads to galactic-brain types.

                                                                                                                                      OCaml – with questionable equality semantics, and modular implicits coming soon, I think it’s clear that generics are not solved yet. It also has a separate language for functors, which seems pretty cognitively complex.

                                                                                                                                      Scala – complexity-wise, I think it’s a super-set of both Haskell and Java? I don’t have a working memory of Scala to suggest specific criticisms, but I tend to believe “Scala is complex” meme. Although maybe it’s much better in Scala 3?

                                                                                                                                      Java – Java’s type system is broken in a trivial way (covariant arrays) and a (couple of?) interesting ways (they extended type inference when they added lambdas, and that inference allowed materialization of some un-denotable types which break the type system). I am also not sure that it’s “LoL covariant arrays”, as some more modern typed languages also make this decision, citing reduction of cognitive load. And variance indeed seems to be quite a complex topic – Effective Java (I think?) spends quite some pages explaining “producer extends, consumer super”.

                                                                                                                                      C# – I know very little about C#, but it probably can serve as a counter example for “generics in GC languages are simple”. Like Go, C# has value types, and, IIRC, it implements generics by just-in-time monomorphisation, which seems to be quite a machinery.


                                                                                                                                      Now, what I think makes these systems complicated is not just parametric polymorphism, but bounded quantification. The desire to express not only <T>, but <T: Ord>. Indeed, to quote TAPL,

                                                                                                                                      This chapter introduces bounded quantification, which arises when polymorphism and subtyping are combined, substantially increasing both the expressive power of the system and its metatheoretic complexity.

                                                                                                                                      I do think that there’s an under-explored design space of non-bounded generics, and I very much agree with @c-cube. I am not quite convinced that it would work and that Go should have been SML without functors and with channels, but that also doesn’t seem obviously worse than just three generic types! The main doubt for me is that having both interfaces and unbounded generics feels weird. But yeah, once I have spare time for implementing a reasonably complete language, unbounded generics is what I’d go for!

                                                                                                                                      EDIT: forgot Swift, which absolutely tops my personal chart of reasons why adding generics to a language is not a simple matter: https://forums.swift.org/t/swift-type-checking-is-undecidable/39024.

                                                                                                                                      1. 1

                                                                                                                                        Graydon certainly read TAPL

                                                                                                                                        Ah. that’s one hypothesis down then. Thanks for the correction.

                                                                                                                                        It holds in specific context: when you own the transitive closure of the code you are working with (handmade-style projects, or working on specific things at the base of the stack, like crypto libraries, alone or in a very small and tightly-knit teams). For “industrial” projects (and that’s the niche for Go) user’s simply don’t have the luxury of ignoring parts of the language. If you work on an average codebase with >10 programmers and >100k lines of code, the codebase will use everything which is accepted by the compiler without warnings.

                                                                                                                                        OK, while I do have some experience with big projects, my best work by far was in smaller ones (including my crypto library, which by the way did not even need any generics to implement). What experience I do have with bigger projects, however, has shown me that most of the time (that is, as long as I don’t have to debug something), the only pieces of the language I have to care about are those used in the API of whatever I’m using. And those tend to be much more reasonable than whatever was needed to implement them. Take the C++ STL for an extreme example: when was the last time you actually specified the allocator of a container? Personally, I’ve never done it in over 10 years of being paid to work with C++.

                                                                                                                                        I personally am not aware of languages which solve generics problem in a low cognitive-load way

                                                                                                                                        I have written one. Not public, but that language I’ve written for test environments? It had generics (unbounded, no subtyping), and I didn’t even tell my users. I personally needed them to write some of the functions of the standard library, but once that was done, I thought users would not really need them. (Yeah, it was easier to add full blown generics than having a couple ad-hoc generic primitives.)

                                                                                                                                        Now, what I think makes these systems complicated is not just parametric polymorphism, but bounded quantitication. The desire to express not only <T>, but <T: Ord>.

                                                                                                                                        Yeah, about subtyping…

                                                                                                                                        In the code I write, which I reckon has been heavily influenced by an early exposure to OCaml (without the object part), I almost never use subtyping. Like, maybe 3 or 4 times in my entire career, two of which in a language that didn’t have closures (C++98 and C). If I design a language for myself, subtyping will be way down my list of priorities. Generics and closures will come first, and with closures I’ll have my poor man’s classes in the rare cases I actually need them. Even in C I was able to add virtual tables by hand that one time I had to have subtype polymorphism (It’s in my crypto library, it’s the only way I found to support several EdDSA hashes without resorting to a compilation flag).

                                                                                                                                        I’ve heard of subtyping being successfully used elsewhere. Niklaus Wirth took tremendous advantage of it in his Oberon programming language and operating system. But I believe he didn’t have parametric polymorphism or closures either, and in this case I reckon subtyping & class-based polymorphism are an adequate substitute.

                                                                                                                                        Type classes (or traits) however are really enticing. I need more practice to have a definite opinion on them, though.

                                                                                                                                        1. 2

                                                                                                                                          Ah. that’s one hypothesis down then. Thanks for the correction.

                                                                                                                                          To clarify, graydon, as far as I know, didn’t participate in Go’s design at all (he designed Rust), so this has no bearing on your assumption about designers of Go.

                                                                                                                                          1. 2

                                                                                                                                            Crap! Well, at least his argument has value.

                                                                                                                                      2. 2

                                                                                                                                        If generics are simple, can you read this issue thread and tell everyone else how to fix comparable to be consistent? TIA.

                                                                                                                                        1. 2

                                                                                                                                          Generics are simple under a critical condition: Design them from the ground up

                                                                                                                                          Done after the fact, in a system not designed for them, of course they’re going to be difficult. Then again, why waste a couple of weeks of up-front design when you can instead have years of waiting and months of pain?

                                                                                                                                          1. 3

                                                                                                                                            I don’t agree that generics are simple, but I agree that designing them in from the start is vastly easier than trying to retrofit them to an existing language. There are a lot of choices in how generics interact with your type system (especially in the case of a structural type system, which Go has, and even more so in an algebraic type system). If you build generics in as part of your type system from the start then you can explore the space of things allowed by generics and the other features that you want. If you don’t, then you may find that the point that you picked in the space of other things that you want is not in the intersection of that and generics.

                                                                                                                                            1. 0

                                                                                                                                              I don’t agree that generics are simple

                                                                                                                                              I kinda had to change my mind on that one. Generics can be very simple in some contexts (like my own little language), but I see now that Go wasn’t one of them.

                                                                                                                                              If you build generics in as part of your type system from the start then you can explore the space of things allowed by generics and the other features that you want. If you don’t, then you may find that the point that you picked in the space of other things that you want is not in the intersection of that and generics.

                                                                                                                                              One thing I have taken for granted since I left college in 2007 is that we want generics. If your language is even slightly general-purpose, it will need generics. Even my little specialised language: I didn’t plan to add generics, but some functions in my standard library required them. So the idea of even attempting to design a language without generics feels like an obvious waste of time to me. Likewise for closures and sum types, by the way.

                                                                                                                                              There is one thing that could make me change my mind: systematic experimentation in my niche of choice. I design my language, and I write a real program with it, trying to use as few features as possible. For instance, my experience writing a cryptographic library in C convinced me such libraries don’t need generics at all. Surprisingly though, I did see a case for subtype polymorphism on some occasions, but that happens rarely enough that an escape hatch like writing your vtable by hand is good enough. I believe Jonathan Blow is doing something similar for his gaming language, and Niklaus Wirth definitely did the same for Pascal (when he devised Modula and Oberon).

                                                                                                                                              1. 2

                                                                                                                                                One thing I take for granted since I went out of college in 2007, is that we want generics. If your language is even slightly general purpose, it will need generics

                                                                                                                                                I completely agree here. I wrote a book about Go that was finished just after Go reached 1.0 and, even then, I thought it was completely obvious that once you’ve realised that you need generic maps you should realise that you will need other generic types. Having maps (and arrays and slices) as special-case generics felt like a very bad decision.

                                                                                                                                                Mind you, I also thought that having a language that encouraged concurrency and made data races undefined behaviour, but didn’t provide anything in the type system to allow you to define immutable types or to limit aliasing was also a terrible idea. It turns out that most of the folks I’ve spoken to who use Go use it as a statically compiled Python replacement and don’t use the concurrency at all. There was a fantastic paper at ASPLOS a couple of years back that looked at concurrency bugs in Go and found that they are very common.

                                                                                                                                                1. 1

                                                                                                                                                  I think Oberon and Go have a lot in common. Both were designed by an experienced language constructor late in his career, emphasising “simplicity” over everything else, leaving out even basic things such as enums, even though he had created more expressive languages earlier on.

                                                                                                                                                  1. 2

                                                                                                                                                    I tend to be more sympathetic to Wirth’s decisions, because he was working in a closed ecosystem he completely controlled and understood. I mean, they were like less than 5 people working on the entire OS + compiler + main applications, and in the case of the Lilith computer, and FPGA Oberon, even the hardware!

                                                                                                                                                    He could apply criteria such as “does this optimisation make the entire compiler bootstrap faster?” Here it’s for speed, but I can see the idea that every single piece of complexity has to pay for itself. It was not enough for a feature to be beneficial; the benefits had to outweigh the costs. And he could readily see when they did, because he could fit the entire system in his head.

                                                                                                                                                    Go is in a different situation, where from the start it was intended for a rather large audience. Thus, the slightest benefit to the language, even if it comes at a significant up-front cost, is liable to pay huge dividends as it becomes popular. So while I can believe even generics may not be worth the trouble for a 10K LOC system (the size of the Oberon system), that’s a different story entirely when people collectively write hundreds of millions of lines of code.

                                                                                                                                                    1. 2

                                                                                                                                                      The best characterisation of Go and Oberon I can come up with is »stubborn«.

                                                                                                                                          2. 2

                                                                                                                                            While Go is garbage collected, it is still very much “value oriented” in that it gives control to the programmer for the layout of your memory, much like C++ and Rust. Just making everything a pointer and plugging your ears to the issues that brings isn’t solving the problem.

                                                                                                                                            I’m glad that you added generics to your small scripting language for a test environment in 1 week. I don’t think that speaks much to the difficulty of adding them to a different language with a different type system and different goals. When they started the language, things like “very fast compile times” were a very high priority, with C++’s template system heavily motivating that goal. It would be inane to start a project to avoid a problem in one language and then cause the exact same problem in the new one.

                                                                                                                                            So, they didn’t want to implicitly box everything based on their experience with Java, and they didn’t want to template everything based on their experience with C++. Finding and implementing a middle ground is, in fact, difficult. Can you name any languages that avoid the problems described in https://research.swtch.com/generic?

                                                                                                                                            The problem with “lol no generics” is that the word “generics” sweeps the whole elephant under the rug. There are a fractal of decisions to make when designing them with consequences in the type system, runtime, compiler implementation, programmer usability, and more. I can’t think of any languages that have the exact same generics system. Someone who prefers Rust generics can look at any other language and say “lol no traits”, and someone who prefers avoiding generics entirely (which, I promise, is a reasonable position to hold) may look around and say “lol 2 hour compile times”. None of those statements advance any conversation or are a useful way to react.

                                                                                                                                            1. 1

                                                                                                                                              I can’t think of any languages that have the exact same generics system. Someone who prefers Rust generics can look at any other language and say “lol no traits”,

                                                                                                                                              Aren’t Rust traits analogous to Swift protocols?

                                                                                                                                              1. -1

                                                                                                                                                While Go is garbage collected, it is still very much “value oriented” in that it gives control to the programmer for the layout of your memory, much like C++ and Rust.

                                                                                                                                                That kind of changes everything… Oops. (Really, I mean it.)

                                                                                                                                                When they started the language, things like “very fast compile times” where very high priority, with the template system of C++ generics heavily inspiring that goal.

                                                                                                                                                Several things conspire to make C++ compile times slow: the undecidable grammar, the complex templates, and the header files. Sure, we have pre-compiled headers, but in general headers are just copy-pasted code that is parsed and analysed over and over and over again. I’ve seen a bloated header-only library add a full second of compilation time per .cpp file. And it was a logging library, so it was included everywhere. Properly isolating it solved the problem, and the overhead was reduced to one second for the whole project.

                                                                                                                                                A simpler grammar is parsed basically instantly. Analysis may be slower depending on how advanced static checks are, but at least in a reasonable language it only has to happen once. Finally there’s code generation for the various instantiations, and that may take some time if you have many types to instantiate. But at least you don’t have to repeat the analysis, and the most efficient optimisations don’t take that much compilation time anyway.

                                                                                                                                                1. 3

                                                                                                                                                  The undecidable grammar, the complex templates, and the header files. Sure we have pre-compiled headers, but in general, those are just copy pasta that are being parsed and analysed over and over and over again.

                                                                                                                                                  The parsing and analysis isn’t the whole problem. The C++ compilation model is an incremental evolution of Mary Allen Wilkes’ design from the ’70s, which was specifically designed to allow compiling complex programs on machines with around 2 KiB of memory. Each file is compiled separately and then pasted together in a completely separate link phase. In C++, inline and templated functions (including methods on templated classes) are emitted in every compilation unit that uses them. If 100 files use std::vector<int>::push_back, then 100 instances of the compiler will create that template instantiation (including semantic analysis), generate IR for it, optimise it, and (if they still have calls to it left after inlining) spit out a copy of it in a COMDAT section in the final binary. All but one copy will then be discarded at link time.

                                                                                                                                                  Sony has done some great work on a thing that they call a ‘compilation database’ to address this. In their model, when clang sees a request for std::vector<int>::push_back, it queries a central service to see if it’s already been generated. It can skip the AST generation and pull the IR straight from the service. Optimisers can then ignore this function except for inlining (and can provide partially optimised versions to the database). A single instance is emitted in the back end. This gives big compile time speedups, without redesigning the language.

                                                                                                                                                  It’s a shame that the Rust developers didn’t build on this model. Rust has a compilation model that’s more amenable to this kind of (potentially distributed) caching than C++.

                                                                                                                                              2. 1

What’s clear to me is that Go’s designers didn’t read Pierce’s Types and Programming Languages,

                                                                                                                                                I’ll just leave this here https://www.research.ed.ac.uk/en/publications/featherweight-go

                                                                                                                                                1. 0

                                                                                                                                                  I meant back when Go first came out. That was over 12 years ago, in 2009. This paper is from 2020.

                                                                                                                                                  Nevertheless, the present thread significantly lowered my confidence in that claim. I am no longer certain Go designers failed to read Pierce’s work or equivalent, I now merely find it quite plausible.

                                                                                                                                                2. 1

What’s clear to me is that Go’s designers didn’t read Pierce’s Types and Programming Languages . . .

Do you really think that Ken Thompson, Rob Pike, and Robert Griesemer were ignorant to this degree? That they made the design decisions they did based on a lack of theoretical knowledge?

                                                                                                                                                  1. 2

None of them is known for statically typed functional languages, and I know for a fact there is little cross-talk between that community and the rest of the world. See Java, designed in 1995, twenty years after ML showed the world not only how to do generics, but how neat sum types are. Yet Java’s designers chose to have null instead, and generics came only years later. Now I kinda forgive them for not including generics at a time they likely believed class-based polymorphism would replace parametric polymorphism (a.k.a. generics), but come on, adding null when we have a 20-year-old better alternative out there?

So yeah, ignorance is not such an outlandish hypothesis, even with such people. (Edit: apparently one of them did read TAPL, so that should falsify the ignorance hypothesis after all.)

                                                                                                                                                    But that’s not my only hypothesis. Another possibility is contempt for their users. In their quest for simplicity, they may have thought the brains of Go programmers would be too weak to behold the frightening glory of generics. That instead they’d stay in the shelter of more familiar languages like Python or C. I’m not sure how right they may have been on that one to be honest. There are so many people that don’t see the obvious truth that programming is a form of applied maths (some of them explicitly fled maths), that I can understand they may panic at the first sight of an unspecified type. But come on, we don’t have to use generics just because they’re there. There’s no observable difference between a built in map and one that uses generics. Users can use the language now, and learn generics later. See how many people use C++’s STL without knowing the first thing about templates.

                                                                                                                                                    Yet another hypothesis is that they were in a real hurry, JavaScript style, and instead of admitting they were rushed, they rationalised the lack of generics like it was a conscious decision. Perhaps they were even told by management to not admit to any mistake or unfinished job.

                                                                                                                                                    1. 6

                                                                                                                                                      But that’s not my only hypothesis. Another possibility is contempt for their users. … Yet another hypothesis is that they were in a real hurry, JavaScript style, and instead of admitting they were rushed, they rationalised the lack of generics like it was a conscious decision. Perhaps they were even told by management to not admit to any mistake or unfinished job.

                                                                                                                                                      This is exhausting and frustrating. It’s my own fault for reading this far, but you should really aim for more charity when you interpret others.

                                                                                                                                                      1. 3

                                                                                                                                                        In the words of Rob Pike himself:

                                                                                                                                                        The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                                                                                                        I’d say that makes it quite clear.

                                                                                                                                                        1. 0

                                                                                                                                                          Ah, I didn’t remember that quote, thank you. That makes the contempt hypothesis much more plausible.

That being said, there’s a simple question of fact that is very difficult to ascertain: what is the “average programmer” capable of understanding and using, and at what cost? I personally have a strong intuition that generics don’t introduce unavoidable complexity significant enough to make people’s lives harder, but I’m hardly aware of any scientific evidence to that effect.

We need psychologists and sociologists to study this.

                                                                                                                                                        2. 1

I’ve run out of charitable interpretations, to be honest. Go’s designers made a mistake, plain and simple. And now that generics have been added, that mistake has mostly been fixed.

                                                                                                                                                          1. 1

                                                                                                                                                            I’m surprised you’re saying this after noticing that you didn’t know basic things about Go’s type system and implementation and learning those details “changes everything” (which, in my opinion, is commendable). Indeed, you’ve also apparently learned many facts about the authors and their history in this thread. Perhaps this is a good moment to be reflective about the arguments you’re presenting, with how much certainty you’re presenting them, and why.

                                                                                                                                                            1. 2

                                                                                                                                                              A couple things:

                                                                                                                                                              • I still think that omitting generics from a somewhat general purpose language past 2005 or so is a mistake. The benefits are just too large.
                                                                                                                                                              • I’ve seen comments about how the standard library itself had to jump through some hoops that wouldn’t be there if Go had generics from the start. So Go authors did have some warning.
                                                                                                                                                              • Go now has generics, even though adding them after the fact is much harder. There can be lots of reasons for this change, but one of them remains an admission of guilt: “oops we should have added generics, here you are now”.

                                                                                                                                                              So yeah, I still believe beyond reasonable doubt that omitting generics back then was a mistake.

                                                                                                                                                        3. 3

All of your “analyses” are rooted in a presumption of ignorance, or malice, or haughty superiority, or some other bad-faith foundation. Do you really think that’s the truth of the matter?

                                                                                                                                                          There are so many people that don’t see the obvious truth that programming is a form of applied maths

                                                                                                                                                          Some programming is a form of applied math. Most programming, as measured by the quantity of code which exists and is maintained by human beings, is not. Most programming is the application of computational resources to business problems. It’s imperative, it’s least-common-denominator, and it’s boring.

                                                                                                                                                          1. 1

                                                                                                                                                            All of your “analyses” are rooted in a presumption of ignorance, or malice, or haughty superiority, or some other bad-faith foundation. Do you really thing that’s the truth of the matter?

                                                                                                                                                            The only way it’s false is if omitting generics was the right thing to do. I don’t believe that for a second. It was a mistake, plain and simple. And what could possibly cause mistakes, if not some form of incompetence or malice?

                                                                                                                                                            Most programming is the application of computational resources to business problems. It’s imperative, it’s least-common-denominator, and it’s boring.

                                                                                                                                                            It’s also maths. It’s also the absolutely precise usage of a formal notation that ends up being transformed into precise instructions for an (admittedly huge) finite state machine. Programs are still dependency graphs, whose density is very important for maintainability — even the boring ones.

                                                                                                                                                            It’s not the specific kind of maths you’ve learned in high school, but it remains just as precise. More precise in fact, given how unforgiving computers are.

                                                                                                                                                            1. 2

                                                                                                                                                              The only way it’s false is if omitting generics was the right thing to do. I don’t believe that for a second.

“The right thing to do” is a boolean outcome of some function. That function doesn’t have a single objective definition; it’s variadic over context. Can you not conceive of a context in which omitting generics was the right thing to do?

                                                                                                                                                              1. 2

                                                                                                                                                                I see some:

1. Designing a language before Y2K. Past 2005, generics were too well known to ignore.
                                                                                                                                                                2. Addressing a specific niche for which generics don’t buy us much.
                                                                                                                                                                3. Generics are too difficult to implement.
                                                                                                                                                                4. Users would be too confused by generics.
                                                                                                                                                                5. Other features incompatible with generics are more important.

                                                                                                                                                                Go was designed too late for (1) to fly with me, and it is too general purpose for (2). I even recall seeing evidence that its standard library would have significantly benefited from generics. I believe Rust and C++ have disproved (3) despite Go using value types extensively. And there’s no way I believe (4), given my experience in OCaml and C++. And dammit, Go did add generics after the fact, which disavows (4), mostly disproves (3), and utterly destroys (5). (And even back then I would have a hard time believing (5), generics are too important in my opinion.)

So yeah, I can come up with various contexts where omitting generics is the right thing to do. What I cannot do is find one that is plausible. If you can, I’m interested.

                                                                                                                                                                1. 1

                                                                                                                                                                  [Go] is too general purpose for (2). I even recall seeing evidence that its standard library would have significantly benefited from generics.

                                                                                                                                                                  You don’t need to speculate about this stuff, the rationale is well-defined and recorded in the historical record. Generics were omitted from the initial release because they didn’t provide value which outweighed the cost of implementation, factoring in overall language design goals, availability of implementors, etc. You can weight those inputs differently than the authors did, and that’s fine. But what you can’t do is claim they were ignorant of the relevant facts.

                                                                                                                                                                  I believe Rust and C++ have disproved (3)

A language is designed as a whole system, and its features define a vector-space that’s unique to those features. Details about language L1 don’t prove or disprove anything about language L2. The complexity of a given feature F1 in language L1 is completely unrelated to any property of that feature in L2. So any subjective judgment of Rust has no impact on Go.

                                                                                                                                                                  Go did add generics after the fact, which disavows (4), mostly disproves (3), and utterly destroys (5).

                                                                                                                                                                  Do you just not consider the cost of implementation and impact on the unit whole as part of your analysis? Or do you weight these things so minimally as to render them practically irrelevant?

                                                                                                                                                                  Generics did not materially impact the success of the goals which Go set out to solve initially. Those goals did not include any element of programming language theory, language features, etc., they were explicitly expressed at the level of business objectives.

                                                                                                                                                                  1. 1

                                                                                                                                                                    You don’t need to speculate about this stuff, the rationale is well-defined and recorded in the historical record.

                                                                                                                                                                    What record I have read did not convince me. If you know of a convincing article or discussion thread, I’d like to read it. A video would work too.

                                                                                                                                                                    I believe Rust and C++ have disproved (3)

                                                                                                                                                                    A language is designed as a whole system […]

I picked Rust for a specific reason: manual memory management, which means value types everywhere, and the difficulties they imply for generics. That said, I reckon that Go had the additional difficulty of having subtyping. But here’s the thing: in a battle between generics and subtyping, if implementing both is too costly, I personally tend to sacrifice subtyping. In a world of closures, subtyping and subtype polymorphism simply are not needed.

                                                                                                                                                                    Do you just not consider the cost of implementation and impact on the unit whole as part of your analysis?

I’m not sure what you mean there… I think pretty much everyone agrees that designing and implementing generics up front is much easier than doing so after the fact, in a system not designed for them. If the Go team/community were able to shoulder the much higher cost of after-the-fact generics, then they almost certainly could have shouldered the cost of up-front generics back then, even though the team was much smaller.

                                                                                                                                                                    Generics did not materially impact the success of the goals which Go set out to solve initially.

                                                                                                                                                                    Well if they just wanted to have a big user base, I agree. The Google brand and the reputation of its designers did most of that work. As for real goals, they’re the same as any language: help the target audience write better programs for cheaper in the target niche. And for this, I have serious doubts about the design of Go.

Now as @xigoi pointed out, Go’s authors targeted noobs. That meant making the language approachable by people who don’t know the relevant theory. That didn’t mean making the language itself dumb. Just because users can’t understand your brilliant language doesn’t mean they won’t be able to use it. See every C++ tutorial ever, where you’re introduced to its features bit by bit. For instance, when we learn I/O in C++ we don’t get taught about operator overloading (&lt;&lt; and &gt;&gt; magically work on streams, and we don’t need to know why just yet). Likewise we don’t learn template meta-programming when we first encounter std::vector.

                                                                                                                                                                    People can work with generics before understanding them. They won’t write generic code just yet, but they absolutely can take advantage of already written code. People can work with algebraic data types. They won’t write those types right away, but they can absolutely take advantage of the option type for the return value of functions that may fail.

A language can be brilliant and approachable. Yet Go explicitly chose to be dumb, as if that were the only way to be easy to work with. Here’s the thing though: stuff like the lack of generics and sum types tends to make Go harder to work with. Every time someone needed a generic data structure, they had to sacrifice type safety and resort to various conversions to and from the empty interface. Every time someone needed to report failures to the caller, they ended up returning multiple values, making things not only cumbersome, but also fairly easy to miss; with sum types at least the compiler warns you when you forget a case.

It’s all well and good to design a language for other people to use, but did they study the impact of their various decisions on their target audience? If I’m writing a language for myself I can at least test it on myself, see what feels good, what errors I make, how fast I program… and most of my arguments will be qualitative. But if I’m writing for someone else, I can’t help but start out with preconceived notions about my users. Maybe even a caricature. At some point we need to test our assumptions.

Now, having said that, such studies are bloody expensive, so I’m not sure what the solution is there. When I made my little language, I relied on preconceived notions too. We had the requirements of course, but all I knew was that my users weren’t programmers by trade. So I made something that tries its best to get out of their way (almost no explicit typing by default), and reports errors early (static typing rules). I guess I got lucky, because I was told later that they were happy with my language (and home-grown languages have this reputation for being epically unusable).

                                                                                                                                                                    1. 1

But here’s the thing: in a battle between generics and subtyping, if implementing both is too costly, I personally tend to sacrifice subtyping. In a world of closures, subtyping and subtype polymorphism simply are not needed.

                                                                                                                                                                      Do you consider generics and subtyping and polymorphism and other programming language properties means to an end, or ends in themselves?

                                                                                                                                                                      1. 0

                                                                                                                                                                        Of course they’re a means to an end, why do you even ask?

In the programs I write, I need inheritance or subtyping maybe once a year. Rarely enough that using closures as a poor man’s classes is adequate. Heck, even writing the odd virtual table in C is enough in practice.

Generics, however, were much more useful to me, for two purposes: first, whenever I write a new data structure or container, it’s nice to have it work on arbitrary data types. For standard libraries it is critical: you’ll need what, arrays &amp; slices, hash tables, maybe a few kinds of trees (red/black, AVL…).

I don’t write those on a day-to-day basis, though. I’ve also grown sympathetic to Mike Acton’s arguments that being generic often causes more problems than it solves, at least when speed matters. One’s got to shape one’s program to one’s data, and that makes generic data structures much less useful.

                                                                                                                                                                        My second purpose is less visible, but even more useful: the right kind of generics help me enforce separation of concerns to prevent bugs. That significantly speeds up my development. See, when a type is generic you can’t assume anything about values of that type. At best you can copy values around. Which is exactly what I’m looking for when I want to isolate myself from that type. That new data structure I’m devising just for a particular type? I’m still going to use generics if I can, because they make sure my data structure code cannot mess with the objects it contains. This drastically reduces the space of possible programs, which is nice when the correct programs are precisely in that space. You can think of it as defining bugs out of existence, like Ousterhout recommends in A Philosophy of Software Design.

                                                                                                                                                                    2. 1

                                                                                                                                                                      A language is designed as a whole system, and its features define a vector-space that’s unique to those features.

                                                                                                                                                                      How do you add two languages, or multiply a language by an element of a field?

                                                                                                                                                  1. 37

                                                                                                                                                    You really do need one, but sadly, you are trying to use a hypertext document technology instead.

                                                                                                                                                    1. 15

                                                                                                                                                      I think one of the things that Go got right is figuring out what the core data types needed for a modern language are. I wrote this in 2013, and I think it holds up:

                                                                                                                                                      The designers of Go agree with the collective experience of the last twenty years of programming that there are three basic data types a modern language needs to provide as built-ins: Unicode strings, variable length arrays (called “slices” in Go), and hash tables (called “maps”). Languages that don’t provide those types at a syntax level cannot be called modern anymore.
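As a quick illustration (my own sketch, not from the quoted 2013 post), here is what those three built-ins look like at the syntax level in Go:

```go
package main

import "fmt"

func main() {
	// Unicode string: len counts bytes; ranging yields runes.
	s := "héllo"

	// Slice: a variable-length view over a backing array.
	nums := []int{1, 2, 3}
	nums = append(nums, 4)

	// Map: the built-in hash table.
	ages := map[string]int{"ada": 36}
	ages["grace"] = 45

	fmt.Println(len(s), nums, ages["grace"])
}
```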

                                                                                                                                                      I think a lot of people would like to add Optional/Maybe and sum types to that list of core data types. In any event, you’re going to get pushback if you try to release a new language without an easy way to create those types.

                                                                                                                                                      1. 1

                                                                                                                                                        Languages that don’t provide those types at a syntax level

                                                                                                                                                        What’s wrong with providing them in the standard library, without special syntax?

                                                                                                                                                        1. 1

                                                                                                                                                          Even C has special syntax for strings and arrays (considering that strings in C are just 0-terminated character arrays, it could have done without string literals). I think Kotlin has it only for strings, but it makes the code more verbose.

                                                                                                                                                          1. 1

You could provide hash tables via the standard library, I guess, but I don’t see how you could possibly do strings or arrays without some special syntax.

                                                                                                                                                        1. 30

                                                                                                                                                          What this article fails to mention is that none of the popups demonstrated are necessary, and hence of dubious legality. Rather than design a clear cookie consent banner, just defer displaying it until it is necessary. And no, your Google AdWords cookie is not necessary.

                                                                                                                                                          It is also ironic that the article itself is covered by a floating banner:

                                                                                                                                                          To make Medium work, we log user data. By using Medium, you agree to our Privacy Policy, including cookie policy.

                                                                                                                                                          1. 23

Plus the idea that the reason these bad dialogs exist is that no one’s designed a better one is just … hopelessly misguided. Offering a better alternative won’t make sites switch to it, because they’re doing what they’re doing now because it’s bad, not because it’s good.

                                                                                                                                                            1. 2

                                                                                                                                                              Yes. It’s basically a form of civil disobedience.

                                                                                                                                                              Basically operating on game theory, hoping the other sites don’t break rank

                                                                                                                                                              1. 3

                                                                                                                                                                It’s basically a form of civil disobedience.

                                                                                                                                                                Uhhhh… that’s an odd analogy.

                                                                                                                                                                Civil disobedience is what you do when the law is unjust or immoral; this is more like “the law doesn’t allow us to be as profitable as we would like so we are going to ignore it.”

                                                                                                                                                                1. 1

                                                                                                                                                                  Civil disobedience doesn’t have to be ethically or morally just.

                                                                                                                                                                  civil disobedience, also called passive resistance, the refusal to obey the demands or commands of a government or occupying power, without resorting to violence or active measures of opposition; its usual purpose is to force concessions from the government or occupying power.

                                                                                                                                                                  1. 1

                                                                                                                                                                    Hm; never thought about it that way but fair point! It feels a bit off to compare corporate greed with like … Gandhi, but technically it fits.

                                                                                                                                                            2. 7

Yeah, I find publishing an article like this through Medium a bit ironic and hypocritical.

                                                                                                                                                            1. 5

                                                                                                                                                              This sounds more like an anti-Java rant than an anti-OOP rant, and ten years late to the party at that.

                                                                                                                                                              And claiming that Go somehow isn’t object-oriented is just silly.

                                                                                                                                                              1. 1

                                                                                                                                                                I remember Don Melton bitching about how hard it was to define the initial UA string for Safari when it launched in 2002. Lots of little tweaks, each of which made some important must-support websites work correctly but broke others.

                                                                                                                                                                (BTW I doubt the article’s claim that terminal-mode browsers like Lynx had significant effect on Web adoption. Not everyone had a Web-capable GUI in the mid-90s, but the people who didn’t were overwhelmingly running Windows 3.1 or DOS, and I don’t remember there being a version of Lynx for DOS. Besides, the tech-savvy early adopters who actually got online on the WWW that early were also the same early adopters of Windows 95. Meanwhile, the people in academia mostly had their Sun/Apollo/SGI/NeXT/whatever workstations running X.)

                                                                                                                                                                1. 1

You didn’t need Lynx for MS-DOS. A lot of people were using shell accounts in those days, running Lynx on their ISP’s or university’s host.

                                                                                                                                                                1. 5

                                                                                                                                                                  I agree with some criticisms. Some are just subjective opinions. Others I think are wrong or missing the big picture.

                                                                                                                                                                  Here we go:

                                                                                                                                                                  The loop iteration variable is reused across iterations

                                                                                                                                                                  Yes, this is bad. I hope it can be fixed by using the Go module version pragma.
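For anyone who hasn’t hit it, the problem is that closures created in a loop all capture the same variable (before Go 1.22, as selected by the go directive in go.mod). A sketch of the classic workaround:

```go
package main

import "fmt"

func main() {
	var fns []func() int

	for i := 0; i < 3; i++ {
		i := i // per-iteration copy: the classic workaround
		fns = append(fns, func() int { return i })
	}

	// With the copy this prints 0, 1, 2 on any version;
	// without it, pre-1.22 code prints 3, 3, 3.
	for _, f := range fns {
		fmt.Println(f())
	}
}
```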

                                                                                                                                                                  defer inside a block executes not at the end of the block, but at the end of the enclosing function.

                                                                                                                                                                  Sort of subjective. I can see the argument for either way.

                                                                                                                                                                  defer evaluates sub-expressions eagerly.

                                                                                                                                                                  I disagree with this as a criticism. You want it to evaluate the arguments eagerly because you might change the arguments later in the function! It would be a pain if it didn’t work like this.
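To make the point concrete, here’s a small demo (my own example) showing that a deferred call’s arguments are evaluated at the defer statement, not when the call finally runs:

```go
package main

import "fmt"

// demo returns the value the deferred call observed.
func demo() (observed int) {
	x := 1
	defer func(v int) { observed = v }(x) // v is fixed to 1 right here
	x = 42                                // this later change is not seen
	_ = x
	return // the deferred call runs now and sets observed to 1
}

func main() {
	fmt.Println(demo()) // prints 1
}
```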

                                                                                                                                                                  Errors on unused variables are annoying as well.

                                                                                                                                                                  I can’t say that I run into this very often. Typically if my code is complete enough to test, the unused variables are gone. I guess I’m just used to doing the _ = x dance now, so just like how you eventually stop being annoyed by forgetting semicolons in languages that require them, I just don’t notice anymore.

                                                                                                                                                                  Not everything that should be done needs to be done right here right now.

                                                                                                                                                                  That’s fair. I think the Go authors just really didn’t want people to check in code that will get optimized “later” (meaning never), but opinions can vary on this.

                                                                                                                                                                  Cannot make a type in a foreign package implement an interface

                                                                                                                                                                  I don’t agree with this at all. Allowing something outside a package to modify it is the road to madness. It would never fit with Go.

                                                                                                                                                                  No sum types with exhaustive pattern matching

I sort of wish there were sum types (hopefully, now that type constraints can be sum types, they will allow interface sum types too), but I don’t really see why people care about exhaustive matching. Is this really a problem? Like the thing where C-like languages will fall through switch cases by default—that is a real problem that really causes bugs in production. Is exhaustiveness like that? I can’t say I’ve seen a bug in the wild caused by an inexhaustive switch.

                                                                                                                                                                  No overloading for common operations

                                                                                                                                                                  This is pretty subjective. If you could make library authors pinky swear not to abuse it, this would be good, but in a certain unnamed language, for example, they made the frigging bitshift operator into a pipe redirection (!), so…

                                                                                                                                                                  No standard set type

                                                                                                                                                                  There’s a plan to fix this soon.

                                                                                                                                                                  No anonymous interface compositions

                                                                                                                                                                  As the correction says, this is wrong.

                                                                                                                                                                  Naming conventions

                                                                                                                                                                  These are also super-subjective. The tar thing seems like an unfortunate inconsistency. The others I don’t really care about.

                                                                                                                                                                  Odd choice of terminology

                                                                                                                                                                  You can file a docs change for the Unicode one. The others I don’t care about.

                                                                                                                                                                  Struct layout is based on declaration order

                                                                                                                                                                  I can see both sides of this. If it were automatic, you’d need a way to override it and make it manual for C interop and extreme optimization, etc.

                                                                                                                                                                  Poor compiler diagnostics

                                                                                                                                                                  Sure. Room for improvement.

                                                                                                                                                                  Odd doc conventions

                                                                                                                                                                  Very subjective. I’m used to the conventions, but yeah, if the convention didn’t exist, probably things would be fine without it.

                                                                                                                                                                  Limited markup support in godoc

                                                                                                                                                                  I wouldn’t put this in my top list of complaints. Seems fine and improvements are planned. I like that it actually has a built in docs system, unlike many other languages!

                                                                                                                                                                  Documentation in some cases is not well-organized

                                                                                                                                                                  Sure, but there’s always room for better docs. I’d say overall they’re a good reference.

                                                                                                                                                                  Struct initialization syntax

                                                                                                                                                                  Is this even a complaint? What’s the alternative here? Most Go linters will complain if you try to initialize an external struct without using the named variant.

                                                                                                                                                                  Pre-main initialization and global mutable state are common

                                                                                                                                                                  Some of this I agree with and some of this just sounds like bad design by your coworker. I think in general Go’s standard library does too much pre-main initialization, but it is nice to be able to do some pre-initialization, and so you just have to use it responsibly.

                                                                                                                                                                  Conflating useful default values with useful zero values

                                                                                                                                                                  1. This is definitely an improvement from C having undefined values. 2. It might be nice if there were some way to specify defaults, but I can also see the case for keeping it simple and making everything default to zero. It just depends on how much you prefer simplicity here.

                                                                                                                                                                  Additionally, if you accidentally add struct tags to private fields (incorrectly assuming that is enough to serialize/deserialize them), you will silently get the wrong result (with zero values) instead of a runtime error

                                                                                                                                                                  The linter I used has stopped me from making this mistake before.

                                                                                                                                                                  nil is sometimes equivalent to an empty collection but sometimes causes a runtime crash.

                                                                                                                                                                  This isn’t my biggest complaint with nil! Nil is used for different types where it behaves differently. The article cites slice vs. map, but pointer vs. interface is also confusing. Really they should have just had multiple names for the different kinds of zeros and then a single “universal” zero as well.
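The slice-vs-map asymmetry mentioned above can be shown in a few lines (my own sketch): a nil slice quietly behaves like an empty one, while a nil map allows reads but panics on writes.

```go
package main

import "fmt"

func main() {
	// nil slice: append and len just work; it acts like an empty slice.
	var s []int
	s = append(s, 1)
	fmt.Println(len(s)) // 1

	// nil map: reads return the zero value without complaint...
	var m map[string]int
	fmt.Println(m["missing"]) // 0

	// ...but a write panics: "assignment to entry in nil map".
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	m["key"] = 1
}
```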

                                                                                                                                                                  Substitution

                                                                                                                                                                  There’s an open issue about this. Long story short, the current design is the way it is for a reason, but maybe they will add an easier way to make pointers in the future. For now you can use a one line generic function: func ref[T any](val T) *T { return &val }.
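The reason the helper is handy: Go won’t let you take the address of a literal (&5 and &"job" don’t compile), which bites when a struct models optional fields as pointers. A usage sketch (the Config type is my own invention):

```go
package main

import "fmt"

// ref is the one-line generic helper from the comment above.
func ref[T any](val T) *T { return &val }

// Hypothetical struct with optional fields; nil means "unset".
type Config struct {
	Timeout *int
	Name    *string
}

func main() {
	c := Config{Timeout: ref(5), Name: ref("job")}
	fmt.Println(*c.Timeout, *c.Name) // 5 job
}
```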

                                                                                                                                                                  Literals

                                                                                                                                                                  There have been open issues to change this. I don’t remember what, but there was some reason it wasn’t changed.

                                                                                                                                                                  make

I’m not sure what’s being proposed here. I personally tend to do s := make([]int, 0, len(other)); s = append(s, something) to work around the bug mentioned. The new generic slices package will make some of this simpler.
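The bug being worked around, spelled out (my own example): make with a *length* pre-fills the slice with zeros, so a subsequent append adds elements after them.

```go
package main

import "fmt"

func main() {
	other := []int{1, 2, 3}

	// Bug: length len(other) means three zeros already exist.
	buggy := make([]int, len(other))
	buggy = append(buggy, other...)
	fmt.Println(buggy) // [0 0 0 1 2 3]

	// Fix: length 0, capacity len(other).
	ok := make([]int, 0, len(other))
	ok = append(ok, other...)
	fmt.Println(ok) // [1 2 3]
}
```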

                                                                                                                                                                  Public identifiers in tests are not available to other tests

                                                                                                                                                                  I can’t say as I’ve ever noticed this.

                                                                                                                                                                  fmt.Sprintf will happily insert (MISSING) if a format specifier (such as %s) does not have a corresponding argument.

                                                                                                                                                                  I can see the case for a panic here. OTOH, people don’t like panics.

                                                                                                                                                                  Sends and receives to a nil channel block forever.

This one is just wrong. Nil channel blocking forever is a feature I use all the time. Otherwise you would need to duplicate your select blocks. If you think this is bad, you haven’t figured out channels yet.
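The idiom in question, sketched out (merge is my own example name): once a channel is drained, set it to nil so its select case can never fire again, instead of writing a second loop with fewer cases.

```go
package main

import "fmt"

// merge reads from both channels until both are closed. Setting a
// drained channel to nil disables its case, because receives from a
// nil channel block forever.
func merge(a, b <-chan int) []int {
	var out []int
	for a != nil || b != nil {
		select {
		case v, ok := <-a:
			if !ok {
				a = nil // disable this case
				continue
			}
			out = append(out, v)
		case v, ok := <-b:
			if !ok {
				b = nil // disable this case
				continue
			}
			out = append(out, v)
		}
	}
	return out
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 1)
	a <- 1
	a <- 2
	b <- 3
	close(a)
	close(b)
	fmt.Println(len(merge(a, b))) // 3
}
```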

                                                                                                                                                                  Language simplicity is put on a pedestal

                                                                                                                                                                  Yes, and that’s why I like Go. :-)

                                                                                                                                                                  1. 1

Very little of this sounds very simple unless you’re the one implementing the language.

                                                                                                                                                                  1. 24

                                                                                                                                                                    Am I the only one being completely tired of these rants/language flamewars? Just use whatever works for you, who cares

                                                                                                                                                                    1. 11

                                                                                                                                                                      You’re welcome to use whatever language you like, but others (e.g. me) do want to see debates on programming language design, and watch the field advance.

                                                                                                                                                                      1. 6

                                                                                                                                                                        Do debates in blogs and internets comments meaningfully advance language design compared to, say, researchers and engineers exploring and experimenting and holding conferences and publishing their findings? I think @thiht was talking about the former.

                                                                                                                                                                        1. 2

                                                                                                                                                                          I’m idling in at least four IRC channels on Libera Chat right now with researchers who regularly publish. Two of those channels are dedicated to programming language theory, design, and implementation. One of these channels is regularly filled with the sort of aggressive discussion that folks are tired of reading. I don’t know whether the flamewars help advance the state of the art, but they seem to be common among some research communities.

                                                                                                                                                                          1. 5

                                                                                                                                                                            Do you find that the researchers, who publish, are partaking in the aggressive discussions? I used to hang out in a couple Plan 9/9front-related channels, and something interesting I noticed is that among the small percentage of people there who made regular contributions (by which I mean code) to 9front, they participated in aggressive, flamey discussion less often than those that didn’t make contributions, and the one who seemed to contribute the most to 9front was also one of the most level-headed people there.

                                                                                                                                                                            1. 2

                                                                                                                                                                              It’s been a while since I’ve been in academia (I was focusing on the intersection of PLT and networking), and when I was there none of the researchers bothered with this sort of quotidian language politics. Most of them were focused around the languages/concepts/papers they were working with and many of them didn’t actually use their languages/ideas in real-world situations (nor should they, the job of a researcher is to research not to engineer.) There was plenty of drama in academia but not about who was using which programming language. It had more to do with grant applications and conference politics. I remember only encountering this sort of angryposting about programming languages in online non-academic discussions on PLT.

                                                                                                                                                                              Now this may have changed. I haven’t been in academia in about a decade now. The lines between “researcher” and “practitioner” may have even become more porous. But I found academics much more focused on the task at hand than the culture around programming languages among non-academics. To some extent academics can’t be too critical because the creator of an academic language may be a reviewer for an academic’s paper submission at a conference.

                                                                                                                                                                              1. 2

                                                                                                                                                                                I’d say that about half of the aggressive folks have published programming languages or PLT/PLD research. I know what you’re saying — the empty cans rattle the most.

                                                                                                                                                                        2. 8

                                                                                                                                                                          You are definitely not the only one. The hide button is our friend.

                                                                                                                                                                          1. 2

                                                                                                                                                                            So I was initially keen on Go when it first came out, but I have since switched to Rust for a number of reasons, correctness and elegance among them.

                                                                                                                                                                            But I don’t ever say “you shouldn’t use X” (where ‘X’ is Go, Java, etc.). I think it is better to promote neat projects in my favorite language, or to spend a little time writing more introductory material that makes it easier for interested people to get started in Rust.

                                                                                                                                                                            1. 2

                                                                                                                                                                              I would go further: filtering out rant, meta and law makes Lobsters much better.

                                                                                                                                                                              rant is basically the community saying an article is just flamebait while stopping short of outright removing it. You can choose to filter it out yourself.

                                                                                                                                                                            2. 5

                                                                                                                                                                              I think this debate is still meaningful because we cannot always decide what we use.

                                                                                                                                                                              If there are technical or institutional barriers, you can simply ignore $LANG: if you’re writing Android apps, you will use a JVM language (either Kotlin or Java). But if you are writing backend services, outside forces may compel you to adopt Go, despite its shortcomings detailed in this post (and others by the author).

                                                                                                                                                                              Every post of this kind helps those who find themselves facing a future where they must write Go to articulate their misgivings.

                                                                                                                                                                            1. 4

                                                                                                                                                                              The fallacy here is that because it is impossible to solve everything, we shouldn’t even attempt to solve some of it.

                                                                                                                                                                              This sounds conspicuously close to the Unix philosophy, and considering Go’s heritage, it doesn’t come as a surprise.

                                                                                                                                                                              1. 3

                                                                                                                                                                                In Swift, this would be:

                                                                                                                                                                                struct Test {
                                                                                                                                                                                    let `in`: String
                                                                                                                                                                                }
                                                                                                                                                                                
                                                                                                                                                                                let a = Test(in: "asdf")
                                                                                                                                                                                

                                                                                                                                                                                I think the example doesn’t really show the point of r#. You could just as well change the name to _in or __in instead, and it would probably be more readable than r#in.

                                                                                                                                                                                1. 3

                                                                                                                                                                                  As always, the RFC gives a lot of motivation. https://rust-lang.github.io/rfcs/2151-raw-identifiers.html

                                                                                                                                                                                  (E.g. the ability to name a function like a keyword, particularly useful for FFI use)

                                                                                                                                                                                  1. 2

                                                                                                                                                                                    But if you always call it using r#, you have essentially renamed the function. It would be acceptable if it were only needed at the declaration, or where disambiguation was otherwise required, but here it seems to surface at every point of use.

                                                                                                                                                                                    1. 4

                                                                                                                                                                                      Imagine a function:

                                                                                                                                                                                      #[no_mangle]
                                                                                                                                                                                      pub extern "C" fn r#match() {

                                                                                                                                                                                      }
                                                                                                                                                                                      

                                                                                                                                                                                      because a dynamic library needs to export this symbol under exactly that name. I agree in general that r# is not to be used in interfaces intended for humans.

                                                                                                                                                                                      1. 6

                                                                                                                                                                                        Hm, I don’t think that’s what is happening here. For FFI purposes, we have a dedicated attribute, link_name

                                                                                                                                                                                        https://doc.rust-lang.org/reference/items/external-blocks.html#the-link_name-attribute

                                                                                                                                                                                        Unlike r#, it’s not restricted to valid Rust identifiers (i.e., it allows weird symbols in the name).

                                                                                                                                                                                        My understanding is that 90% of the motivation for r# was the edition system and the desire to re-purpose existing identifiers as keywords. Hence, unlike Swift or Kotlin, Rust deliberately doesn’t support arbitrary strings as raw identifiers, only strings which are lexically identifiers ((_|XID_Start)XID_Continue*).

                                                                                                                                                                                  2. 3

                                                                                                                                                                                    The example uses debug serialisation (#[derive(Debug)]), which perhaps isn’t the best illustration of why it matters, but it at least proves the point.

                                                                                                                                                                                    The name matters in serialisation, and this could be generated code. I’ve had this exact problem in two unrelated protocol generators that happened to generate C++, and got funny build errors when I tried to define messages with fields named delete and static.

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      OK, but that option hasn’t gone anywhere: you can still name it _in if you want. There are plenty of niche cases where it is nice to keep the original identifier, mostly when interfacing with code you don’t control.

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        Yes, exactly. I found out about raw identifiers while checking a PR to the sqlparser crate, where the author used in for parsing one of the statements.