Threads for matt

  1. 2

    By my definition, Turbolinks is an SPA framework, even if, as a framework user, you never have to dirty your hands touching any JavaScript. If it has a client-side router, it’s an SPA.

    But… turbolinks does not have a client-side router?

    1. 2

      That last sentence implies that an app with a client-side router must be a SPA, but it does not imply the inverse, that all SPAs have a client-side router.

      While Turbolinks doesn’t have a client-side router, I would say it does client-side routing in the sense that it calls history.pushState().

      1. 2

        Well, the previous context in the blog makes it more clear that this isn’t just an “all Athenians are Plato” reading:

        To me, an SPA is simply a “Single-Page App,” i.e. a website with a client-side router, where every navigation stays on the same HTML page rather than loading a new one. That’s it.

        Neither of these things is particularly true of turbolinks (or pjax apps like GitHub): there’s no client-side router and, while in the presence of JS some navigations will not trigger a new page load, this is merely a progressive enhancement. Every navigation can and will happily load a different page if turbolinks is disabled or if it just decides it’s not worth trying to restructure the DOM. These are really MPAs that have adopted a partial illusion of SPA-ness as a progressive-enhancement optimization (and whether that’s even a speed-up anymore is certainly a question).

        While Turbolinks doesn’t have a client-side router, I would say it does client-side routing in the sense that it calls history.pushState().

        Eh, that’s a really big stretch, and not what I think anybody means by “client-side routing”. Lots of old-school multi-page jQuery sites also made extensive use of pushState() for odd things they were doing, but nobody considers them “client-side routed” or SPAs unless we’re stretching those terms beyond all normal and useful meaning just to win an argument.

        Anyways, regardless, if the client is ignorant of routing and all of the routing happens server side, and the server responds to user agents asking for different routes with many distinct and fully formed HTML pages corresponding to the requested route because there’s no such thing as multiple client-side “pages” that have no server-side “page” equivalent, calling what that server is emitting a “Single Page Application” seems like a really bad misapplication of terminology, to me.

        1. 3

          Maybe I’m playing fast and loose with the phrase “client-side router,” but yeah, I basically mean it’s calling history.pushState(). There are always gray zones, though; you’re right. (E.g. an MPA that only uses pushState() in one little place, to pop open a modal or something.)

          1.  

            My current (internal, work) app is definitely in what you’d consider the gray zone. Major sections of the site or anything starting a new workflow are full page loads, but within a common workflow, I use HTMX to replace page sections if available, and pushState() to the URL it would have gone to if JS was disabled. The same goes for things like sorting and paging search results: search result pages get URLs so they can be bookmarked or emailed, and this includes the paging and sorting.

            I do feel like I’m not really doing client-side routing, though. Every link does in a broad sense the same thing whether it’s HTMX-enhanced or not, using the same server-side routes, which generate a full view or a partial view depending on how they were fetched.

            (Note: I’m the original author of brutaldon, which also worked this way, using intercooler.js. But I do count Pinafore as the only SPA I’ve ever truly enjoyed.)

    1. 10

      :( this looks too much like vs code to me

      1. 6

        Presumably intentionally so; VSCode w/ language-server driven intellisense has taken over the editor market in a big way, and I gather they’re trying to make the sales pitch of “pay us money for our product instead” a little easier by softening the UI changes for anyone who they can convince to switch.

        Not ideal for people who love the current JetBrains IDE UI paradigm, I suppose.

        1. 5

          I feel like that goes both ways though. If there’s less difference, why am I paying?

          I am not an absolutist in terms of redesigns, but as someone who likes using JetBrains software for working in enterprise software, moving towards this “you can barely have two files open at once” design makes me a bit sad. (I’m 100% sure I can mess around with the configuration/layout to get what I want, of course.)

          1. 8

            I feel like that goes both ways though. If there’s less difference, why am I paying?

            I mean it’s hard to say (I’m neither a vscode nor much of a jetbrains guy), but in my limited experience with CLion I walked away with the impression that one buys it for the features (some of which are advantages over rust-analyzer), and merely tolerates the UI. Not that there’s anything completely wrong with the UI, but the current mostly bog-standard “bag of icons” IDE paradigm can’t really be selling many licenses on its strength alone, can it?

            “We’ve got these features but a UI familiar to you” is probably an easier sell than “we’ve got these features and a UI familiar to a VB6 dev” to the newer generation of JS/python/Ruby/Go devs they’re increasingly trying to target, who’ve mostly come up on the textmate/sublime/atom/vscode evolution of editors, is my basic read. The Visual Studio proper / Eclipse school of IDE design is probably slowly going away as it’s becoming increasingly unfamiliar to younger devs.

            1. 3

              For me, the biggest value of JetBrains is their unified framework for parsing and manipulating code, which allows for a reasonably consistent experience across languages in their IDEs. For single-language usage, I’m not that sure.

              1. 1

                I feel like that goes both ways though. If there’s less difference, why am I paying?

                I’ve jumped around between a number of editors and IDEs for the last… decade and a half… and I’ve never seen one that is both cross-platform and has the same level of code intelligence as IDEA/derivatives.

                I’m not talking about basic auto-complete of function/method names/parameters.

                If your codebase is typed (even e.g. PHP’s runtime typing works), IDEA can make refactoring code a lot less tedious.

                Rename a method? Add a non-optional argument? Move a method to a different class? Move a class to a different namespace? Rename an entire namespace? Consolidate hard coded strings into constants? Inline functions/methods, variable/const references, etc? Yep yep yep yep yep. And that’s just considering “regular” languages. Want to hook up a connection to an SQL DB and write raw queries against it? No worries, it’ll give you code intel against the DB schema on-the-fly.

                Personally, it doesn’t help that the current crop of popular non-IDE “editors” are basically all electron based - if I’m going to need a couple of gig of memory just for the IDE, I damn well want it to do more than be a glorified text editor in a web view.

                This is part of why I’m quite looking forward to Fleet - sometimes I don’t need all that refactoring support (e.g. writing markdown docs) and a lightweight mode would be nice, and the ability to have cross-arch workflows (e.g. run the Fleet backend on an Intel machine for some particular project, use an ARM machine as my workstation) will be quite interesting too.

          1. 1

            It’s weird that the first thing you criticize in a Critical Retrospective is something syntax-related that you yourself call superficial. It makes it hard to take the rest of the post seriously.

            1. 29

              If syntax impacts understandability, is it actually superficial?

              1. 19

                Because I don’t think that’s the fault of the syntax. A huge part of criticism is expectations/preferences and a lack of understanding of the trade-offs that made it the way it is. When Rust is different than whatever other language someone is used to, they compare familiar with unfamiliar (see Stroustrup’s Rule). But it’s like saying the Korean alphabet is unreadable, because you can’t read any of it.

                People who don’t like Rust’s syntax usually can’t propose anything better than a bikeshed-level tweak that has other downsides that someone else would equally strongly dislike.

                For example, <> for generics is an eyesore. But if Rust used [] for generics, it’d make array syntax either ambiguous (objectively a big problem) or seem pointlessly weird to anyone used to C-family languages. Whatever else you pick is either ambiguous, clashes with meaning in other languages, or isn’t available in all keyboard layouts.
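
                A minimal sketch of that bracket trade-off (the function and values here are invented for illustration, not taken from the thread):

                    // Today: `<>` is for generics and `[]` is for indexing, so the two never collide.
                    fn first<T: Copy>(xs: &[T]) -> T {
                        xs[0]
                    }

                    fn main() {
                        let xs = [1u8, 2, 3];
                        let a = first::<u8>(&xs); // explicit generic argument ("turbofish")
                        let b = xs[1];            // plain indexing
                        // If generics used `[]`, an expression like `first[x](&xs)` would be
                        // ambiguous: generic instantiation, or indexing followed by a call?
                        println!("{a} {b}");
                    }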

                The closure syntax || expr may seem like line noise, but in practice it’s important for closures to be easy to write and to keep the focus on their body. JS went from function () { return expr } to () => expr. Arrow functions aren’t objectively better, and JS users criticize them too. A real, serious failure of Rust regarding closures is that they have lifetime elision rules surprisingly different from those of standalone functions, and that is a problem deeper than the syntax.
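
                A tiny illustration (function and variable names invented):

                    // A Rust closure next to the equivalent named function; `|x| x + 1` plays
                    // roughly the role that `x => x + 1` plays in JS.
                    fn add_one_fn(x: i32) -> i32 {
                        x + 1
                    }

                    fn main() {
                        let add_one = |x: i32| x + 1;
                        let bumped: Vec<i32> = (1..4).map(|x| x + 1).collect();
                        println!("{} {} {:?}", add_one_fn(1), add_one(1), bumped);
                    }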

                Rust initially didn’t have the ? shortcut for the if err != nil { return nil, err } pattern, and it had a problem of a low signal-to-noise ratio. Rust then tried removing boilerplate with a try!() macro, but it worked poorly with chains of fallible function calls (you’d have a line starting with try!(try!(try!(… and then have to figure out where each of them has its closing paren). Syntax has lots of trade-offs, and even if the current one isn’t ideal in all aspects, it doesn’t mean alternatives would be better.
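
                A small sketch of the difference, using a made-up file-reading helper rather than anything from the thread:

                    use std::{fs, io};

                    // With `?`, a chain of fallible calls reads left to right:
                    fn first_line(path: &str) -> Result<String, io::Error> {
                        Ok(fs::read_to_string(path)?
                            .lines()
                            .next()
                            .unwrap_or("")
                            .to_string())
                    }

                    // The old macro form of the same thing nested outward instead:
                    //     let contents = try!(fs::read_to_string(path));
                    // and chained calls became try!(try!(...)) pile-ups, which is the
                    // signal-to-noise problem described above.

                    fn main() {
                        match first_line("Cargo.toml") {
                            Ok(line) => println!("{line}"),
                            Err(e) => eprintln!("{e}"),
                        }
                    }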

                And there are lots of things that Rust got right about the syntax. if doesn’t have a “goto fail” problem. Function definitions are greppable. Syntax of nested types is easy to follow, especially compared to C’s “spiral rule” types.
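
                As a made-up illustration of that last point:

                    // A vector of optional callbacks, read left to right; the C analogue of even
                    // simpler nested declarations (e.g. `int (*(*f)(int))[5]`) needs the
                    // inside-out "spiral" reading.
                    type Callbacks = Vec<Option<Box<dyn Fn(&str) -> usize>>>;

                    fn main() {
                        let count_len: Box<dyn Fn(&str) -> usize> = Box::new(|s: &str| s.len());
                        let callbacks: Callbacks = vec![Some(count_len), None];
                        let total: usize = callbacks
                            .iter()
                            .filter_map(|cb| cb.as_ref().map(|f| f("grep for `fn ` to find definitions")))
                            .sum();
                        println!("{total}");
                    }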

                1. 14

                  I think a lot of criticism about syntax is oblique. People complain about “syntax” because it’s just… the most convenient way to express “I find it hard to learn how to write correct programs, and I find it hard to interpret written programs, even after substantial practice”.

                  Lots of people complain that Common Lisp syntax is hard. Lisp syntax is so easy that you can write a parser in a few dozen lines. Common Lisp has a few extra things but, realistically, the syntax is absolutely trivial. But reading programs written in it is not, even after substantial practice, and I get that (as in, I like Common Lisp, and I have the practice, and I get that).

                  Same thing here. A lot of thought went into Rust’s syntax, probably more than in, say, C’s syntax, if only because there was a lot more prior art for Rust to consider. So there’s probably not much that can be done to improve Rust’s syntax while not basically inventing another language. That doesn’t take away from the fact that the language is huge, so it has a syntax that’s unambiguous and efficient but also huge, so it’s just a whole lot of it to learn and keep in your head at once. I get it, I’ve been writing Rust on and off but pretty much weekly for more than a year now and I still regularly need to go back to the book when reading existing code. Hell, I still need it when reading existing code that I wrote. You pay a cognitive price for that.

                  1. 3

                    “I find it hard to learn how to write correct programs . . .

                    Do you believe “correctness” is a boolean property of a program?

                    1. 1

                      I do, as in, I think you can always procure a “correctness oracle” that will tell you if a program’s output is the correct one and which, given a log of the program’s operations, can even tell you if the method through which it achieved the result is the correct one (so it can distinguish between correct code and buggy code that happens to produce correct output). That oracle can be the person writing the program or, in commercial settings, a product manager or even a collective – a focus group, for example. However:

                      • That oracle works by decree. Not everyone may agree with its edicts, especially with user-facing software. IMHO that’s inherent to producing things according to man-made specs. There’s always an “objective” test to the correctness of physics simulation programs, for example, but the correctness of a billing program is obviously tied to whatever the person in charge of billings thinks is correct.
                      • The oracle’s answer may not be immediately comprehensible, and its answers are not necessarily repeatable (like the Oracle at Delphi, it’s probably best to consider the fact that its answers do come from someone who’s high as a kite). IMHO that’s because not all the factors that determine a program’s correctness are inherent to the program’s source code, and presumably, some of them may even escape our quantitative grasp (e.g. “that warm fuzzy feeling” in games). Consequently, not all the knowledge that determines if a program is correct may reside with the programmer at the time of writing the code.

                      More to the point, I think it’s always possible to say if something is a bug or a feature, yes :-D.

                      1. 1

                        Wow! I guess I can just say that I wish I worked in your domain! 😉 I can’t think of more than a handful of programs I’ve written in my entire life which have a well-defined notion of correct, even in part. Almost all of my programs have been approximate models of under-specified concepts that can change at the whims of their stakeholders. Or, as you say it,

                        the correctness of a billing program is obviously tied to whatever the person in charge of billings thinks is correct.

                        Exactly!

                        not all the knowledge that determines if a program is correct may reside with the programmer at the time of writing the code.

                        In my experience it rarely exists anywhere! Not in one person, or many, or even conceptually.

                        1. 1

                          I can’t think of more than a handful of programs I’ve written in my entire life which have a well-defined notion of correct, even in part.

                          Oh, don’t get me wrong – that describes most of the code I wrote, too, even some of the code for embedded systems :-D. It may well be the case that, for many programs, the “correct” way to do it currently escapes everyone (heh, web browsers, for example…) But I am content with a more restricted definition of correctness that embraces all this arbitrariness.

                    2. 2

                      Well, there was a lot of prior art even when C was created, and they actively chose to disregard it. They also chose to disregard discoveries in C itself in the 70s and 80s, freezing the language far too early considering the impact it would have in the following decades.

                    3. 4

                      it’s like saying the Korean alphabet is unreadable, because you can’t read any of it.

                      But like there is a reasonably objective language difficulty ranking index (from the perspective of English-native speakers) and Korean is squarely in the most-difficult tranche, I guess in no small part due to its complex symbology, at least in comparison to Roman alphabets. Are you saying that this dimension of complexity is, net, irrelevant?

                      1. 11

                        Korean uses the Hangul alphabet, which is very easy to learn. It’s much simpler than our alphabet. You can learn Hangul in a day or two. You’re thinking of Japanese, which is a nightmare based on people writing Chinese characters in cursive and italics while drinking a bottle of sake.

                        1. 1

                          some simplified hanzi does look like kanji, but I would appreciate an example of a japanese character looking like a cursive or italic version of a chinese glyph before I go on to tell your analogy to everyone at parties.

                          1. 1

                            It’s not an analogy. It’s the historical truth of kana: https://en.wikipedia.org/wiki/Kana. Japanese kanji and hanzi are mostly the same modulo some font changes and simplification in the 20th c.

                            1. 1

                              I meant the drunken japanese people part.

                              1. 3

                                We can’t prove they weren’t drunk. :-)

                        2. 11

                          from the perspective of English-native speakers

                          I think that’s what they were getting at; there’s nothing inherently difficult about it but your background as an English speaker makes it look hard to read when objectively speaking it’s dramatically simpler than English due to its regularity and internal logic.

                          1. 2

                            I guess I would say that there is no “objectively speaking” in this domain? Like, there is no superhuman who can look at things invariant of a language background.

                            1. 3

                              If you’re talking about “easy to learn” then I agree.

                              If you’re talking about simplicity, then I disagree. The number of rules, consistency, and prevalence of exceptions can be measured without reference to your background.

                          2. 7

                            I’ve specifically mentioned the Hangul alphabet (strictly speaking, an alphabet whose letters are grouped into syllable blocks), not the language. The Korean language (vocabulary, grammar, spoken communication) may be hard to learn, but the alphabet itself is actually very simple and logical. It’s modern, and it has been specifically designed to be easy to learn and a good fit for the Korean language, rather than being a millennia-old historical borrowed mash-up like many other writing systems.

                            I think it’s a very fitting analogy to having an excellent simple syntax for a complex programming language. You may not understand the syntax/alphabet at all, but it doesn’t mean it’s bad. And the syntax/alphabet may be great, but the language it expresses may still be difficult to learn for other reasons.

                            With Rust I think people complaining about the syntax are shooting the messenger. For example, T: for<'a> Fn(&'a str) makes lifetime subtyping contravariant for the loan in the argument of a function trait in a generic trait bound. Is it really hard because of the syntax? No. Even when it’s expressed in plain English (with no computer language syntax at all) it’s unintelligible techno-babble you wouldn’t know how to use unless you understand several language features it touches. That for<'a> syntax is obscure even by Rust’s standards, but syntactically it’s not hard. What’s hard is knowing when it needs to be used.
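
                            For what it’s worth, a hypothetical sketch of where such a bound shows up; the helper and names are mine, not from the original comment:

                                // `for<'a>` says the callback must accept a borrow of *any* lifetime,
                                // not one particular lifetime picked by the caller of `sum_over_words`.
                                fn sum_over_words<F>(text: &str, f: F) -> usize
                                where
                                    F: for<'a> Fn(&'a str) -> usize,
                                {
                                    text.split_whitespace().map(|word| f(word)).sum()
                                }

                                fn main() {
                                    let n = sum_over_words("higher ranked trait bounds", |w| w.len());
                                    println!("{n}"); // 23
                                }

                            The hard part, as noted above, is knowing when the explicit for<'a> form is needed at all, since the Fn(&str) sugar already writes it for you most of the time.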

                          3. 4

                            People who don’t like Rust’s syntax usually can’t propose anything better than a bikeshed-level tweak that has other downsides that someone else would equally strongly dislike.

                            The problem with Rust’s syntax isn’t that they made this or that wrong choice for expressing certain features; it’s that there’s simply far too much of it. “Too many notes,” as Joseph II supposedly said.

                            1. 3

                              I agree with this, which is why I object to blaming the syntax for it. For a language that needs to express so many features, Rust’s syntax is doing well.

                              Rust chose to be a language that aims to have strong compile-time safety, low-level control, and nearly zero run-time overhead, while still having higher-level abstractions. Rust could drop a ton of features if it offered less control and/or moved checks to run-time or relaxed safety guarantees, but there are already plenty of languages that do that. The novelty of Rust is in not compromising on any of these, and that came at the cost of having lots of features to control all of these aspects.

                              1. 4

                                You can have many features without a lot of syntax. See Lisp.

                                1. 2

                                  If you pick the feature set for simplicity. Rust had other goals.

                                  1. 4

                                    I literally just said that simple syntax doesn’t necessitate simple features.

                                  2. 2

                                    I think Lisp gets away here only on a technicality. It can still have plenty of obscure constructs to remember, like CL’s (loop).

                                    The example from the article isn’t really any simpler or more readable if you lispify it:

                                    (try (map-result (static-call (def-lifetime a-heavy (:Trying :to-read a-heavy)) 
                                        syntax (lambda (like) (can_be this maddening))) ()))
                                    

                                    It could be made nicer if it was formatted in multiple lines, but so would the Rust example.

                                  3. 1

                                    I don’t know. I strongly suspect that in the coming years, we will see new languages that offer the same safety guarantees as Rust, also with no runtime, but with syntax that is simpler than Rust. Lately I’ve seen both Vale and Koko exploring this space.

                              2. 12

                                The syntax complexity of Rust is actually a big factor in why I abandoned my effort to learn it. I was only learning on my own time, and came to the realization I had a long way to go before I’d be able to pick apart a line like the author’s example.

                                So for me, it wasn’t just superficial.

                                1. 5

                                  The syntax complexity of Rust is actually a big factor in why I abandoned my effort to learn it.

                                  Same.

                                2. 3

                                  If syntax impacts understandability, is it actually superficial?

                                  I’d say so.

                                  The problem is that “this syntax is ugly” is a completely subjective judgement largely influenced by the peculiarities of one’s own background. Coming from Perl and Ruby, I happen to find Rust pleasant to look at and easy to read, whereas I find both Python and Go (which many other people prefer) unreasonably frustrating to read and just generally odd-looking. It’s not that Python and Go are doing anything objectively less understandable, per se, but they certainly have an unfamiliar look, and people react to unfamiliarity as if it were objectively incorrect rather than just, well, making unfamiliar choices with unfamiliar tradeoffs.

                                  It’s pure personal preference, and framing one’s personal preferences as something that has objective reality outside oneself and which some other party is doing “wrong” is, to me, the definition of a superficial complaint.

                                  1. 8

                                    It’s pure personal preference

                                    Is it pure personal preference? I dunno. Personal preference is a part of it, but I don’t think it’s controversial to say that Python is in general easier to understand than the q language, for example. Human cognition and coherence actually abide by pretty well-defined rules, at the macro scale. Sigils are harder to grok than words. And so on.

                                    1. 12

                                      Personal preference is a part of it, but I don’t think it’s controversial to say that Python is in general easier to understand than the q language, for example.

                                      Maybe, maybe not. What I do think is that if we’re going to try to make objective claims, we need some real objective measures and measurements. These conversations tend to be nothing but pseudoscience-y assertions and anecdata masquerading as irrefutable facts.

                                      Human cognition and coherence actually abide by pretty well-defined rules, at the macro scale. Sigils are harder to grok than words.

                                      (In no way am I trying to pick on you, but) Case in point: “Sigils are harder to grok than words” feels like a strong objective claim but… is this actually in any way true? 馬 is a much more complicated symbol than $ or @ or ->, but we have something like 1.5 billion people in the world happily reading and writing in languages that require a knowledge of thousands of such symbols to achieve literacy, and they turn out to somehow show lower rates of dyslexia than in alphabet based languages while doing so!

                                      Sigil-y writing systems are indeed actually quite common throughout history, so again we have this thing where what feels like a perfectly simple fact actually looks a heck of a lot like a simple case of familiarity when you scratch it just a little bit. The dominance of a few alphabetic writing systems outside of Asia could simply be a historical accident for all we know – there are no strong results from cognitive science supporting any claim that it’s objectively more fit to “human cognition”. We really don’t have any idea whether words are simpler or more efficient than symbols, or whether Python is a global maximum of readability, a local minimum, or anything in between. There are almost no good studies proving out any of this, just a lot of handwaving and poorly supported claims based on what people happen to like or be most familiar with.

                                      1. 2

                                        馬 is a word. It happens to be written as a single character, but that doesn’t make it punctuation.

                                        1. 2

                                          I’m aware. I speak Japanese.

                                          “Sigil” does not mean “punctuation”. It actually means something like “symbol with occult powers”, but in a programming context I think we can understand it as “symbol that conveys an important functional meaning”, like -> being the symbol meaning “returns a value of the following type”. The point being that OP was being pretty silly when they wrote that it’s a “rule of the human mind” that negation written out as “not” is easier to understand than !, when the existence of a billion-plus people using languages with things like “不” at least weakly implies that a single symbol for not is no more mentally taxing to understand.

                                          (that in many programming languages most sigils are punctuation is mostly just an artifact of what’s easy to type on a western keyboard, but it’s by no means the rule. See: APL, which can be chockfull of non-punctuation sigils)

                                          1. 1

                                            The point is that the symbol has a natural pronunciation, which makes it easy to read for a Japanese speaker. In contrast, when I see !foo or &foo or $foo, my mind just makes an unintelligible noise followed by “foo”, so I have to concentrate on what the symbol means.

                                            1. 1

                                              But these symbols all do have actual pronunciations that are generally specified in the language or are established conventionally, eg) !foo is read “not foo”, &foo is “addressof foo” (at least in C) or “ref foo” in Rust, etc. Good learning resources almost always provide a reading when they introduce the symbol (Blandy et al’s Programming Rust is very good about this, for instance).

                                              Also fwiw not everyone “vocalizes” what they’re reading in their head, that’s actually not a universal thing.

                                        2. 1

                                          When I speak about “understandability” or whatever I’m not making a claim against an abstract Ur-human raised in a vacuum, I’m speaking about humans as they exist today, including cultural and historical influences, and measured on a demographic (macro) scale, rather than an individual (micro) scale. That is, I’m making a descriptive argument, not a normative one. In this context, “familiarity” is I guess a totally reasonable thing to account for! People understand better the things they are familiar with. Right?

                                          1. 3

                                            That is, I’m making a descriptive argument, not a normative one.

                                            It’s not a very good descriptive argument, though, insofar as you’re really failing to describe a lot of things in order to make your argument fit the conclusion that “sigils are harder to grok than words”.

                                            Even if we confine ourselves to Western English speakers… what about mathematics? Why does almost everyone prefer y = x+1 to Cobol’s ADD 1 TO X GIVING Y? It’s more familiar, right? There doesn’t seem to be any long-term push to make Mathematics more wordy over time (most of the established symbols have hung around for hundreds of years and had ample opportunity to get out-competed by more grokkable approaches, if word-based approaches were found by people to be any more grokkable), so if we’re describing the long-term pressures on artificial languages I don’t think “sigils are harder to grok than words” is an accurate descriptive statement.

                                            In this context, “familiarity” is I guess a totally reasonable thing to account for! People understand better the things they are familiar with. Right?

                                            Well, sure. But “in some contexts words are more familiar than sigils to western audiences” is a much different claim than “sigils are harder to grok than words” in any sense, and it leaves a lot more room to talk about sigils in programming languages in a rational way. Things like “dereferencing pointers” aren’t really familiar to anyone in words or sigils, so it’s not obvious to me that x = valueat y is any more or less “correct”/“intuitive”/“grokable” than x = *y.

                                            If anything, given the relative unpopularity of the Pascal/Ada & Cobol language families, a certain amount of “unfamiliar concepts compressed into sigils” seems to be appreciated by programmers at large. But other people disagree, which seems to point at this mostly being a superficial argument over tastes and perhaps how much maths background one has, rather than some kind of concrete and objective variation in any measurable metric of “understandability”.

                                            1. 2

                                              what about mathematics?

                                              Well, I think this substantiates my point? In the sense that way more people can read prose than can understand nontrivial math. Right?

                                              “in some contexts words are more familiar than sigils to western audiences” is a much different claim than “sigils are harder to grok than words”

                                              Not some but most or even almost all, depending on just how many sigils we’re talking about.

                                              Authors generally don’t invent new languages in order to express their literary works; they take the language(s) they already know, with all their capabilities and constraints, and work within those rules. They do this because their goal is generally not to produce the most precise representation of their vision, but instead to produce something which can be effectively consumed by other humans. The same is true of programming.

                                              1. 2

                                                Well, I think this substantiates my point? In the sense that way more people can read prose than can understand nontrivial math. Right?

                                                More people can read prose (in general) than the prose portions of an advanced Mathematics text (in specific). It’s not the orthography of mathematics that’s the limiting factor here.

                                                Authors generally don’t invent new languages in order to express their literary works; they take the language(s) they already know, with all their capabilities and constraints, and work within those rules. They do this because their goal is generally not to produce the most precise representation of their vision, but instead to produce something which can be effectively consumed by other humans. The same is true of programming.

                                                Which speaks to my point. Programming uses “sigils” because in many cases these sigils are already familiar to the audience, or are at least no less familiar to the audience for the concepts involved than anything else would be, and audiences seem to show some marked preference for sigils like { … } vs begin … end, y = x + 1 seems pretty definitely preferred for effective consumption by audiences over ADD 1 TO X GIVING Y, etc.

                                                At any rate, we seem to have wandered totally away from “sigils are objectively less readable” and fully into “it’s all about familiarity”, which was my original point.

                                                1. 2

                                                  I’m not claiming that sigils are objectively less readable than prose. I’m objecting to the notion that syntax is a superficial aspect of comprehension.

                                                  1. 1

                                                    You’ve made claims that terse syntax impedes comprehension (“Sigils are harder to grok than words”), where the reality is in the “it depends” territory.

                                                    For novices, mathematical notation is cryptic, so they understand prose better. But experts often prefer mathematical notation over prose, because its precision and terseness makes it easier for them to process and manipulate it. This is despite the fact that the notation is objectively terrible in some cases due to its ad-hoc evolution — even where the direction is right, we tend to get details wrong.

                                                    Forms of “compression” for common concepts keep appearing everywhere in human communication (e.g. in spoken languages we have contractions & abbreviations, and keep inventing new words for things instead of describing them using whole phrases), so I don’t think it’s an easy case of “terse bad verbose good”, but a trade-off between unfamiliarity and efficiency of communication.

                                                    1. 1

                                                      I agree with all of your claims here.

                                                    2. 0

                                                      I’m objecting to the notion that syntax is a superficial aspect of comprehension.

                                                      It’s not fully, but “the * operator should be spelled valueat/ {} should be spelled begin end” stuff is a superficial complaint unless and until we have objective, measurable reasons to favor one syntactical presentation over the other. Otherwise it’s just bikeshedding preferences.

                                                      But I’m sorry, let’s not continue this. I’m not buying the goalpost move here. You wrote that human cognition obeys “well-defined rules. Sigils are harder to grok than words”. That’s pretty obviously a claim that “sigils are objectively less readable than prose” due to these “well defined rules of cognition”. That’s the kind of handwavey, pseudoscience-as-fact discourse I was objecting to and pointing out these discussions are always full of.

                                                      I’ve pointed out that this is, in several ways, basically just a load of hot air inconsistent with any number of things true of humans in general (symbol based writing systems) and western readers in specific.

                                                      Now your “well-defined rules of human cognition which include that sigils are less readable than words” weren’t trying to be an objective claim about readability?

                                                      Sure. I’m done. Have a good one.

                                    2. 24

                                      I would warmly suggest making an effort to hit Page Down twice to get past the syntax bit and read the rest of the post though, because it’s a pretty good and pragmatic take, based on the author’s experience writing and maintaining a kernel. Xous is a pretty cool microkernel which runs on actual hardware, and it’s a pretty good test of Rust’s promises in terms of safety and security.

                                      1. 10

                                        It’s interesting but also has the weird dichotomy that the only two choices for systems programming are C or Rust. C++ also has a lot of the strengths that the author likes about Rust (easy to write rich generic data structures, for example), and has a bunch of other things that are useful in a kernel, such as support in the standard library for pluggable memory allocators, mechanisms for handling allocation failure, a stable standard library API, and so on.

                                        1. 5

                                          I had exactly the same thought. C++ burnt through a lot of good will in the C++98 era where it was admittedly a hot mess (and all the compilers were buggy dumpster fires). Now on one hand we have people who publicly and loudly swore off touching C++ ever again based on this experience (and even more people parroting the “C++ is a mess” statement without any experience) and on the other the excitement of Rust with all the hype making people invest a large amount of effort into learning it. But the result, as this article shows, is often not all roses. I believe oftentimes the result would have been better if people had invested the same amount of time into learning modern C++. Oh well.

                                          1. 5

                                            Writing C++ is like writing Rust but with your whole program wrapped in unsafe{}. You have to manage your memory and hope you did it right.

                                            1. 4

                                              As I hope this article clearly demonstrates, there is a lot more to a language choice than memory safety. Also, FWIW, I write fairly large programs and I don’t find memory management particularly challenging in modern C++. At the same time, I highly doubt that these programs can be rewritten in Rust with the result having comparable performance, compilation times, and portability properties.

                                              1. 1

                                                What would hinder Rust from having comparable performance, compilation times, and portability properties, in your opinion?

                                                1. 1

                                                  In summary:

                                                  Performance: having to resort to dynamic memory allocations to satisfy the borrow checker.

                                                  Compilation: in Rust almost everything is a template (parameterized over lifetimes).

                                                  Portability: a C/C++ toolchain is available out of the box. I also always have an alternative compiler for each platform.

                                            2. 4

                                              string_view of temporaries makes dangling pointers instead of compilation errors. optional allows unchecked dereferencing without warnings, adding more UB to modern C++. I haven’t met a C++ user who agrees these are fatal design errors. Sorry, but this is not up to safe Rust’s standards. From the Rust perspective, modern C++ continues to add footguns that Rust was designed to prevent.

                                              1. 1

                                                I haven’t met a C++ user who agrees these are fatal design errors.

                                                I haven’t used string_view much so can’t categorically say it’s not a design error (it very well may be). But for optional I can certainly say it is a trade-off: you have the choice of checked access (optional::value()) or unchecked and you decide what to use. I personally always use unchecked and never had any problems. Probably because I pay attention to what I am writing.

                                                1. 5

                                                  This is the difference in approaches of the two languages. In C++ if the code is vulnerable, the blame is on the programmer. In Rust if the code is vulnerable, Rust considers it a failure of the language, and takes responsibility to stop even “bad” programmers from writing vulnerable code. I can’t stress enough how awesome it is that I can be a careless fool, and still write perfectly robust highly multi-threaded code that never crashes.

                                                  In terms of capabilities, Rust’s Option is identical, but the default behavior is safe, and there’s a lot of syntax sugar (match, if let, tons of helper methods) to make the safe usage the preferred option even for “lazy” programmers. The UB-causing version is written unsafe { o.unwrap_unchecked() }, which is deliberately verbose and clunky, so that the dangerous version stands out in code reviews, unlike the subtle * or -> that are commonly used everywhere.
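
                                                  A small sketch of that contrast (the values and variable names are made up):

                                                      fn main() {
                                                          let maybe: Option<i32> = Some(7);

                                                          // Default, safe access: you are forced to say what happens on None.
                                                          if let Some(n) = maybe {
                                                              println!("checked: {n}");
                                                          }
                                                          let doubled = maybe.map(|n| n * 2).unwrap_or(0);
                                                          println!("doubled: {doubled}");

                                                          // The UB-if-None escape hatch is deliberately loud:
                                                          let n = unsafe { maybe.unwrap_unchecked() };
                                                          println!("unchecked: {n}");
                                                      }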

                                                  Rust’s equivalent of string_view is &str, and it’s practically impossible to use the language without embracing it, and it’s borrow-checked, so it won’t compile if you misuse it.

                                            3. 2

                                              Eh, maybe the author just didn’t write that much low-level/kernel code in C++. I try not to read too much into these things. If I were to start learning F# tomorrow, then tried to write a similar piece two years from now, I’d probably end up with something that would have the weird dichotomy that the only two choices for functional programming are Scheme and F#.

                                              1. 1

                                                Scheme is honestly so hard to do functional programming in. It’s shockingly imperative by nature given the reputation.

                                            4. 3

                                              I did read the entire post, but I wanted to voice that focusing on the wrong thing first makes people not take you seriously, especially when the author says it doesn’t matter but still decided to put it first.

                                              1. 3

                                                I may not be interpreting this correctly but I didn’t take the author qualifying it as a superficial complaint to mean that it doesn’t matter. Based on the issues he mentions regarding the readability of Rust macros, for example, I think it’s superficial as in “superficial velocity”, i.e. occurring or characterising something that occurs at the surface.

                                                (But note that I may be reading too much into it because reviewing and auditing Rust code that uses macros is really not fun so maybe I’m projecting here…)

                                            5. 20

                                              The final sentence of that section said, in summary, “Rust just has a steep learning curve in terms of syntax”. A critical retrospective that does not mention the horrendous syntax or its learning curve would lack credibility.

                                              1. 4

                                                I find Rust’s syntax perfectly clear and sensible. I am not the only one.

                                              2. 9

                                                I liked that it starts with that TBH. Rust’s dense syntax is probably the first impression of the language for many people—it was for me at least. And putting the author’s first impression first in the article makes it read more like a person telling a story, rather than a list of technical observations sorted by importance.

                                                I like to read stories by humans; I find it easier to connect with the author and therefore to retain some of what they say. YMMV of course.

                                                1. 2

                                                  And if they think rust is hard to read, wait until they discover lisp!

                                                  (I know this author probably is already familiar with lisp and many other things, but the comparison stands.)

                                                  1. 6

                                                    I find it the other way around. If you temporarily put aside the issues of special forms and macros, the syntax of Lisp is extremely minimal and regular (it’s almost all lists and atoms). So Lisp stands at kind of an opposite extreme from Rust, with more familiar languages somewhere in between.

                                                    1. 5

                                                      Nim still has a dumpLisp routine to show you the shape of an AST you may want to manipulate.

                                                      Syntax can be very personal, but I strongly prefer Nim’s to Rust’s and see no compelling language feature of Rust to tempt me away, though Nim is not without its own issues.

                                                      1. 2

                                                        Nim isn’t really comparable, is it? More like Go with a GC, etc.?

                                                        1. 2

                                                          “How comparable” mostly depends upon what you mean by “a GC etc”. Nim’s (AutomaticRC/OptimizedRC) memory management seems fairly similar to Rust, but I am no Rust expert and most PLs have quite a few choices either directly or ecosystem-wide. (Even C has Boehm.) There is no “GC thread” like Java/Go. The ORC part is for cycle collection. You can statically specify {.acyclic.}, sink, lent, etc. in Nim to help run-time perf. Some links that go into more detail are: https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html https://nim-lang.org/blog/2020/12/08/introducing-orc.html

                                                          1. 0

                                                            “Go with a GC” is Go.

                                                            1. 1

                                                              Yes, that’s why I said it

                                                        2. 2

                                                          The complaint in the article is about noisy, hard-to-read syntax though, and Lisp is definitely that: even if it is simple and regular, that simplicity leads everything to look the same.

                                                          1. 3

                                                            I always wondered why indentation-based reader macros (SRFI-49 is a simple one) never became popular. I can see “whys” for big macro writer types since they often want to pick apart parse trees and this adds friction there. Most programmers are not that (in any language). My best guess is a kind of community dynamic where tastes of village elders make beginners adapt more. Or wishful promises/hopes for beginners to become elders? Or a bit of both/etc.?

                                                            Of course, part of the regularity is “prefix notation” which can remain a complaint.

                                                      2. 1

                                                        It makes it hard to take the rest of the post seriously

                                                        As x64k said, the post is pretty well made, I think, and offers some honest criticism. If anything you can criticize the bad blog layout, which has big white bars on mobile and desktop, making it hard to read from any device.

                                                      1. 2

                                                        A key aspect of Rust is allowing company lock-in. Once you have large teams, you find that the lack of a large standard library allows you to create a proprietary custom one. You can create a library, or even an ecosystem, that prevents skills from transferring to a new position and thus lowers your staffing costs.

                                                        It is easily defended as ‘cutting edge’.

                                                        This leads to the odd question: is it a good idea for a developer to work in Rust?

                                                        1. 27

                                                          A key aspect of Rust is allowing company lock-in. Once you have large teams, you find that the lack of a large standard library allows you to create a proprietary custom one. You can create a library, or even an ecosystem, that prevents skills from transferring to a new position and thus lowers your staffing costs.

                                                          This might be more convincing if, say, crates.io didn’t exist and if Cargo wasn’t built entirely around making a shared ecosystem easy, but as it stands this idea seems, to put it mildly, preposterous – and suggests you’re reaching for uncharitable assertions.

                                                          The job market for developers with experience in a language with a famously batteries-included standard lib, Python, currently pays a lot less than it does for Rust (there are more Python jobs, of course, but your average salary is about $30,000 less per: https://www.zdnet.com/article/heres-how-much-money-you-can-make-as-a-developer-in-2021/). There’s zero reason to imagine that a larger standard library size is correlated with higher pay, or vice-versa.

                                                          1. 16

                                                            I’ve used Rust at several jobs and never seen a company create any kind of alternate standard library for it, proprietary or otherwise. There are a number of well-known crates used for specific things - e.g. reqwest, serde, nom - that are open source, easily available via crates.io, and applicable to projects across multiple firms. Which is similar to the situation in other modern languages with easily-accessible package ecosystems like Python and JavaScript, and is the opposite of discouraging skill transfer.

                                                            1. 14

                                                              The exact same thing can be said of C and its tiny stdlib. Did it lead to this situation you describe?

                                                              1. 12

                                                                You can create a library, or even an ecosystem, that prevents skills from transferring to a new position and thus lowers your staffing costs.

                                                                Someone would have to be unusually incompetent to be unable to transfer their skills from one standard lib to another. Fundamentals, people! Once you know them, it’s mostly a matter of knowing how stuff is named. You won’t hit the ground running, but then again nobody does. Even when you already know the standard stuff there will be tons of proprietary and bespoke code to wade through.

                                                              1. 4

                                                                Is any server really going to send you data fast enough to justify a huge pipe like that? I have a measly 200mbps connection (1% of that!) and I rarely see my computer receiving anything close to its capacity. Maybe just when I download a new version of Xcode from Apple.

                                                                (Obligatory grandpa boast about how my first modem was 110bps — on a Teletype at my middle school — and I’ve experienced pretty much every generation of modem since, from 300 to 1200 to 2400 to… Of all those, the real game changer was going to an always-on DSL connection in the late 90s.)

                                                                1. 4

                                                                  It’s easy to fill a Gigabit line these days in my experience. With a faster uplink, now all devices at my home can fill at least a Gigabit line, at the same time :)

                                                                  1. 1

                                                                    Filling 1Gbps is trivial, but pumping 25Gbps would be rather challenging if you fully utilize the 25Gbps duplex with NAT. 25Gbps in each direction means 100Gbps of throughput for the router (25Gbps each way on both the WAN and LAN sides). That’s a huge load on the router, for both software and hardware. For benchmarks, you could rent a recent hourly-billed Hetzner VPS; they have 10Gbps connections at a fairly cheap price. I wonder what this ISP’s peering status is; the 25Gbps doesn’t really mean anything unless you have huge pipes connected to other ASNs. Even with dual 100Gbps, the network can only serve 8 customers at full speed, which is :(

                                                                    1. 3

                                                                      init7 peers with hetzner directly; other customers report getting 5+ Gbit/s for their backups to hetzner servers :)

                                                                      The hetzner server I rent only has a 1 Gbit/s port. Maybe I’ll rent an hourly-billed one just for the fun of doing speed tests at some point.

                                                                      1. 1

                                                                        In the meantime, I found this product interesting when searching for the CCR2004, at an MSRP of $199.

                                                                        https://mikrotik.com/product/ccr2004_1g_2xs_pcie

                                                                        The 2C/3C low-end “cloud” servers have full 10G connections, and they’re available across multiple regions.

                                                                        1. 2

                                                                          What discourages me massively about this device is clunky integration like this:

                                                                          This form-factor does come with certain limitations that you should keep in mind. The CCR NIC card needs some time to boot up compared to ASIC-based setups. If the host system is up before the CCR card, it will not appear among the available devices. You should add a PCIe device initialization delay after power-up in the BIOS. Or you will need to re-initialize the PCIe devices from the HOST system.

                                                                          Also active cooling, which means the noise level is likely above the threshold for my living room :)

                                                                  2. 2

DigitalOcean directly peers with my ISP and I can frequently saturate my 1 Gbit FTTH. I use NNCP to batch YouTube downloads I might be interested in and grab them on demand from DO at 1 Gbit, which I have to say is awesome, because I can download long 4K/8K videos in seconds.

                                                                    1. 1

It’s pretty easy to saturate that symmetrically once you have multiple people and devices in the mix, e.g. streaming a 4K HDR10 movie in the living room while a couple of laptops are sending dozens of gigs to Backblaze and the kid is downloading a new game from Steam.

                                                                      1. 3

Not really; 4K streaming isn’t that scary. The highest bitrate I’ve ever seen is the Spider-Man release from Sony at 80 Mbps, a Backblaze backup over Wi-Fi might use 1 Gbps, and Steam downloads are also capped around 1 Gbps. So that only adds up to about 3 Gbps, far from saturating the link.

                                                                        1. 2

                                                                          Yeah sorry, I meant it’s not hard to saturate GP’s 200Mbps connection. The appeal of 25Gbps is that you’re not going to saturate it no matter what everyone in the house is doing, for at least the next few years.

                                                                      1. 3

The problem with that comment is that you might actually want a guarantee that name is not None, and therefore not have to unwrap name on each use.

                                                                        1. 3

                                                                          You certainly might!

The problem, though, is that this undermines the blog’s central thesis: “look how much harder doing this thing is in unsafe Rust than it is to do the same thing in C”. We’re making this hard in unsafe Rust only by using a different type in Rust than was used in C (a guaranteed non-null & rather than a simple *), and by insisting on the related guarantee that the value is never absent, which rules out Option, when the C version does not guarantee this and makes no effort to enforce it.

                                                                          Which is to say, of course it’s harder to guarantee more in unsafe Rust than it is to guarantee less in C.
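
To make the type mismatch concrete, here is a minimal sketch (the struct and field names are invented for illustration, not taken from the blog post): the C version’s “maybe-null pointer” corresponds to a raw pointer or an Option<&T> in Rust, while a plain &T promises strictly more and is therefore harder to construct piecemeal.

```rust
struct NameC {
    // C-style field: no guarantee, may be null, checked (or not) at each use.
    name: *const u8,
}

struct NameOpt<'a> {
    // Idiomatic Rust for "maybe absent": the same guarantee as the C version.
    name: Option<&'a str>,
}

struct NameRef<'a> {
    // A plain reference promises "never absent": the stronger guarantee
    // that makes the unsafe-Rust version look harder than the C one.
    name: &'a str,
}

fn main() {
    let s = String::from("matt");
    let c_like = NameC { name: s.as_ptr() };   // could just as well be null
    let optional = NameOpt { name: Some(&s) }; // absence is explicit
    let required = NameRef { name: &s };       // absence is unrepresentable
    let _ = (c_like.name, optional.name, required.name);
}
```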

                                                                          1. 2

                                                                            Also, what’s really making this hard is that the “this” — partially initializing structs — is a very unidiomatic thing overall. You shouldn’t need to do this, ever.

                                                                            The typical use of unsafe is FFI and fancy pointer-based data structures, none of which require thinking about something as scary as the layout of repr(Rust) structs.
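
For instance, a minimal FFI sketch (assuming only libc’s strlen, which ordinary Rust binaries already link against): the unsafe block only asserts pointer validity, and no repr(Rust) layout reasoning is involved.

```rust
use std::os::raw::c_char;

extern "C" {
    // Declared here, defined by the C standard library.
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    // A NUL-terminated byte string, suitable for passing to C.
    let msg = b"hello\0";
    // unsafe asserts "this pointer is valid and NUL-terminated";
    // nothing here depends on how Rust lays out its own structs.
    let len = unsafe { strlen(msg.as_ptr() as *const c_char) };
    assert_eq!(len, 5);
}
```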

                                                                      1. 57

                                                                        The developer of these libraries intentionally introduced an infinite loop that bricked thousands of projects that depend on ’colors and ‘faker’.

                                                                        I wonder if the person who wrote this actually knows what “bricked” means.

                                                                        But beyond the problem of not understanding the difference between “bricked” and “broke”, this action did not break any builds that were set up responsibly; only builds which tell the system “just give me whatever version you feel like regardless of whether it works” which like … yeah, of course things are going to break if you do that! No one should be surprised.

                                                                        Edit: for those who are not native English speakers, “bricked” refers to a change (usually in firmware on an embedded device) which not only causes the device to be non-functional, but also breaks whatever update mechanisms you would use to get it back into a good state. It means the device is completely destroyed and must be replaced since it cannot be used as anything but a brick.

                                                                        GitHub has reportedly suspended the developer’s account

                                                                        Hopefully this serves as a wakeup call for people about what a tremendously bad idea it is to have all your code hosted by a single company. Better late than never.

                                                                        1. 25

There have been plenty of wakeup calls for people using Github, and I doubt one additional one will change the minds of very many people (which doesn’t make it any less of a good idea for people to make their code hosting infrastructure independent from Github). The developer was absolutely trolling (in the best sense of the word) and a lot of people have made it clear that they’re very eager for Github to deplatform trolls.

                                                                          I don’t blame him certainly; he’s entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                                                          The right solution here is for any users of these packages to do exactly what the developer suggested and fork them without the broken commits. If npm (or cargo, or any other programming language ecosystem package manager) makes it difficult for downstream clients to perform that fork, this is an argument for changing npm in order to make that easier. Build additional functionality into npm to make it easier to switch away from broken or otherwise-unwanted specific versions of a package anywhere in your project’s dependency tree, without having to coordinate this with other package maintainers.

                                                                          1. 31

                                                                            The developer was absolutely trolling (in the best sense of the word)

                                                                            To the extent there is any good trolling, it consists of saying tongue-in-cheek things to trigger people with overly rigid ideas. Breaking stuff belonging to people who trusted you is not good in any way.

                                                                            I don’t blame him certainly; he’s entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order

                                                                            And GitHub was free to dump his account for his egregious bad citizenship. I’m glad they did, because this kind of behavior undermines the kind of collaborative trust that makes open source work.

                                                                            to express his displeasure at companies using his software without compensating him in the way he would like.

                                                                            Take it from me: the way to get companies to compensate you “in six figures” for your code is to release your code commercially, not open source. Or to be employed by said companies. Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.

                                                                            1. 33

                                                                              No I think the greater fool is the one who can’t tolerate changes like this in free software.

                                                                              1. 1

                                                                                It’s not foolish to trust, initially. What’s foolish is to keep trusting after you’ve been screwed. (That’s the lesson of the Prisoner’s Dilemma.)

                                                                                A likely lesson companies will draw from this is that free software is a risk, and that if you do use it, stick to big-name reputable projects that aren’t built on a house of cards of tiny libraries by unknown people. That’s rather bad news for ecosystems like node or RubyGems or whatever.

                                                                              2. 12

                                                                                Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.

Thank you. This is the point everybody seems to be missing.

                                                                                1. 49

                                                                                  The author of these libraries stopped whining and took action.

                                                                                  1. 3

                                                                                    Worked out a treat, too.

                                                                                    1. 5

                                                                                      I mean, it did. Hopefully companies will start moving to software stacks where people are paid for their effort and time.

                                                                                      1. 6

He also set fire to the building making bombs at home; maybe he’s not a great model.

                                                                                        1. 3

                                                                                          Not if you’re being responsible and pinning your deps though?

                                                                                          Even if that weren’t true though, the maintainer doesn’t have any obligation to companies using their software. If the company used the software without acquiring a support contract, then that’s just a risk of business that the company should have understood. If they didn’t, that’s their fault, not the maintainer’s - companies successfully do this kind of risk/reward calculus all the time in other areas, successfully.

                                                                                          1. 1

                                                                                            I know there are news reports of a person with the same name being taken into custody in 2020 where components that could be used for making bombs were found, but as far as I know, no property damage occurred then. Have there been later reports?

                                                                                          2. 3

                                                                                            Yeah, like proprietary or in-house software. Great result for open source.

                                                                                            Really, if I were a suit at a company and learned that my product was DoS’d by source code we got from some random QAnon nutjob – that this rando had the ability to push malware into his Git repo and we’d automatically download and run it – I’d be asking hard questions about why my company uses free code it just picked up off the sidewalk, instead of paying a summer intern a few hundred bucks to write an equivalent library to printf ANSI escape sequences or whatever.

                                                                                            That’s inflammatory language, not exactly my viewpoint but I’m channeling the kind of thing I’d expect a high-up suit to say.

                                                                                  2. 4

                                                                                    There have been plenty of wakeup calls for people using Github, and I doubt one additional one will change the minds of very many people

Each new incident is another straw. For some, it will be the one that breaks the camel’s back.

                                                                                    1. 4

                                                                                      in order to express his displeasure at companies using his software without compensating him in the way he would like.

This sense of entitlement is amusing. These people totally miss the point of free software. They make something that many people find useful and use (very much thanks to it being released under a free license, mind you), and then they feel entitled to some sort of material/monetary compensation.

This is not a Miss Universe contest. It’s not too hard to understand that had this project been non-free, it would probably not have gotten anywhere. This is the negative side of GitHub. GitHub has been an enormously valuable resource for free software. Unfortunately, when it grows so big, it inevitably also attracts the kind of people who only like the free aspect of free software when it benefits them directly.

                                                                                      1. 28

                                                                                        This people totally miss the point of free software.

                                                                                        An uncanny number of companies (and people employed by said companies) also totally miss the point of free software. They show up in bug trackers all entitled like the license they praise in all their “empowering the community” slides doesn’t say THE SOFTWARE IS PROVIDED “AS IS” in all fscking caps. If you made a list of all the companies to whom the description “companies that only like the free aspect of free software when it benefits them directly” doesn’t apply, you could apply a moderately efficient compression algorithm and it would fit in a boot sector.

                                                                                        I don’t want to defend what the author did – as someone else put it here, it’s dumbshittery of an advanced level. But if entitlement were to earn you an iron “I’m an asshole” pin, we’d have to mine so much iron ore on account of the software industry that we’d trigger a second Iron Age.

                                                                                        This isn’t only on the author, it’s what happens when corporate entitlement meets open source entitlement. All the entitled parties in this drama got exactly what they deserved IMHO.

Now, one might argue that what this person did affected not just all those entitled product managers who had some tough explaining to do to their suit-wearing bros, but also a bunch of good FOSS “citizens”. That’s absolutely right, but while this may have been unprofessional, the burden of embarrassment should be equally shared by the people who took a bunch of code developed by an independent, unpaid developer, in their spare time – in other words, a hobby project – without any warranty, and then baked it into their super professional codebases without any contingency plan for “what if all that stuff written in all caps happens?”. This happened to be intentional, but a re-enactment of this drama is just one half-drunk evening hacking session away.

                                                                                        It’s not like they haven’t been warned – when a new dependency is proposed, that part is literally the first one that’s read, and it’s reviewed by a legal team whose payment figures are eye-watering. You can’t build a product based only on the good parts of FOSS. Exploiting FOSS software only when it benefits yourself may also be assholery of an advanced level, but hoping that playing your part shields you from all the bad parts of FOSS is naivety of an advanced level, and commercial software development tends to punish that.

                                                                                        1. 4

                                                                                          They show up in bug trackers all entitled like the license they praise in all their “empowering the community” slides doesn’t say THE SOFTWARE IS PROVIDED “AS IS” in all fscking caps

                                                                                          Slides about F/OSS don’t say that because expensive proprietary software has exactly the same disclaimer. You may have an SLA that requires bugs to be fixed within a certain timeframe, but outside of very specialised markets you’ll be very hard pressed to find any software that comes with any kind of liability for damage caused by bugs.

                                                                                          1. 1

                                                                                            Well… I meant the license, not the slides :-P. Indeed, commercial licenses say pretty much the same thing. However, at least in my experience, the presence of that disclaimer is not quite as obvious with commercial software – barring, erm, certain niches.

Your average commercial license doesn’t require proprietary vendors to issue refunds, provide urgent bugfixes or stick by their announced deadlines for fixes and features. But the practical constraints of staying in business are pretty good at compelling them to do some of these things.

                                                                                            I’ve worked both with and without SLAs so I don’t want to sing praises to commercial vendors – some of them fail miserably, and I’ve seen countless open source projects that fix security issues in less time than it takes even competent large vendors to call a meeting to decide a release schedule for the fix. But expecting the same kind of commitment and approachability from Random J. Hacker is just not a very good idea. Discounting pathological arseholes and know-it-alls, there are perfectly human and understandable reasons why the baseline of what you get is just not the same when you’re getting it from a development team with a day job, a bus factor of 1, and who may have had a bad day and has no job description that says “be nice to customers even if you had a bad day or else”.

The universe npm has spawned is particularly susceptible to this. It’s a universe where adding a PNG to JPG conversion function pulls forty dependencies, two of which are different and slightly incompatible libraries which handle emojis just in case someone decided to be cute with file names, and they’re going to get pulled even if the first thing your application does is throw non-alphanumeric characters out of any string, because they’re nth order dependencies with no config overrides. There’s a good chance that no matter what your app does, 10% of your dependencies are one-person resume-padding efforts that turned out to be unexpectedly useful and are now being half-heartedly maintained largely because you never know when you’ll have to show someone you’re a JavaScript ninja guru in this economy. These packages may well have the same “no warranty” sticker that large commercial vendors put on theirs, but the practical consequences of having that sticker on the box often differ a lot.

                                                                                            Edit: to be clear, I’m not trying to say “proprietary – good and reliable, F/OSS – slow and clunky”, we all know a lot of exceptions to both. What I meant to point out is that the typical norms of business-to-business relations just don’t uniformly apply to independent F/OSS devs, which makes the “no warranty” part of the license feel more… intense, I guess.

                                                                                        2. 12

The entitlement sentiment goes both ways: companies expect free code and get upset if the maintainer breaks backward compatibility. Since when is there an obligation to behave “responsibly”?

                                                                                          When open source started, there wasn’t that much money involved and things were very much in the academic spirit of sharing knowledge. That created a trove of wealth that companies are just happy to plunder now.

                                                                                        3. 1

                                                                                          releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                                                                          Was that honestly the intent? Because in that case: what hubris! These libraries were existing libraries translated to JS. He didn’t do any of the hard work.

                                                                                        4. 8

                                                                                          There is further variation on the “bricked” term, at least in the Android hacker’s community. You might hear things like “soft bricked” which refers to a device that has the normal installation / update method not working, but could be recovered through additional tools, or perhaps using JTAG to reprogram the bootloader.

                                                                                          There is also “hard bricked” which indicates something completely irreversible, such as changing the fuse programming so that it won’t boot from eMMC anymore. Or deleting necessary keys from the secure storage.

                                                                                          1. 3

                                                                                            this action did not break any builds that were set up responsibly; only builds which tell the system “just give me whatever version you feel like regardless of whether it works” which like … yeah, of course things are going to break if you do that! No one should be surprised.

                                                                                            OK, so, what’s a build set up responsibly?

                                                                                            I’m not sure what the expectations are for packages on NPM, but the changes in that colors library were published with an increment only to the patch version. When trusting the developers (and if you don’t, why would you use their library?), not setting in stone the patch version in your dependencies doesn’t seem like a bad idea.

                                                                                            1. 26

                                                                                              When trusting the developers (and if you don’t, why would you use their library?), not setting in stone the patch version in your dependencies doesn’t seem like a bad idea.

                                                                                              No, it is a bad idea. Even if the developer isn’t actively malicious, they might’ve broken something in a minor update. You shouldn’t ever blindly update a dependency without testing afterwards.

                                                                                              1. 26

                                                                                                Commit package-lock.json like all of the documentation tells you to, and don’t auto-update dependencies without running CI.

                                                                                                1. 3

                                                                                                  And use npm shrinkwrap if you’re distributing apps and not libraries, so the lockfile makes it into the registry package.

                                                                                                2. 18

Do you really think that a random developer, however well intentioned, is capable of evaluating whether any given change they make will have an observable behavioral impact on downstream projects they’re not even aware of, let alone have seen the source for and have any idea how it consumes their project?

                                                                                                  I catch observable breakage coming from “patch” revisions easily a half dozen times a year or more. All of it accidental “oh we didn’t think about that use-case, we don’t consume it like that” type stuff. It’s truly impossible to avoid for anything but the absolute tiniest of API surface areas.

                                                                                                  The only sane thing to do is to use whatever your tooling’s equivalent of a lock file is to strictly maintain the precise versions used for production deploys, and only commit changes to that lock file after a full re-run of the test suite against the new library version, patch or not (and running your eyeballs over a diff against the previous version of its code would be wise, as well).

It’s wild to me that anyone would just let their CI slip version updates into a deploy willy-nilly.

                                                                                                  1. 11

                                                                                                    This neatly shows why Semver is a broken religion: you can’t just rely on a version number to consider changes to be non-broken. A new version is a new version and must be tested without any assumptions.

To clarify, I’m not against specifying dependencies to automatically update to new versions per se, as long as there’s a CI step to build and test the whole thing before it goes into production, to give you a chance to pin the broken dependency to a last-known-good version.

                                                                                                    1. 7

Semver doesn’t guarantee anything though and doesn’t promise anything. It’s more of an indicator of what to expect. Sure, you should test new versions without any assumptions, but that doesn’t say anything about semver. What that versioning scheme allows you to do, though, is put minor/revision updates straight into CI and an automatic PR, while blocking major ones until manual action.

                                                                                                    2. 6

                                                                                                      The general form of the solution is this:

                                                                                                      1. Download whatever source code you are using into a secure versioned repository that you control.

                                                                                                      2. Test every version that you consider using for function before you commit to it in production/deployment/distribution.

                                                                                                      3. Build your system from specific versions, not from ‘last update’.

                                                                                                      4. Keep up to date on change logs, security lists, bug trackers, and whatever else is relevant.

                                                                                                      5. Know what your back-out procedure is.

                                                                                                      These steps apply to all upstream sources: language modules, libraries, OS packages… dependency management is crucial.

                                                                                                      1. 3

Amazon does this. Almost no one else does, but that’s a choice with benefits (mostly saving the setup effort) and consequences (all of this here).

                                                                                                      2. 6

                                                                                                        When trusting the developers (and if you don’t, why would you use their library?)

                                                                                                        If you trust the developers, why not give them root on your laptop? After all, you’re using their library so you must trust them, right?

                                                                                                        1. 7

                                                                                                          There’s levels to trust.

I can believe you’re a good person by reading your public posts online, but I’m not letting you babysit my kids.

                                                                                                      3. 2

                                                                                                        Why wouldn’t this behavior be banned by any company?

                                                                                                        1. 2

How would they ban them? They’re not paying them. Unless you mean the people who did not pin the dependencies?

                                                                                                          1. 4

                                                                                                            I think it is bannable on any platform, because it is malicious behavior - that means he intentionally caused harm to people. It’s not about an exchange of money, it’s about intentional malice.

                                                                                                          2. 1

Because it’s his code and even the license says “no guarantees”?

                                                                                                            1. 2

                                                                                                              The behavior was intentionally malicious. It’s not about violating a contract or guarantee. For example, if he just decided that he was being taken advantage of and removed the code, I don’t think that would require a ban. But he didn’t do that - he added an infinite loop to purposefully waste people’s time. That is intentional harm, that’s not just providing a library of poor quality with no guarantee.

Beyond that, if that loop went unnoticed on a build server and cost the company money, I think he should be legally responsible for those damages.

                                                                                                        1. 3

I started using the Moonlander a year ago and now I’m slowly realizing that what helps me the most isn’t the hardware, but the abilities I get from QMK. I was about to ask if Kinesis is using QMK, but then I checked their website and it seems that the Professional version is using ZMK. ZMK should be quite similar to QMK; does anyone have experience with it?

                                                                                                          1. 4

I think ZMK’s raison d’être is Bluetooth support.

                                                                                                            1. 1

You can make the old Kinesis work with QMK as well - but it might be a bit of an adventure. :)

                                                                                                              https://michael.stapelberg.ch/posts/2020-07-09-kint-kinesis-keyboard-controller/

                                                                                                              1. 1

                                                                                                                ZMK is BSD licensed (I think? Not GPL, anyways), so it can be linked to a proprietary vendor Bluetooth driver blob, which is presumably why they chose it.

                                                                                                                But, last I heard, it doesn’t support macros, mouse keys, or tap dance, unlike QMK. So that’s a bit too disappointing for me to consider it. I’m sticking with my ergodox.

                                                                                                                1. 2

                                                                                                                  MIT, but I think you’re right regarding the bluetooth driver.

                                                                                                              1. 3

                                                                                                                Start by implementing a graph based algorithm.

                                                                                                                My favorite personal projects are all intrinsically graph-based. Is Rust a poor fit for me?

                                                                                                                1. 3

Use something like petgraph and you will be fine. Or, if you’re implementing it by hand, store the nodes in an arena/Vec and use indices instead of pointers.

                                                                                                                  It’s a bit of a bigger step for newcomers, because you’re learning both “rust” and “how to deal with graphs in rust without fighting the borrow checker” at the same time, instead of one and then the other.
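
As a rough sketch of the arena-plus-indices approach (all names here are made up for illustration; petgraph gives you a more complete version of the same idea):

```rust
// Nodes live in a Vec (the arena); edges refer to nodes by index,
// so nothing holds long-lived borrows for the borrow checker to fight over.
struct Graph<T> {
    nodes: Vec<T>,
    edges: Vec<(usize, usize)>, // (from, to) as indices into `nodes`
}

impl<T> Graph<T> {
    fn new() -> Self {
        Graph { nodes: Vec::new(), edges: Vec::new() }
    }

    fn add_node(&mut self, value: T) -> usize {
        self.nodes.push(value);
        self.nodes.len() - 1 // the index acts as a cheap, copyable "pointer"
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.edges.push((from, to));
    }

    fn neighbors(&self, node: usize) -> impl Iterator<Item = usize> + '_ {
        self.edges
            .iter()
            .filter(move |(from, _)| *from == node)
            .map(|(_, to)| *to)
    }
}

fn main() {
    let mut g = Graph::new();
    let a = g.add_node("a");
    let b = g.add_node("b");
    g.add_edge(a, b);
    assert_eq!(g.neighbors(a).collect::<Vec<_>>(), vec![b]);
}
```

The trade-off is that a stale or out-of-range index becomes a logic bug the compiler won’t catch, so it pays to test these structures carefully.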

                                                                                                                  1. 1

                                                                                                                    The takeaway message here isn’t “graphs are impossible, or even particularly hard to do in Rust (once you know Rust)”.

                                                                                                                    It’s that Rust makes you think very explicitly about the hardest part of graph programming (provably avoiding dangling pointers in all cases, in all codepaths), and that trying to work at this level of explicitness in a language you haven’t actually learned yet is an easy way to self-sabotage the learning process.

                                                                                                                    Learn Rust. Learn lifetimes and ownership and how those things interrelate to allow for correct memory management, and how Rust deals with the heap and runtime ways of enforcing ownership rules, etc. Learn about strategies like arena allocation for when you want children to be able to point at their parents without owning their memory. Then build your fun graph projects.

                                                                                                                    Just don’t try to walk and chew bubblegum at the same time when you’re not even crawling yet.

                                                                                                                    1. 1

Same for me, and I still manage to write them. Just be aware that you’ll suffer more than other people learning Rust if you start here. The best working approach is the arena+index one, but it requires you to be very cautious (unit tests help, of course), since it means you’re going around the protection given by the ownership model.

                                                                                                                    1. 7

                                                                                                                      Volunteers are doing their best in their spare time out of passion, or because they are (or were) having fun. They feel tremendous responsibility

                                                                                                                      This is the biggest part of the issue. Why feel responsibility to people who don’t care about you?

                                                                                                                      1. 7

                                                                                                                        Why feel responsibility to people who don’t care about you?

It’s a fair question intellectually, but the emotional reality of open source maintainership is that when you’ve built something you care about and people are yelling at you about how some bug you feel responsible for not catching is ruining their lives, that has an emotional impact regardless of how logical it might seem to disregard any voice that isn’t paying you.

                                                                                                                        We are, at the end of the day, human.

                                                                                                                        1. 1

                                                                                                                          Agreed. I’m not sure paying helps that particular problem. Being less shitty to other people would help, but I don’t see that happening either…

                                                                                                                        2. 6

                                                                                                                          I have some projects where the process is like this: 1) make something because I want it, 2) put the code on GitHub and lightly publicise it because it’s something other people may want, 3) have it as my active project for a while, then 4) my interest in the project declines (it does what I need it to do) as its popularity and issue/pull request counts increase.

                                                                                                                          Your choices are: 1) invest much more time than you’re comfortable with fixing issues which don’t affect you for the benefit of people you don’t know, 2) invest a lot of effort to find a new maintainer and gamble with a lot of other people’s security, 3) officially kill it off and hope someone forks it, or 4) leave it unattended and constantly feel bad as issue counts, pull request counts and stars accrue. None of the options are easy and none of the options are good.

                                                                                                                          Having a somewhat popular open source project is the dream of every programmer who hasn’t had a somewhat popular open source project.

                                                                                                                          1. 2

Yes, I usually pick 4 and figure that if people really care about it they’ll offer to help. But you’re right, none are ideal. Still, if someone came and offered to pay me full time for one of my “popular” projects I wouldn’t take it anyway. I guess I would suggest they take the money and pay someone else to do the maintenance work.

                                                                                                                          2. 2

                                                                                                                            Should open source projects deliberately introduce more sharp edges to keep users on their toes, and maybe induce them to pay for support? Serious question

                                                                                                                            1. 2

                                                                                                                              No, but if I do something as an unpaid passion project then I do it in the time I have for it and do what I like with it. If you want prompt security updates, that’s up to you. In most cases paying me would not help, since in my case I’m kinda busy.

                                                                                                                          1. 19

                                                                                                                            I would take this request more seriously if cryptographers* wrote safer C code, or even better, used [a small subset of] C++. As it is, this comes off as a request for the compiler to save them from their highly dangerous coding styles. In C crypto APIs and internals I see a lot of:

                                                                                                                            • Functions that declare key and digest parameters as void*. This completely guts the compiler’s ability to type-check, or even to verify the length of the input data.
                                                                                                                            • Closely related, a lack of data types. Why isn’t there a struct AES256? Every time I have to declare a char key[32] it’s an opportunity to get the length wrong, and having an AES_256_KEYLEN macro isn’t a big improvement.
• Returning values as untyped “out” parameters, e.g. void *out_digest. This obscures the API and has the same lack of typing. Why not just return a SHA256 struct?
                                                                                                                            • Lack of “non-null” parameter annotations. GCC and Clang have supported these for years, and with the static analyzers and runtime sanitizers they have saved my butt many times. Even better, why not use C++ references, which strictly enforce this at compile time?
                                                                                                                            • Overly general functions that support multiple types/algorithms, making it impossible for the machine to verify type mismatches. Stuff like “if you pass kAES256CTR as the algorithm, then the key must point to an AES256 key and the padding must be XX bytes; or if you pass kRot13 as the algorithm…” (I’m looking at you, Apple: your execrable Keychain API takes this to insane degrees, with functions that take 2 parameters that can be interpreted in dozens of ways. Not making this up. Trying to use SecItemCopyMatching is like playing a sadistic escape-room game where you spend hours clicking everything in sight trying not to crash, then throw the computer at the wall.)

                                                                                                                            I’m not just whining about this stuff. I wrote a whole C++ wrapper around Monocypher that demonstrates what I think a safer, idiomatic, hard-to-misuse crypto API should be like.

                                                                                                                            * Forgive the hyperbole, I can only speak of the authors of the C crypto APIs I’ve looked at, which include NaCl, libSodium, Monocypher, Intel’s CDSA, and Apple’s Security framework.
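
To make the “typed keys and digests” point above concrete, here is a rough sketch of the API shape being argued for (all type and function names are invented for illustration, and the bodies are placeholders, not real cryptography); the same shape is expressible in the C++ wrapper mentioned above, the sketch is in Rust purely for illustration:

```rust
// A key is its own type with the length baked in, not a void* plus a macro.
pub struct Aes256Key([u8; 32]);

// A digest is returned by value as a concrete type,
// not written through an untyped out-parameter.
pub struct Sha256Digest(pub [u8; 32]);

impl Aes256Key {
    // The only way to build a key is from exactly 32 bytes.
    pub fn new(bytes: [u8; 32]) -> Self {
        Aes256Key(bytes)
    }
}

// A caller can't pass a digest where a key is expected, or a 16-byte
// buffer where 32 bytes are required; the compiler rejects it.
pub fn encrypt(key: &Aes256Key, plaintext: &[u8]) -> Vec<u8> {
    // Placeholder body (NOT encryption): a real implementation would
    // delegate to a vetted cipher. Only the API shape matters here.
    let _ = key.0;
    plaintext.to_vec()
}

pub fn sha256(_data: &[u8]) -> Sha256Digest {
    // Placeholder: the digest comes back by value, no out-parameter.
    Sha256Digest([0u8; 32])
}

fn main() {
    let key = Aes256Key::new([7u8; 32]);
    let ciphertext = encrypt(&key, b"hello");
    let digest = sha256(&ciphertext);
    assert_eq!(digest.0.len(), 32);
}
```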

                                                                                                                            1. 3

What are your thoughts on meeting these folks in the middle? It seems like the whole crypto stack has a bit of Stockholm syndrome going on with unsafe languages.

                                                                                                                              • A subset of C++ that is checked by some tool for correctness, etc. I don’t know exactly how this would work, but yeah, why not.
                                                                                                                              • Use a tool like Coq and emit C/C++
                                                                                                                              • Use Rust.
                                                                                                                              • Use Rust, but convert it to C. Maybe Rust -> Wasm -> C. Alon Zakai has been exploring using Rust in this way with WasmBoxC
                                                                                                                              • Please
                                                                                                                              1. 6

Fiat-Crypto is “use Coq and emit C”. It is in production: Firefox has used it since Firefox 69 (2019). We should complete a verified cryptography stack and encourage everyone to switch from OpenSSL to it.

                                                                                                                                1. 3

Everyone who expects “Rust will make everything safe” misses the point: Rust eliminates universal vulnerabilities, but it doesn’t prevent information leakage through timing. Nor does Coq. There is a whole series of issues Rust eliminates, but information leakage through timing is not one of them. It may accidentally eliminate a subset of timing issues by flagging them at compile time, but not for anything that’s unbounded.

                                                                                                                                  Safety is a spectrum. Maybe Rust will get a decorator at some point that will require fixed-time loops for any functions used within a certain boundary, but that’s not something that can be done right now.
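
A minimal sketch of the kind of thing the language can’t see (using only the standard library; in practice you’d reach for a vetted constant-time crate): both functions below type-check identically, but one leaks, through its running time, how many leading bytes of a secret matched.

```rust
// Leaks timing: returns as soon as a difference is found, so runtime
// depends on how much of the secret matches the attacker's guess.
fn leaky_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for (x, y) in a.iter().zip(b) {
        if x != y {
            return false;
        }
    }
    true
}

// The usual constant-time idiom: OR together the XOR of every byte pair
// so the loop always runs to the end. Even this relies on the optimizer
// not "helpfully" short-circuiting it, which is the gap Rust-the-language
// doesn't close.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(!leaky_eq(b"secret", b"sesame"));
    assert!(ct_eq(b"secret", b"secret"));
}
```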

                                                                                                                                  1. 8

Don’t C and Rust both have issues when it comes to side channels? How is Rust different in this regard from C? I was addressing /u/snej’s points.

I see points like this brought up whenever it comes to using something better, and it feels like the bar to replace C is higher than the bar for C itself: unless something solves all the problems, even the ones the original system doesn’t address, it supposedly cannot be used as a replacement. Does this have a name?

                                                                                                                                    Are you saying that we shouldn’t use Rust or Coq to change the position on the spectrum of crypto software?

                                                                                                                                    1. 9

                                                                                                                                      Does this have a name?

“Letting the perfect be the enemy of the good” is the most general form of the phenomenon: refusing an improvement on the original simply because it doesn’t solve every possible problem with the original.

                                                                                                                                    2. 8

                                                                                                                                      That’s the difference between safety and privacy/security. Rust claims to be neither private nor secure; that is, I recall no claim that Rust code is inherently secure, or that data used by a Rust program is thereby kept private. Respectfully, that’s a straw-man argument, and I reject the notion that any of these exist on the same spectrum. Rather, Rust’s claim is to make as yet unsafe things safe, and at that it largely succeeds.

                                                                                                                                      Now, all that being said, I’m sure there are formal methods projects out there that do aim for that.

                                                                                                                                  2. 2

                                                                                                                                    It isn’t just cryptographers that would benefit from sane semantics.

                                                                                                                                    1. 2

                                                                                                                                      Here is a more direct link for your “not making this up” reference.

                                                                                                                                    1. 6

                                                                                                                                      We have already seen how disruptive changing language can be when the Python cryptography package added a Rust dependency which in turn changed the list of supported platforms and caused a lot of community leaders to butt heads.

My understanding of the situation was that it only broke the package on unsupported platforms that others were unofficially supporting downstream. Said others also missed the warning on the mailing list months in advance (AFAICT because they simply weren’t following it and/or it wasn’t loud enough), which frankly is kind of alarming given that it’s a security package.

                                                                                                                                      Link to the previous discussion of this whole controversy: https://lobste.rs/s/f4chm2/dependency_on_rust_removes_support_for

                                                                                                                                      1. 5

                                                                                                                                        Since a few people on HN misinterpreted the purpose of mentioning that example, I’ll preemptively quote here my reasoning behind its addition to the article:

                                                                                                                                        The point about the Python package example is not to say that Zig can get on platforms where Rust can’t, but rather that the C infrastructure that we all use is not that easy to replace and every time you touch something, regardless of how decrepit and broken it might have been, you will irritate and break someone else’s use case, which can be a necessary evil sometimes but not always.

                                                                                                                                        1. 3

I’m still not sure you’ve chosen a good example – that Python cryptography package was already de facto broken on those unsupported platforms (in the sense that crypto that the developers have never even attempted to make work correctly with the build dependencies that were in use should never have been trusted to encrypt anything) long before Rust ever showed up on the scene. The entire controversy around that package was about people mistaking “can be coerced into compiling on” for “works correctly on”, and thereby assuming and foisting onto end users a lot of ill-conceived, dangerous risks.

                                                                                                                                          “Zig would have allowed people to keep lying to themselves in this case” seems…uncompelling. There must be better examples & arguments that could be made here? Not “irritating and breaking” somebody’s broken, dangerous usecase is just not a selling point. It’s rather the opposite if you’re a package author who would prefer that people didn’t continue to build an unsupported footgun for users to shoot themselves with out of your work.

                                                                                                                                          And this all misses teh_cyanz’s larger point, which is that the quote

                                                                                                                                          changed the list of supported platforms

                                                                                                                                          is simply incorrect. It continued to build on all supported platforms. It broke the build for some people on platforms that had never been supported (this was explicitly stated by the maintainer), but who had just happened to hack together builds that may or may not have ever even worked correctly.

                                                                                                                                      1. 2

                                                                                                                                        The author pronounces it [aɡe̞], like the Italian “aghe”.

Does the author mean aghi? Aghe is not an Italian word.

                                                                                                                                        Now I am confused. Is the pronunciation as ah-gee (as you would say aghi in Italian) or ah-geh (as you would pronounce aghe in Italian if that were a word)?

                                                                                                                                        1. 3

                                                                                                                                          It seems unlikely to me that the majority of people who encounter this library at its point of use will think to investigate how it’s pronounced. I expect most will assume it’s the English word. Naming things is hard; there are many pitfalls.

                                                                                                                                          1. 3

                                                                                                                                            I’m also confused. It links to google translate which translates it to “needles”, but I’ve never heard the word pluralized like that. I’m guessing it comes from FiloSottile’s dialect.

                                                                                                                                            1. 1

                                                                                                                                              I think I just got it. The link to google translate is there so you can play the pronunciation and not to translate it to English. I guess that’s helpful for everyone that is not Italian, lol.

                                                                                                                                            2. 2

                                                                                                                                              The latter. I’m sure he used to describe it as pronounced the Japanese way, but perhaps even fewer people understand that :-)

                                                                                                                                              1. 1

I also thought it was pronounced like in chicken karaage. However, I now suspect I pronounce that wrong too, since I say “ah-hey” rather than “ah-geh”.

                                                                                                                                                1. 3

                                                                                                                                                  Heh yeah, for the record you’re pronouncing it wrong - a mora consisting of a g followed by any vowel is always a hard G sound in Japanese.

                                                                                                                                                  So it’s kah-rah-ah-geh (more or less, in a standardish American accent, although with no aspiration because those Hs are just there to steer you towards the right vowel sound, and with the vowel sounds held for a somewhat shorter period of time than you might default to)

                                                                                                                                            1. 10

                                                                                                                                              Q: Why choose Docker or Podman over Nix or Guix?

Edit with some rephrasing: why run containers over a binary cache? They can both do somewhat similar things in creating a reproducible build (so long as you aren’t running apt upgrade in your container’s config file) and laying out how to glue your different services together, but is there a massive advantage of one over the other?

                                                                                                                                              1. 28

                                                                                                                                                I can’t speak for the OP, but for myself there are three reasons:

                                                                                                                                                1. Docker for Mac is just so damn easy. I don’t have to think about a VM or anything else. It Just Works. I know Nix works natively on Mac (I’ve never tried Guix), but while I do development on a Mac, I’m almost always targeting Linux, so that’s the platform that matters.

                                                                                                                                                2. The consumers of my images don’t use Nix or Guix, they use Docker. I use Docker for CI (GitHub Actions) and to ship software. In both cases, Docker requires no additional effort on my part or on the part of my users. In some cases I literally can’t use Nix. For example, if I need to run something on a cluster controlled by another organization there is literally no chance they’re going to install Nix for me, but they already have Docker (or Podman) available.

                                                                                                                                                3. This is minor, I’m sure I could get over it, but I’ve written a Nix config before and I found the language completely inscrutable. The Dockerfile “language”, while technically inferior, is incredibly simple and leverages shell commands I already know.

                                                                                                                                                1. 15

                                                                                                                                                  I am not a nix fan, quite the opposite, I hate it with a passion, but I will point out that you can generate OCI images (docker/podman) from nix. Basically you can use it as a Dockerfile replacement. So you don’t need nix deployed in production, although you do need it for development.

                                                                                                                                                  1. 8

As someone who is about to jump into nixos, I’d love to read more about why you hate nix.

                                                                                                                                                    1. 19

                                                                                                                                                      I’m not the previous commenter but I will share my opinion. I’ve given nix two solid tries, but both times walked away. I love declarative configuration and really wanted it to work for me, but it doesn’t.

                                                                                                                                                      1. the nix language is inscrutable (to use the term from a comment above). I know a half dozen languages pretty well and still found it awkward to use
                                                                                                                                                      2. in order to make package configs declarative the config options need to be ported to the nix language. This inevitably means they’ll be out of date or maybe missing a config option you want to set.
                                                                                                                                                      3. the docs could be much better, but this is typical. You generally resort to looking at the package configs in the source repo
4. nix packages, because of the design of the system, have no connection to real package versions. This is the killer for me, since the rest of the world works on these version numbers. If I want to upgrade from v1.0 to v1.1 there is no direct correlation in nix except for a SHA. How do you find that out? Look at the source repo again.
                                                                                                                                                      1. 4

                                                                                                                                                        This speaks to my experience with Nix too. I want to like it. I get why it’s cool. I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg) and the thing I want most is to define my /etc files in their native tongue under version control and for it all to work out rather than depend on Nix rendering the same files. I could even live with Nix-the-language if that were the case.

                                                                                                                                                        1. 3

                                                                                                                                                          I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg)

                                                                                                                                                          As a former Google SRE, I completely agree—GCL has a lot of quirks. On the other hand, nothing outside Google compares, and I miss it dearly. Abstracting complex configuration outside the Google ecosystem just sucks.

                                                                                                                                                          Yes, open tools exist that try to solve this problem. But only gcl2db can load a config file into an interactive interface where you can navigate the entire hierarchy of values, with traces describing every file:line that contributed to the value at a given path. When GCL does something weird, gcl2db will tell you exactly what happened.

                                                                                                                                                        2. 2

Thanks for the reply. I’m actually not a huge fan of DSLs, so this might be swaying me away from setting up nixos. I have a VM set up with it, and tbh the thought of me trawling through nix docs to figure out the magical phrase to do what I want does not sound like much fun. I’ll stick with arch for now.

                                                                                                                                                          1. 6

                                                                                                                                                            If you want the nix features but a general purpose language, guix is very similar but uses scheme to configure.

                                                                                                                                                            1. 1

I would love to use Guix, but the lack of nonfree software is a killer, as getting Steam running is a must. There’s no precedent for it being used in the unjamming communities I participate in, whereas Nix has a sizable following.

                                                                                                                                                              1. 2

So use Ubuntu as the host OS for Guix if you need Steam to work. Guix runs well on many OSes.

                                                                                                                                                        3. 10

                                                                                                                                                          Sorry for the very late reply. The problem I have with nixos is that it’s anti-abstraction in the sense that I elaborated on here. Instead it’s just the ultimate wrapper.

                                                                                                                                                          To me, the point of a distribution is to provide an algebra of packages that’s invariant in changes of state. Or to reverse this idea, an instance of a distribution is anything with a morphism to the category of packages.

                                                                                                                                                          Nix (and nixos) is the ultimate antithesis of this idea. It’s not a morphism, it’s a homomorphism. The structure is algebraic, but it’s concrete, not abstract.

People claim that “declarative” configuration is good, and it’s hard to attack such a belief, but people don’t really agree on what it really means. In Haskell it means that expressions have referential transparency, which is a good thing, but in other contexts when I hear people talk about declarative stuff I immediately shiver, expecting the inevitable pain. You can “declare” anything if you are precise enough, and that’s what nix does: it’s very precise. But what matters is not the declarations, it’s the interactions, and in nix interaction means copying sha256 hashes in an esoteric programming language. This is painful and as far away from abstraction as you can get.

                                                                                                                                                          Also notice that I said packages. Nix doesn’t have packages at all. It’s a glorified build system wrapper for source code. Binaries only come as a side effect, and there are no first class packages. The separation between pre-build artefacts and post-build artefacts is what can enable the algebraic properties of package managers to exist, and nix renounces this phase distinction with prejudice.

                                                                                                                                                          To come to another point, I don’t like how Debian (or you other favorite distribution) chooses options and dependencies for building their packages, but the fact that it’s just One Way is far more important to me than a spurious dependency. Nix, on the other hand, encourages pets. Just customize the build options that you want to get what you want! What I want is a standard environment, customizability is a nightmare, an anti-feature.

                                                                                                                                                          When I buy a book, I want to go to a book store and ask for the book I want. With nix I have to go to a printing press and provide instructions for printing the book I want. This is insanity. This is not progress. People say this is good because I can print my book into virgin red papyrus. I say it is bad exactly for the same reason. Also, I don’t want all my prints to be dated January 1, 1970.

                                                                                                                                                      2. 8

                                                                                                                                                        For me personally, I never chose Docker; it was chosen for me by my employer. I could maybe theoretically replace it with podman because it’s compatible with the same image format, which Guix (which is much better designed overall) is not. (But I don’t use the desktop docker stuff at all so I don’t really care that much; mostly I’d like to switch off docker-compose, which I have no idea whether podman can replace.)

                                                                                                                                                        1. 3

                                                                                                                                                          FWIW Podman does have a podman-compose functionality but it works differently. It uses k8s under the hood, so in that sense some people prefer it.

                                                                                                                                                        2. 2

This sums it up quite nicely for me 😄 and more eloquently than I could have put it.

                                                                                                                                                          1. 2

                                                                                                                                                            If you’re targeting Linux why aren’t you using a platform that supports running & building Linux software natively like Windows or even Linux?

                                                                                                                                                            1. 12

                                                                                                                                                              … to call WSL ‘native’ compared to running containers/etc via VMs on non-linux OS’s is a bit weird.

                                                                                                                                                              1. 11

                                                                                                                                                                I enjoy using a Mac, and it’s close enough that it’s almost never a problem. I was a Linux user for ~15 years and I just got tired of things only sorta-kinda working. Your experiences certainly might be different, but I find using a Mac to be an almost entirely painless experience. It also plays quite nicely with my iPhone. Windows isn’t a consideration, every time I sit down in front of a Windows machine I end up miserable (again, YMMV, I know lots of people who use Windows productively).

                                                                                                                                                                1. 3

                                                                                                                                                                  Because “targeting Linux” really just means “running on a Linux server, somewhere” for many people and they’re not writing specifically Linux code - I spend all day writing Go on a mac that will eventually be run on a Linux box but there’s absolutely nothing Linux specific about it - why would I need Linux to do that?

                                                                                                                                                                  1. 2

                                                                                                                                                                    WSL2-based containers run a lightweight Linux install on top of Hyper-V. Docker for Mac runs a lightweight Linux install on top of xhyve. I guess you could argue that this is different because Hyper-V is a type-1 hypervisor, whereas xhyve is a type-2 hypervisor using the hypervisor framework that macOS provides, but I’m not sure that either really counts as more ‘native’.

                                                                                                                                                                    If your development is not Linux-specific, then XNU provides a more complete and compliant POSIX system than WSL1, which are the native kernel POSIX interfaces for macOS and Windows, respectively.

                                                                                                                                                                2. 9

                                                                                                                                                                  Prod runs containers, not Nix, and the goal is to run the exact same build artifacts in Dev that will eventually run in Prod.

                                                                                                                                                                  1. 8

                                                                                                                                                                    Lots of people distribute dockerfiles and docker-compose configurations. Podman and podman-compose can consume those mostly unchanged. I already understand docker. So I can both use things other people make and roll new things without using my novelty budget for building and running things in a container, which is basically a solved problem from my perspective.

                                                                                                                                                                    Nix or Guix are new to me and would therefore consume my novelty budget, and no one has ever articulated how using my limited novelty budget that way would improve things for me (at least not in any way that has resonated with me).

                                                                                                                                                                    Anyone else’s answer is likely to vary, of course. But that’s why I continue to choose dockerfiles and docker-compose files, whether it’s with docker or podman, rather than Nix or Guix.

                                                                                                                                                                    1. 5

Not mentioned in other comments, but you also get process / resource isolation by default on docker/podman. Sure, you can configure service networking, cgroups, and namespaces on nix yourself, just like any other system, and set up the relevant network proxying. But getting that prepackaged and on by default is very handy.

                                                                                                                                                                      1. 2

                                                                                                                                                                        You can get a good way there without much fuss with using the Declarative NixOS containers feature (which uses systemd-nspawn under the hood).

                                                                                                                                                                      2. 4

                                                                                                                                                                        I’m not very familiar with Nix, but I feel like a Nix-based option could do for you what a single container could do, giving you the reproducibility of environment. What I don’t see how to do is something comparable to creating a stack of containers, such as you get from Docker Compose or Docker Swarm. And that’s considerably simpler than the kinds of auto-provisioning and wiring up that systems like Kubernetes give you. Perhaps that’s what Nix Flakes are about?

                                                                                                                                                                        That said I am definitely feeling like Docker for reproducible developer environments is very heavy, especially on Mac. We spend a significant amount of time rebuilding containers due to code changes. Nix would probably be a better solution for this, since there’s not really an entire virtual machine and assorted filesystem layering technology in between us and the code we’re trying to run.

                                                                                                                                                                        1. 3

Is Nix a container system…? I thought it was a package manager?

                                                                                                                                                                          1. 3

It’s not, but I understand the question as “you can run a well-defined nix configuration which includes your app, or a container with your app; they’re both reproducible, so why choose one over the other?”

                                                                                                                                                                          2. 1

                                                                                                                                                                            It’s possible to generate Docker images using Nix, at least, so you could use Nix for that if you wanted (and users won’t know that it’s Nix).

                                                                                                                                                                            1. 1

                                                                                                                                                                              These aren’t mutually exclusive. I run a few Nix VMs for self-hosting various services, and a number of those services are docker images provided by the upstream project that I use Nix to provision, configure, and run. Configuring Nix to run an image with hash XXXX from Docker registry YYYY and such-and-such environment variables doesn’t look all that different from configuring it to run a non-containerized piece of software.

                                                                                                                                                                            1. 7

I don’t follow the issue here. If glibc always versioned symbols and musl never versioned symbols, then every symbolic reference would be completely unambiguous. Alpine binaries would always use musl, and newly compiled binaries might use glibc while their dependencies would use musl (so both in one process), but every reference would still be unambiguous.

                                                                                                                                                                              This requires basic ABI cleanliness - things like a library cannot malloc() and expect its caller to free(), since those might be calls to different C libraries. Each module needs to not leak its C library implementation, but that’s achievable (although not necessarily achieved.)

                                                                                                                                                                              I think the issue is more that glibc doesn’t always version symbols, and ELF doesn’t have a two level namespace, unlike OS X. OS X uses library + symbol to resolve symbols, whereas ELF traditionally only looked for a symbol (without library) which is very problematic once two C libraries are loaded into one process and are trying to resolve the same symbol name. I’d be interested to know if Linux has changed in this regard; obviously each binary encodes which shared libraries it needs, so the symbol lookup not checking for a library name was always a bit odd. It’s also the basis for how things like LD_PRELOAD work, because they can resolve a symbol from a completely different library.

                                                                                                                                                                              If I’m right, the problem isn’t symbol versioning, it’s the lack of symbol versioning. With no symbol versioning, alpine-glibc wouldn’t have got off the ground.
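For readers who haven’t run into it, this is roughly what glibc-style symbol versioning looks like with the GNU toolchain; the library, symbol, and version-node names below are invented, so treat it as a sketch rather than a drop-in recipe:

```c
/* libdemo.c -- a made-up library exporting one public symbol, greet,
 * bound to two version nodes via GNU .symver directives. Callers linked
 * against DEMO_1.0 keep resolving to greet_v1; newly linked callers
 * pick up the DEMO_2.0 default. */
#include <stdio.h>

void greet_v1(void) { puts("greet 1.0"); }
void greet_v2(void) { puts("greet 2.0"); }

/* '@' binds a compatibility version, '@@' marks the default version. */
__asm__(".symver greet_v1, greet@DEMO_1.0");
__asm__(".symver greet_v2, greet@@DEMO_2.0");

/* Build sketch:
 *   gcc -shared -fPIC -Wl,--version-script=demo.map libdemo.c -o libdemo.so
 * with demo.map declaring the version nodes:
 *   DEMO_1.0 { global: greet; local: *; };
 *   DEMO_2.0 { global: greet; } DEMO_1.0;
 */
```

With both version nodes present, old and new binaries loaded into the same process each resolve their reference to greet unambiguously, which is the property being discussed above.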

                                                                                                                                                                              1. 7

                                                                                                                                                                                This requires basic ABI cleanliness - things like a library cannot malloc() and expect its caller to free(), since those might be calls to different C libraries.

                                                                                                                                                                                I think you have partially answered your own question. It’s really common for C libraries to have functions that take ownership of their arguments and require that those are heap allocated values, assuming they can free() them when they’re done, or conversely they’ll malloc() something and return it to the caller, and part of the specified API is that the caller is responsible for free()-ing it. So it’s actually really really common for one library to malloc() and another to free(), and if they’re calling out to different implementations of libc, Bad Things will happen.
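A tiny, entirely hypothetical example of the ownership pattern being described here, where the allocating call and the freeing call can end up in two different C libraries:

```c
/* "Library" side -- imagine this object file linked against musl. */
#include <stdlib.h>
#include <string.h>

char *parser_read_line(void) {
    /* allocated with the library's malloc() */
    return strdup("example line\n");
}

/* "Application" side -- imagine this linked against glibc. */
#include <stdio.h>

int main(void) {
    char *line = parser_read_line();
    fputs(line, stdout);
    /* freed with the application's free(); if the two modules really were
     * built against different C libraries, these would be two different
     * heaps and this is undefined behaviour */
    free(line);
    return 0;
}
```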

                                                                                                                                                                                Additionally any library that expects to “own” part of the process state (which definitely includes libc) is going to run into problems with multiple copies/versions of it linked into the same process. E.g. from How To Corrupt An SQLite Database File:

                                                                                                                                                                                As pointed out in the previous paragraph, SQLite takes steps to work around the quirks of POSIX advisory locking. Part of that work-around involves keeping a global list (mutex protected) of open SQLite database files. But, if multiple copies of SQLite are linked into the same application, then there will be multiple instances of this global list. Database connections opened using one copy of the SQLite library will be unaware of database connections opened using the other copy, and will be unable to work around the POSIX advisory locking quirks. A close() operation on one connection might unknowingly clear the locks on a different database connection, leading to database corruption.

                                                                                                                                                                                The scenario above sounds far-fetched. But the SQLite developers are aware of at least one commercial product that was released with exactly this bug. The vendor came to the SQLite developers seeking help in tracking down some infrequent database corruption issues they were seeing on Linux and Mac. The problem was eventually traced to the fact that the application was linking against two separate copies of SQLite. The solution was to change the application build procedures to link against just one copy of SQLite instead of two.

                                                                                                                                                                                1. 5

                                                                                                                                                                                  So it’s actually really really common for one library to malloc() and another to free()

                                                                                                                                                                                  Well, where I’m from, that’s a very clear no-no. There’s probably cultural differences at play here, but note that on Windows, the version of the C runtime library is determined by whatever compiler the application developer chooses, and any library exposing a stable ABI for application developers can’t know that in advance. Binary compatibility, when combined with continual C runtime library changes, implies a need to eliminate this pattern.

                                                                                                                                                                                  This leads to a handful of fairly simple patterns that always need to be followed:

1. Expose a function that can indicate the size of an allocation, have the caller allocate, and call a second time (or possibly a different function) with an appropriate buffer (a minimal sketch of this follows the list);
2. Have a library allocate an object and expose a handle (opaque pointer) where all operations on the pointer, including its destruction, are owned by the module that allocated it;
3. In rare cases, specify the allocator used to interchange allocations across a module boundary. This is probably the least common and least clean, but lingers in some places, e.g. the clipboard.
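Here is a minimal sketch of the first pattern, with invented names; the caller performs every allocation, so only one C runtime’s heap is ever involved:

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* "Library" side (invented API): report the size, then fill a buffer the
 * caller provides. The library never allocates on the caller's behalf. */
size_t widget_name_size(int widget_id) {
    (void)widget_id;
    return strlen("example-widget") + 1;    /* bytes needed, including the NUL */
}

int widget_name(int widget_id, char *buf, size_t buflen) {
    (void)widget_id;
    if (buflen < strlen("example-widget") + 1) return -1;
    strcpy(buf, "example-widget");
    return 0;
}

/* "Application" side: allocation and deallocation stay in the caller's C runtime. */
int main(void) {
    size_t n = widget_name_size(42);
    char *buf = malloc(n);                  /* caller's malloc ... */
    if (buf == NULL) return 1;
    if (widget_name(42, buf, n) == 0)
        puts(buf);
    free(buf);                              /* ... caller's free */
    return 0;
}
```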
                                                                                                                                                                                  1. 7

                                                                                                                                                                                    On Linux, from-source builds are the norm; binary compatibility is indeed fiddlier than I expect it is on proprietary platforms, and shipping cross-distro dynamically linked binaries can be done but is definitely a second-class citizen. Typically, a distro will have one version of libc installed, and if you want to drop a binary onto that distro you have a couple scenarios:

                                                                                                                                                                                    1. The binary was dynamically linked, and built against a compatible version of libc.
2. The binary was dynamically linked, and built against an incompatible version of libc. You’re probably SOL if you can’t build from source.
                                                                                                                                                                                    3. The binary is statically linked, in which case it doesn’t use any system libraries and talks directly to the kernel ABI (which is very stable).

                                                                                                                                                                                    For proprietary software vendors static linking isn’t a great option because LGPL licensed libraries require that the user can swap out a compatible version of the library – dynamic linking is usually how this is achieved if you’re not distributing source. So these vendors tend to go to the trouble of making sure (1) is the case, which can be done if the target system uses some version of glibc and you compile against a version that is at least as old as anything your users might be using.

In any case, the fact that this pattern does work on basically any Linux system means that applications will inevitably rely on it, and building a system where you might have two versions of malloc()/free() in the same process is indeed a recipe for disaster; the article is right to suggest that this is a terrible idea.

                                                                                                                                                                                    (Vague tangent; I read somewhere that internally at Google, if a service is exceeding its SLO, the maintainers will intentionally introduce artificial downtime, to ensure that users aren’t relying on it being more reliable than it’s supposed to be).

                                                                                                                                                                                  2. 5

                                                                                                                                                                                    So it’s actually really really common for one library to malloc() and another to free()

                                                                                                                                                                                    IME that’s not very common, and it’s actually very poor library design to return pointers that need to be cleaned up by the user directly with free()/delete.

                                                                                                                                                                                    The libraries I’ve used almost always provide explicit cleanup functions to handle the pointers they return. GDAL, to pick an example I’ve used recently, has a whole bunch of Create* functions that return pointers, and bunch of corresponding Destroy* functions that clean up after them.

                                                                                                                                                                                    Not only does it avoid the problem of C library differences, but it also lets the library swap in different mallocs (such as jemalloc) without impacting developers using the library, and it simplifies using the library from languages other than C (if I use your library from Python or Haskell, how do I ‘free’ a pointer?).
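A rough sketch of that kind of API, with hypothetical names (GDAL’s actual functions differ); the module that allocates is always the module that frees:

```c
#include <stdlib.h>
#include <string.h>

/* Opaque handle: callers only ever see a pointer, never the layout. */
typedef struct dataset dataset_t;

struct dataset {                        /* definition lives inside the library */
    char name[64];
};

dataset_t *dataset_create(const char *name) {
    dataset_t *d = malloc(sizeof *d);   /* the library's allocator ... */
    if (d == NULL) return NULL;
    strncpy(d->name, name, sizeof d->name - 1);
    d->name[sizeof d->name - 1] = '\0';
    return d;
}

void dataset_destroy(dataset_t *d) {
    free(d);                            /* ... and the library's free(), so the
                                         * library could swap in jemalloc (or any
                                         * other allocator) without breaking
                                         * callers in C or any other language */
}

/* Caller side: never calls malloc()/free() on library objects directly. */
int main(void) {
    dataset_t *d = dataset_create("elevation");
    /* ... use d only through library functions ... */
    dataset_destroy(d);
    return 0;
}
```

This is essentially the shape of the Create/Destroy pairs mentioned above.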

                                                                                                                                                                                    I’m not saying it doesn’t happen, but IMO it’s a “code smell”, and a good indicator I should look for a different library.

                                                                                                                                                                                    1. 2

                                                                                                                                                                                      I’m not saying it doesn’t happen, but IMO it’s a “code smell”, and a good indicator I should look for a different library.

                                                                                                                                                                                      Fair enough. “really common” isn’t exactly a precise number and I don’t have hard numbers for you; I would agree that it is the minority relative to encapsulating allocation in wrapper functions, but it is common enough that, in the context of the article, it doesn’t really matter whether it’s good or bad, if your system breaks code that does this you’re going to have a bad time.

                                                                                                                                                                                    2. 1

                                                                                                                                                                                      It’s really common for C libraries to have functions that take ownership of their arguments and require that those are heap allocated values, assuming they can free() them when they’re done, or conversely they’ll malloc() something and return it to the caller, and part of the specified API is that the caller is responsible for free()-ing it. So it’s actually really really common for one library to malloc() and another to free(), and if they’re calling out to different implementations of libc, Bad Things will happen.

                                                                                                                                                                                      I’m sorry, you’re posting with the voice of authority but I think you’re just wrong. That’s bad API design and it’s just begging for crashes. APIs that start off like that might be common, but sooner rather than later (assuming user adoption) someone is going to point this out and after the requisite denial period it’ll get fixed or the project will go nowhere because no serious user will integrate such a library into their code.

                                                                                                                                                                                      The biggest culprit is poorly designed C++ libraries that pass objects across API boundaries, which are then implicitly freed, but C APIs are much more resilient by nature since they typically pass pointers instead. Example: https://github.com/yue/yue/issues/82

                                                                                                                                                                                    3. 3

                                                                                                                                                                                      The problem is:

                                                                                                                                                                                      However, [alpine-glibc] is conceptually flawed, because it uses system libraries where available, which have been compiled against the musl C library.

                                                                                                                                                                                      So you’ve got an unholy, leaky mix of versioned and unversioned symbols from two different implementations calling into each other

                                                                                                                                                                                      1. 3

                                                                                                                                                                                        …with their own mallocs

                                                                                                                                                                                    1. 1

A weird choice by Apple, I think, to handle images differently. I wonder what this means for the future, with new technology, and whether it’ll really start falling behind.

                                                                                                                                                                                      1. 12

                                                                                                                                                                                        It’s not that weird; makes it easier for them to implement the codecs in one dylib that’s shared by all applications (saving RAM) and can use whatever hardware-specific stuff they use on various devices to codec the bits without exposing those implementation details to the world.

                                                                                                                                                                                        1. 7

                                                                                                                                                                                          Indeed. Image codecs are an attack surface (a few jailbreaks were thanks to TIFF decoder), so it’s better to have fewer, better tested copies.

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            And applications can use it, too. In TenFourFox we used the OS X AltiVec-accelerated built-in JPEG decoder to get faster JPEGs “for free.”

                                                                                                                                                                                          2. 6

From a user perspective I think it would be weirder if they didn’t do this – “oh you can view this image of type X in Safari but not Preview.app, because the decoder is statically linked into the former, but Preview.app can render type Y quickly because it leverages the core OS codec dylib while Safari doesn’t include a decoder for that one, or it only has some ultra-slow battery-eating software decoder someone contributed to WebKit”.

Doing it in one shared set of libraries for everything means that support is consistent, it’s easier to audit attack surface across the board (which Apple already struggles with, so I’d hardly encourage them to increase that surface), and optimizations only need to happen in exactly one place to leverage GPU features or custom IC blocks on their mobile SOCs. For a mobile browser you really want as few pure-software decoders as you can get away with for battery life reasons (more so for video than stills, but things like HEIF are starting to be reasonably heavyweight to decode without hardware support on image-heavy pages).

                                                                                                                                                                                          1. 5

                                                                                                                                                                                            Interesting, but the suggested remediation strategy seems overly complex? (Or I very well may be missing something…)

                                                                                                                                                                                            When a password reset is initiated, generate timestamp-random-additionalrandom, but only store the first two parts in the database column. The additionalrandom is new (and sent to the user) but never directly touches the database. …. When verifying password reset token, use the first two parts in a SELECT query (as currently implemented), but also re-calculate the SHA256 hash of the entire user-provided string and compare it (using a constant-time compare function) against the value stored in the database. If it doesn’t match, first invalidate the stored token then redirect the user to the login page. This disincentivizes attacks, because you only get one bite at the apple: As soon as you get the correct prefix from a timing leak, unless you guessed additionalrandom correctly, you have to start your attack all over again.

                                                                                                                                                                                            Could you not just… add the user id to the reset link (which is pretty typical ime) & add an expiry time column? Set expiry to Time.now + 15.minutes when generating the token, look up the account by the user ID, and then hard fail the comparison immediately if you’re outside the 15 minute window?

Then you don’t have to worry about how well your database (or in this case, Ruby) implements constant-time comparison functions, because you’ve scoped the attack to a particular user, and the number of requests needed to pull it off within that window is essentially guaranteed to melt the server before it could possibly succeed?

                                                                                                                                                                                            You can borrow the idea of wiping the token immediately on a non-match and eliminate the timestamp window, even.
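For what it’s worth, the constant-time comparison being discussed is small enough to write by hand; here’s a minimal C sketch (the function name is mine) using the usual trick of OR-ing together the XOR of every byte, so the running time doesn’t depend on where the first mismatch occurs:

```c
#include <stddef.h>

/* Constant-time equality check for two equal-length byte strings.
 * Unlike memcmp() or ==, it never exits early on a mismatch, so the
 * timing reveals nothing about how much of the token was correct. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```

The same shape works in any language; the important property is that every byte is always examined.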

                                                                                                                                                                                            1. 4

                                                                                                                                                                                              Could you not just… add the user id to the reset link (which is pretty typical ime) & add an expiry time column? Set expiry to Time.now + 15.minutes when generating the token, look up the account by the user ID, and then hard fail the comparison immediately if you’re outside the 15 minute window?

                                                                                                                                                                                              There’s already an expiry window, but it’s set to 24 hours not 15 minutes.

                                                                                                                                                                                              A lot of people and companies don’t like leaking database primary keys to the public (for various reasons), so using the user ID in a password reset email would violate their tenets.

                                                                                                                                                                                              1. 3

                                                                                                                                                                                                A lot of people and companies don’t like leaking database primary keys to the public (for various reasons), so using the user ID in a password reset email would violate their tenets.

                                                                                                                                                                                                I’m talking about Lobsters specifically here, and Lobsters is obviously fine with “leaking” the username, so User.find_by_username('soatok') would work just fine. Expire the key if the comparison doesn’t match what’s on the user instance, and you don’t have to mess around with chunking SHA-256 hashes and find constant-time comparison functions, no?

                                                                                                                                                                                            1. 7

git-checkout(1) is truly a misleading command due to its various possible meanings (same with C's static keyword). I hope these commands can make the situation better, but I think it might be a lost race, as git-checkout is already firmly established in many users' workflows.
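For anyone who hasn't tried them yet, and assuming the commands in question are git switch and git restore, the overloaded checkout roughly splits like this:

```sh
# branch operations (previously: git checkout <branch> / git checkout -b <branch>)
git switch main
git switch -c my-feature

# file/worktree operations (previously: git checkout -- <path> / git checkout <rev> -- <path>)
git restore src/main.c
git restore --source=HEAD~1 src/main.c
```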

                                                                                                                                                                                              1. 5

I hope these commands can make the situation better, but I think it might be a lost race, as git-checkout is already firmly established in many users' workflows.

                                                                                                                                                                                                I think framing this as a “race” is kind of a false dichotomy; these commands can improve the ergonomics for people who find checkout confusing. I won’t be switching, but that’s because checkout’s behaviour has always felt intuitive to me, which is why it’s firmly established in my workflow.

                                                                                                                                                                                                The new commands broaden the friendliness of git, they’re not “losing” just because I personally have no need to take them up. It’s not one or the other.

                                                                                                                                                                                              1. 2

Man, how often I've wished, when doing database query generation, “please $DB, just let me hand you queries in your own internal format instead of making me write SQL”.

So I agree with the author's criticism, but as mentioned at the end of the article … what to do with all the knowledge we have now?

It seems that many previous alternatives were not successful:

                                                                                                                                                                                                • ORM and “NoSQL” – junior developer ideas that turned out to be worse than using SQL
                                                                                                                                                                                                • GraphQL – lacks joins, so is hardly a credible replacement
                                                                                                                                                                                                • Other promising approaches seem to end up getting commercialized, sold and closed down.

                                                                                                                                                                                                So what can “we” do, to improve the state of the art?

                                                                                                                                                                                                In my opinion: demonstrating and specifying a practical, well-designed¹ language that various databases could implement on their own as an alternative to SQL.


                                                                                                                                                                                                ¹ Not going into that here.

                                                                                                                                                                                                1. 4

A Datalog variant. I had a lot of fun playing with differential-datalog, and there's Logica, which compiles Datalog to SQL for a variety of SQL dialects.

                                                                                                                                                                                                  1. 3

                                                                                                                                                                                                    What do you mean by ORMs and NoSQL being “junior developer ideas”?

                                                                                                                                                                                                    1. 8

Relational data maps pretty well to most business domains. NoSQL and ORMs throw out the baby with the bathwater for different reasons (turfing the entire model with NoSQL, trying to force two different views of modelling the domain to kiss with ORMs). Anything that makes a join hard isn't a good idea when an RDBMS is involved.

I think what might be interesting is doing the reverse: instead of contorting the relational model to work with OO languages like ORMs do, build a relational programming language. I don't know what that could look like, though.

                                                                                                                                                                                                      1. 4

Relational data maps pretty well to most business domains. NoSQL and ORMs throw out the baby with the bathwater for different reasons (turfing the entire model with NoSQL, trying to force two different views of modelling the domain to kiss with ORMs). Anything that makes a join hard isn't a good idea when an RDBMS is involved.

                                                                                                                                                                                                        Agreed with the conclusion and I have nothing good to say about most NoSQL systems other than that rescuing companies from them is a lucrative career, but I think this criticism of ORMs is over-broad.

A good ORM takes the scut-work out of database queries in a clean way that's standardized across codebases, without getting in your way when you need deep database features, arbitrary joins, etc. I'd hold up modern Rails ActiveRecord (without getting into the weeds on Arel) as a good ORM: it automates the pointless work while staying out of your way when you want to do something more complicated.

A bad ORM will definitely try to “hide” the database from you in ways that make everything way too complicated the second you want to do something as simple as specifying a particular type of join. Django's shockingly awful “QuerySet” ORM definitely falls in this camp, as I've recently had the misfortune of trying to make it do fairly simple things.

                                                                                                                                                                                                        1. 3

                                                                                                                                                                                                          I’m very surprised to see ActiveRecord used as an example if something which stays out if your way. The amount of time I have spent fighting to get it to generate the SQL I wanted is why I never use it unless I’m being paid a lot to do so.

                                                                                                                                                                                                          1. 1

                                                                                                                                                                                                            Really? It’s extremely easy to drop to raw SQL, and to intermix that with generated statements – and I’ve done a lot of really custom heavy lifting with it over the years. Admittedly this may not be well documented and I may just be taking advantage of a lot of deep knowledge of the framework, here.

The contrast with something like Django is pretty stark to me. Django's devs steadfastly refuse to let you specify joins, and while it offers a raw SQL escape hatch, raw queries come back as a different intermediate type (RawQuerySet vs QuerySet) with different methods. That means details of how you formed a query (raw vs the ORM API) leak into all consuming layers, and you can't swap one for the other at the data layer without breaking everything upstream. (Hilariously, the accepted community “solution” to this seems to be to write your raw query, then wrap an ORM API call around it that generates a “select * from (raw query)”??)

ActiveRecord has none of these issues in my experience – joins can be manually specified, raw clauses inserted, and raw SQL intermixed transparently with ORM statements with no impedance mismatch. Even aggregation/deaggregation approaches like unions, unnest(), etc. that break the table-to-class and column-to-property assumptions can still be made to work cleanly. It's really night and day.
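For instance, a sketch of the kind of intermixing I mean (Post, Comment, and the columns are invented for illustration):

```ruby
# Generated clauses mixed with a hand-written join and raw SQL fragments.
posts = Post.
  joins("LEFT OUTER JOIN comments ON comments.post_id = posts.id").
  where(published: true).
  group("posts.id").
  select("posts.*, COUNT(comments.id) AS comment_count").
  order("comment_count DESC")

# Or drop to raw SQL entirely and still get Post instances back.
recent = Post.find_by_sql(
  ["SELECT * FROM posts WHERE created_at > ? ORDER BY created_at DESC", 1.week.ago]
)
```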

                                                                                                                                                                                                      2. 6

Not the commenter you're asking, but they're both tools that reduce the initial amount of learning at the cost of abandoning features that make complexity and maintainability easier to handle.

                                                                                                                                                                                                        1. 5

                                                                                                                                                                                                          I’m not sure that that’s true, though. ORMs make a lot of domain logic easier to maintain—it’s not about reducing initial learning, it’s about shifting where you deal with complexity (is it complexity in your domain or in scaling or ???). Similar with NoSQL—it’s not a monolithic thing at all and most of those NoSQL databases require similar upfront learnings (document DBs, graph DBs, etc. all require significant upfront learning to utilize well). Again, it’s a trade off of what supports your use case well.

                                                                                                                                                                                                          I’m just not sure what the GP meant by “junior developer ideas” (it feels disparaging of these, and those who use them, but I won’t jump to conclusions). They also are by no stretch “worse than using SQL”. They are sometimes worse and sometimes better. Tradeoffs.

                                                                                                                                                                                                          1. 2

I agree with you on the tradeoffs. I'm not sure I agree on the domain logic thing. In my experience ORMs make things easier until they don't, in part because you've baked your database schema into your code. Sometimes generating queries directly allows the schema to change without the program needing to change its data model immediately.