Threads for marcellerusu

  1. 11

    I just want to call out, I think the author isn’t talking about dynamic/static typing from the perspective of developing a regular application, but for program analysis.

    I think we’re going through a wave of static analysis, but dynamic analysis like the author is talking about will surely come back as we start to hit the limits of static analysis.

    For me, it’s not as important whether the best tooling exists out there, but whether I can write tooling that’s good enough for me. I spent an afternoon a few weeks ago writing a Svelte store inspector using Proxy. Honestly, I’m sure an extension exists that does it better, but I could write something that lets me run stores.user = newUser or stores.user from the web console to update or see the existing value at runtime.
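    Roughly, it was something like this sketch (assuming Svelte-style writable stores whose subscribe calls the callback immediately; all names here are illustrative):

    ```typescript
    // Minimal console store inspector, assuming stores that expose
    // subscribe()/set() like Svelte's writable stores.
    type Store<T> = {
      subscribe: (fn: (value: T) => void) => () => void;
      set: (value: T) => void;
    };

    function makeInspector(stores: Record<string, Store<any>>) {
      const latest: Record<string, any> = {};
      // Keep a synchronous snapshot of each store's current value.
      for (const [name, store] of Object.entries(stores)) {
        store.subscribe((v) => (latest[name] = v));
      }
      // From the console: `inspector.user` reads, `inspector.user = x` writes.
      return new Proxy(latest, {
        get: (_target, name) => latest[String(name)],
        set: (_target, name, value) => {
          stores[String(name)].set(value);
          return true;
        },
      });
    }
    ```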

    Also, something I’ve realized recently in the whole static/dynamic debate: I think we’re usually talking about the wrong thing (just whether you have a type checker or not). TypeScript is still a dynamic language in the way that I care about, because the runtime is dynamic.

    1. 4

      dynamic analysis like the author is talking about will surely come back as we start to hit the limits of static analysis

      This sounds odd to me, as I would expect static analysis to have a higher upper bound than dynamic analysis. What are the limits of static analysis, and what approaches could we take to push dynamic analysis past them? It seems unbelievable that tools for Python could catch as many issues as fbinfer for example. And as we increase the abstraction power of our static types, such as with (liquid) refinement types, dependent types, effect types, etc., it looks like our ability to analyze statically typed languages is only increasing.

      TypeScript is still a dynamic language in the way that I care because the runtime is dynamic.

      Why do you care that the runtime is dynamic? What benefits do you gain from having your data boxed this way? If you want to extend types with new fields, or allow different kinds of structures that all share the same interface without requiring specification, we already have this in statically typed land with row polymorphism & structural typing / duck typing.
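      For concreteness, here’s a sketch of what structural typing and a row-polymorphism-flavored update look like in TypeScript (all names invented for illustration):

      ```typescript
      // Structural typing: any value with the right shape satisfies the
      // interface, without declaring that it implements anything.
      interface Named {
        name: string;
      }

      function greet(x: Named): string {
        return `hello, ${x.name}`;
      }

      // Extra fields are fine; `user` never declared itself a Named.
      const user = { name: "ada", role: "admin" };

      // Row-polymorphism-flavored update: "has `name`, plus an unknown
      // rest", and the rest survives the update.
      function rename<T extends Named>(x: T, name: string): T {
        return Object.assign({}, x, { name });
      }
      ```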

      1. 4

        I care that the runtime is dynamic so I can programmatically interact with/inspect the program at runtime, like I was describing with my svelte stores example.

        1. 3

          I’ve gone the dynamic route pretty far for this – having implemented a few runtimes that use that approach – and in my latest stint I ultimately found static types to actually be helpful for the inspection scenario since it gives you places to attach metadata about the shape of your data. For the run-time aspect I found that serializing the app state and then restarting it was way more resilient to code changes than running code in a current VM state (often the VM state becomes something that’s hard to reproduce). This definitely makes more sense if serialization is a core part of your app (which I’ve often found to be the case in my applications – games or tools).
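          A toy sketch of that serialize-and-restart loop (the AppState shape here is invented):

          ```typescript
          // Dump the app state, reload the (possibly changed) code,
          // then rehydrate from the snapshot.
          type AppState = { doc: string; undoStack: string[] };

          function save(state: AppState): string {
            return JSON.stringify(state);
          }

          function restore(serialized: string): AppState {
            // A new code version can fill in fields the old one didn't
            // have, which is what makes this more resilient than
            // patching a live VM's state.
            const raw = JSON.parse(serialized);
            return { doc: raw.doc ?? "", undoStack: raw.undoStack ?? [] };
          }
          ```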

          Bit of an old video but here’s a workflow demonstrating some of this: – directly links to a timestamp where I modify a data structure / metadata on some fields and you can see the results in the UI (while the state of the application remains, including undo / redo history). Earlier in the video at 0:47 you can see the app actually have a hard error due to array out of bounds, but after I fix that the app actually continues from the same point.

          But yeah I get all of this while also getting the static compile benefits of performance etc. in this case (it compiles to C++ and then to wasm. I enable more clang / llvm optimizations when I do a release build (takes longer)).

    1. 17

      All in all I’m being much more careful when researching dependencies and libraries before bringing them in.

      I’ve recently started reading the packages I bring in more, and it’s shocking how low the bar is sometimes, or how insanely abstract & non-idiomatic many of these libraries are, or, worst of all, a small error-prone wrapper of another library. Often I find I don’t even need to pull it in: there are 5-20 important lines of code I really care about & I can copy them in. At least that way I can understand & modify them.

      This wasn’t the example in the article, but in my experience NPM seems to follow a variant of the Unix philosophy that keeps the “do one thing” but misses the “do it well”. I found another package we used that wrapped puppeteer just to apply some simple preset options. We hit an issue with it & couldn’t manually upgrade puppeteer or change what was passed to it, because this in-between library had decided it knew what we wanted.

      One funny thing I’ve noticed: I get a lot of push back for copying in some (unimportant & infrequently modified) code that is slightly obtuse, but rarely has anyone ever batted an eye when I add a dependency.

      1. 4

        I had a similar experience, and it made me write a blog post: “never use a dependency that you could replace with an afternoon of programming”.

        1. 4

          I would counter that the problem is being wise enough to discern which dependencies really can be reimplemented in an afternoon.

          I would also point out that the “cycle” is not from some hypothetical low-dependency utopia to tons of third-party libraries and back, it’s from NIH syndrome to tons of third-party dependencies and back.

          1. 3

            I would counter that the problem is being wise enough to discern which dependencies really can be reimplemented in an afternoon.

            Thankfully, the cost of answering that question is at most one afternoon.

            1. 3

              From the comments of my post:

              A lot of people are objecting, “What if you estimate wrong, and it takes more than an afternoon?” This objection is bad.

              It is not possible to add a new dependency in less than an afternoon, because you need to evaluate alternatives, test it, learn how it works, make sure it’s not accidentally GPL, etc. So there are not two methods, the less-than-an-afternoon method and the more-than-an-afternoon method; there are two methods that both take at least one afternoon. If you estimate wrong and you can’t write the code in an afternoon… then stop working on your handwritten version and find a dependency. But now you know what the dependency is really supposed to do and why it’s non-trivial, so you’re in an even better position to evaluate which ones are good.

              1. 2

                Thankfully, the cost of answering that question is at most one afternoon.

                Or the cost of it is implementing a solution in an afternoon, and then finding oneself the subject of a “Falsehoods programmers believe about…” article because the problem domain turned out to be more complex than an afternoon’s exploration revealed.

                There are no objectively-correct universal stances. Implicitly distrusting/avoiding dependencies is just as wrong as implicitly trusting/embracing them.

                1. 1

                  Strongly disagree here. It is very easy for reimplementing something in an afternoon to appear to be successful, only to cause difficult-to-debug problems further down the line.

                  Knowing when that is likely to be the case is a function of experience and expertise.

                  1. 3

                    I could say the same to you: it is very easy to take on a dependency that appears to solve your problem, only to cause difficult-to-debug problems further down the line.

                    Easier, in fact, than implementing something in an afternoon and failing to foresee the potential problems: because unlike my own code, I don’t know what’s behind the dependency. So I have to rely on recommendations, reputation, outside appearances. And if it’s not famous I’ll likely need to look at the code itself, and that takes quite a bit of time too.

                    And let’s be honest: if someone implements something in an afternoon and fails to see the technical debt they’ve just created, they’re not very good. I have little sympathy for tactical tornadoes.

                    1. 1

                      I could say the same to you: it is very easy to take on a dependency that appears to solve your problem, only to cause difficult-to-debug problems further down the line.

                      Sure. I don’t disagree with this at all. I’m not arguing for more dependencies. I don’t have any particular dog in the pro-dependency vs anti-dependency fight.

                      I was only replying to one very specific argument you made which sounds plausible on the surface but is clearly false when given a cursory examination.

          1. 2

            DSLs mean very different things to different people, and we should talk about them that way.

            • It could be a whole new textual language with its own runtime
            • It could be a whole new textual language that compiles into the base language
            • It could be a macro
            • It could be JS-style proxies, or similar Ruby libraries
            • It could even just be a library (like xstate)

            All of these are “languages”, in the sense that they have different semantics & sometimes different syntax than the language they’re used alongside.

            I agree with what hwanye said: tooling is what actually makes a DSL good or not. If you have a whole new textual language & there’s no LSP, or it doesn’t generate good type information for a language like TypeScript, it’s not going to be a pleasant experience.

            If the DSL requires doing lots of work to convert data between the base language & the DSL, it’s often gonna feel like a pain to work with - using SQL with a library that only returns tuples for rows feels like this.

            If the DSL’s semantics are complex because it relies on obscure features of the base language, it’s likely to have poor types or horrible stack traces, and it might be difficult to debug.

            These are not small issues, but dear god, I would kill for great DSLs in certain places, like state machine definitions, and I don’t want to go back from using a DSL like Svelte for UIs.

            1. 21

              Oh, is it time to hype DSLs again? That makes sense, as we’re all starting to get a little embarrassed about the levels of hype for functional programming.

              I guess next we’ll be hyping up memory safe object oriented programming.

              1. 16

                I’m just sitting here with my Java books waiting for the pendulum to swing back…

                1. 9

                  I’m going to go long on Eiffel books.

                  1. 6

                    I think a language heavily inspired by Eiffel, while fixing all of its (many, many) dumb mistakes, could go really far.

                    1. 2

                      I’ve just started learning Eiffel and I like what I’ve seen so far. Just curious: what do you consider its mistakes?

                      1. 8
                        1. CAT-calling
                        2. Bertrand Meyer’s absolute refusal to use any standard terminology for anything in Eiffel. He calls nulls “voids”, lambdas “agents”, modules “clusters”, etc.
                        3. Also his refusal to adopt any PL innovations past 1995, like all the contortions you have to do to get “void safety” (null safety) instead of just adding some dang sum types.
                  2. 14

                    I, personally, very much doubt full on OOP will ever come back in the same way it did in the 90s and early 2000s. FP is overhyped by some, but “newer” languages I’ve seen incorporate ideas from FP and explicitly exclude core ideas of OOP (Go, Zig, Rust, etc.).

                    1. 5

                      I mean, all of those languages have a way to do dynamic dispatch (interfaces in Go, trait objects in Rust, vtables in Zig as of 0.10).

                      1. 13

                        And? They also all support first-class functions from FP but nobody calls them FP languages. Inheritance is the biggest thing missing, and for good reason.

                        1. 12

                          This, basically. Single dynamic dispatch is one of the few things from Java-style OO worth keeping. Looking at other classic-OO concepts: inheritance is better off missing most of the time (some will disagree), classes as encapsulation are worse than structs and modules, methods don’t need to be attached to classes or defined all in one batch, everything is not an object inheriting from a root object… did I miss anything?

                          Subtyping separate from inheritance is a useful concept, but from what I’ve seen the world seldom breaks down into such neat categories to make subtyping simple enough to use – unsigned integers are the easiest example. Plus, as far as I can tell it makes most current type system math explode. So, needs more theoretical work before it wiggles back into the mainstream.

                          1. 8

                            I’ve been thinking a lot about when inheritance is actually a good idea, and I think it comes down to two conditions:

                            1. The codebase will instantiate both Parent and Child objects
                            2. Anything that accepts a Parent will have indistinguishable behavior when passed a Child object (LSP).

                            I.e., a good use of inheritance is to subclass EventReader with ProfiledEventReader.
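                            A sketch of that example (only the class names come from above; the reader internals are invented):

                            ```typescript
                            // The child is behaviorally indistinguishable from the
                            // parent (LSP); it only records timing on the side.
                            class EventReader {
                              read(): string {
                                return "event";
                              }
                            }

                            class ProfiledEventReader extends EventReader {
                              readonly timings: number[] = [];
                              read(): string {
                                const start = Date.now();
                                const result = super.read();
                                this.timings.push(Date.now() - start);
                                return result; // same observable behavior as the parent
                              }
                            }

                            // Code accepting an EventReader works identically with either.
                            function consume(r: EventReader): string {
                              return r.read();
                            }
                            ```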

                            1. 10

                              Take a cookie from a jar for using both LSP and LSP in a single discussion!

                              1. 4

                                Inheritance can be very useful when it’s decoupled from method dispatch.

                                Emacs mode definitions are a great example. Nary a class nor a method in sight, but the fact that markdown-mode inherits from text-mode etc is fantastically useful!

                                On the other hand, I think it’s fair to say that this is so different from OOP’s definition of inheritance that using the same word for it is just asking for confusion. (I disagree but it’s a reasonable argument.)

                                1. 2

                                  Inheritance works wonderfully in object systems with multiple dispatch, although I’m not qualified to pinpoint what it is that makes them click together.

                                  1. 1

                                    I’ve lately come across a case where inheritance is a Good Idea; if you’re plotting another of your fabulous blog posts on this, I’m happy to chat :)

                                    1. 1

                                      My impression is that inheritance is extremely useful for a peculiar kind of composition, namely open recursion. For example, you write some sort of visitor-like pattern in a virtual class, then inherit it, implement the visit method or what have you, and use this to recurse between the abstract behavior of traversing some structure and your use-case-specific code. Without open recursion you basically have to reimplement a vtable by hand, and it sucks.

                                      Well, that’s my only use of inheritance in OCaml. Most of the code is just functions, sum types, records, and modules.
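                                      A sketch of the open-recursion pattern (in TypeScript rather than OCaml, with invented names):

                                      ```typescript
                                      // The base class owns the traversal and calls `this.visit`,
                                      // which a subclass overrides; the base's recursion then
                                      // dispatches back into the subclass (the "open" part).
                                      type Tree = { value: number; children: Tree[] };

                                      class Walker {
                                        walk(t: Tree): void {
                                          this.visit(t); // dynamic dispatch: may hit a subclass
                                          for (const c of t.children) this.walk(c);
                                        }
                                        visit(_t: Tree): void {} // default: do nothing
                                      }

                                      class Summer extends Walker {
                                        total = 0;
                                        visit(t: Tree): void {
                                          this.total += t.value; // use-case-specific code only
                                        }
                                      }
                                      ```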

                                      1. 1

                                        Forest for the trees? When you want to create a framework that has default behaviour that can be changed, extended, or overridden?

                                      2. 4
                                        • obj.method syntax for calling functions — a decent idea worth keeping.
                                        • bundling behavior, mutable state, and identity into one package — not worth doing unless you are literally Erlang.
                                        1. 3

                                          IMO there is a fundamental difference between Erlang OO and Java OO to the point that bringing them up in the same conversation is rarely useful. Erlang actively discourages you from having pellets of mutable state scattered around your program: sure, threads are cheap, but that state clump is still a full-blown thread you need to care for. It needs rules on supervision, it needs an API of some kind to communicate, etc, etc. Erlang is at its best when you only use threads at a concurrency boundary, and otherwise treat it as purely functional. Java, in contrast, encourages you to make all sorts of objects with mutable state all over the place in your program. I’d wager that MOST non-trivial methods in Java contain the “new” keyword. This results in a program with “marbled” state, which is difficult to reason about, debug, or apply any kind of static analysis to.

                                        2. 2

                                          In all honesty, you sound quite apologetic toward what could arguably be considered objectively bad design.

                                          Attaching methods to types essentially boils down to scattering data (state) all over the code and writing non-pure functions. I honestly cannot understand how anyone would think this is a good idea, other than being influenced by trends, cults, or group thinking.

                                          Almost the same could be said about inheritance. Why would fitting a data model into a single universal tree be a good idea? Supposedly to implicitly import functionality from parent classes without repeating yourself. Quite a silly way to save a line of code, especially considering that the languages that do it are rather verbose.

                                          1. 5

                                            I honestly cannot understand how anyone would think this is a good idea, other than being influenced by trends, cults, or group thinking.

                                            Here’s a pro tip that has served me well over many years. Whenever I see millions of otherwise reasonable people doing a thing that is obviously a terribly stupid idea, it is always a lack of understanding on my part about what’s going on. Either I am blind to all of the pros of what they are doing and only see the cons, or what they’re doing is bad at one level but good at a different level in a way that outbalances it, or they are operating under constraints that I don’t see or pretend can be ignored, or something else along those lines.

                                            Billions of lines of successful shipped software have been written in object-oriented languages. Literally trillions of dollars of economic value have been generated by this software. Millions of software developers have spent decades of their careers doing this. The thought that they are all under some sort of collective masochistic delusion simply does not pass Hanlon’s Razor.

                                            1. 1

                                              To be honest, the more I study OOP (or rather, the hodgepodge of features and mechanisms that are claimed by various groups to be OOP), the less room I see for a genuine advantage.

                                              Except one: instantiation.

                                              Say you have a piece of state, composed of a number of things (say a couple integers, a boolean and a string), that represent some coherent whole (say the state of a lexer). The one weird trick is that instead of letting those be global variables, you put them in a struct. And now you can have several lexers running at the same time, isn’t that amazing?
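                                              A sketch of that one weird trick (the lexer details are invented):

                                              ```typescript
                                              // Bundle the lexer's state into a value instead of
                                              // globals, and instantiation falls out for free.
                                              type Lexer = { input: string; pos: number };

                                              function makeLexer(input: string): Lexer {
                                                return { input, pos: 0 };
                                              }

                                              function nextChar(lx: Lexer): string | null {
                                                return lx.pos < lx.input.length
                                                  ? lx.input.charAt(lx.pos++)
                                                  : null;
                                              }

                                              // Two lexers at once — impossible with errno-style
                                              // global state.
                                              const a = makeLexer("ab");
                                              const b = makeLexer("xy");
                                              ```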

                                              Don’t laugh: before OOP was popular, very prominent people thought it was a good idea to have global state in Lex, Yacc, or error handling (errno). So here’s my current guess: the success we attribute to OOP doesn’t really come from any of its overly hyped features. It comes from a couple of very mundane, yet very good, programming practices it adopted along the way. People attributed to the hyped stuff (such as inheritance) a success that was earned mostly by avoiding global variables.

                                              Abstract data types are amazing, and used everywhere for decades, including good old C. The rest of OOP though? Contextual at best.

                                              1. 1

                                                It has been the opposite for me.

                                                • typecast everything to and from Object in early versions of Java
                                                • EJBs 2
                                                • Bower package manager. Its creator wrote on Stack Overflow that he was confused when he created the project and that it was essentially useless.
                                                • RubyGems security incident
                                                • Left-pad fiasco
                                                • Apache web server htaccess configs

                                                I could go on with more esoteric examples to an ever growing list.

                                                All of these had critics screaming long before they happened: why?

                                              2. 3

                                                Many decisions are only clearly good or bad in retrospect.

                                            2. 6

                                              Inheritance is the biggest thing missing, and for good reason.

                                              That reason being “inheritance was the very first mechanism for subtyping, ADTs, and code-reuse, and people using it got ideas for better mechanisms from it.” ;)

                                              2. 3

                                                The first versions of Simula and Smalltalk didn’t have inheritance, and Self and other prototypal object-oriented languages don’t use traditional inheritance either. We still call all of them object-oriented.

                                                Honestly, it’s well beyond time that we retire all programming language paradigm terms. Modern languages simply aren’t organized into paradigms the way older, simpler languages were.

                                                It’s like we’re looking at a Honda Accord and arguing over whether it’s a penny farthing or a carriage. The taxonomy no longer makes sense.

                                            3. 1

                                              Ah yes and that’s why it’s ripe to have a come back. :)

                                              Seriously though I expect that the next incarnation will be “oop without inheritance” or something. Probably combined with some large corporation “inventing” gc-less memory management.

                                              1. 2

                                                The good parts of OOP never really left. We already have that exact language: Rust. It has formal interfaces (Traits), encapsulation, polymorphism, and gc-less memory management.

                                                1. 10

                                                  The main thing about OOP that needs to die is the idea that OOP is a coherent concept worth discussing on its own. Talk about the individual concepts as independent things! It’s much more productive.

                                                  1. 1

                                                    Talk about the individual concepts as independent things!

                                                    IMO OOP these days really means inheritance and an object lifecycle. All the other concepts aren’t really unique to OOP.

                                                    1. 3

                                                      I think “OOP” generally means “features of object-oriented languages that I don’t like” to a lot of people. The people using those languages don’t generally get into paradigm arguments.

                                                      (Personally, I consider inheritance to be common in OOP languages but not a particularly interesting or salient part of them. Many early OOP languages didn’t have inheritance and prototypal ones have an entirely different code reuse model.)

                                                      1. 1

                                                        For some people “OOP” means “features of languages I do like”. For instance, I’ve seen people include templates/generics/parametric polymorphism and unnamed functions as core parts of OOP… having learned CamlLight (OCaml without the “O”) in college, I confess I was quite astonished.

                                                      2. 2

                                                        You say that but it means different things to different people. I don’t disagree that your definition would be a good one if you could get people to agree on it, but I can’t assume that when other people say “OOP” that’s what they’re talking about.

                                                2. 1

                                                  I think it will come back, rediscovered as something new by a new generation disillusioned with whatever has been the cool solves-everything paradigm of the previous half decade. Perhaps this time as originally envisaged with a “Scandinavian school” modeling approach.

                                                  Of course it never left as the first choice for one genre of software… the creation of frameworks featuring default behavior that can be overridden, extended or changed.

                                                  Those languages you mention (Go, Zig, Rust) are primarily languages solving problems in the computer and data sciences, computing infrastructure and technical capability spaces. Something is going to be needed to replace or update all those complex aging ignored line-of-business systems.

                                                3. 11

                                                  There isn’t really any need to “hype” DSLs because they’re already widely used in all domains of programming:

                                                  • front end: HTML / CSS / JavaScript, and most JS web frameworks introduce a new DSL (multiple JSX-like languages, Svelte, etc.)
                                                  • back end: a bajillion SQL variants, a bazillion query languages like Redis
                                                  • builds: generating Ninja, generating Make (CMake, Meson, etc.)
                                                    • there are at least 10 CI platforms with their own YAML DSLs, with vars, interpolation, control flow, etc.
                                                  • In games: little scripting languages for every popular game
                                                  • Graphics: scene description languages, shader languages
                                                  • Compilers: LLVM has its own TableGen language, languages for describing compiler optimizations and architecture (in the implementation of Go, a famously “not DSL” language), languages for describing VMs (Ruby)
                                                  • Machine Learning: PyTorch, TensorFlow, etc. (these are their own languages, on top of Python)
                                                  • Distributed computing: at least 10 MapReduce-derived frameworks/languages; there are internal DSLs in Scala for example, as well as external ones
                                                  • Mathematics and CS: Coq, Lem, etc.

                                                  All of these categories can be fractally expanded, e.g. I didn’t mention the dozens of languages here: – many of which are commonly used and featured on this site

                                                  If you think you don’t use DSLs, then you’re probably just working on a small part of a system, and ignoring the parts you’re not working on.

                                                  ALL real systems use tons of DSLs. I think the real issue is to mitigate the downsides.

                                                  1. 1

                                                    Oh yes, but at the same time, if you haven’t seen the hype for DSLs then you haven’t spent long enough in the industry to have gone through that part of the hype cycle. DSLs are what they are, and it looks like we might be entering a hype cycle where people want to make them out to be much more.

                                                    1. 3

                                                      I don’t agree, I’ve been in the industry for 20+ years, there are plenty of things more hyped than DSLs (cloud, machine learning, etc.)

                                                      DSLs are accepted standard practice, and widely used, but often poorly understood.

                                                      I’m not getting much light from your comments on the subject – you’ve made two claims of hype with no examples.

                                                      1. 2

                                                        Here’s an example of recent hype

                                                        Here’s some hype from the year 2000

                                                        Arguably the hype for 4GLs was the prior iteration of that specific hype.

                                                        I’m not arguing that DSLs are bad - I’m saying that they’re one of the things on the roster of perfectly good things that periodically get trumpeted as the next big thing that will revolutionize computing. These hype cycles are characterized by attempts to make lots of DSLs when there isn’t a strong need for it or any real payoff to making a language rather than a library.

                                                  2. 4

                                                    I know it might sound a bit controversial, but the way I see it we need to reach a new level of abstraction in order for large-scale software development to be sustainable. Some people might say AI is the way forward, or some other new programming technique. Either way I don’t think we’ll get there by incrementally improving on the paradigms we have—in order to reach the next level we’ll have to drop some baggage on the way up.

                                                    1. 4

                                                      I mean, humans aren’t getting better at grokking abstraction, so I don’t know that “new levels of abstraction” are the way forward. Personally, I suspect it means more rigor in the software development process – if you’re building a tall tower, maybe the base shouldn’t be built with a “move fast and break things” mentality.

                                                      1. 3

                                                        Grokking abstractions isn’t the problem; at the end of the day, abstractions are just making decisions for the users of an abstraction. Over-abstraction is the root of many maintainability woes, IMO. The more a programmer knows about what’s actually going on underneath the better, but only to the degree that it’s relevant.

                                                      2. 3

                                                        I’ve heard it before. DSLs have their place, and some people love them while others hate them. This is one of a rotating cast of concepts that you’ll eventually see rehyped in 10 years.

                                                    1. 2

I’m using xstate to model the foundation of our “editor” system (video annotation tools). It has really great tooling: in the VS Code extension you can click “inspect” on the line above the state machine & it displays an interactive chart in the next pane.

Overall it’s great & we can model fairly complex logic, like streaming very large amounts of data, predictably. But the syntax is very hard to read for any non-trivial state machine. I tend to have an easier time than most with syntax, but even I have to do double takes many times. It feels like there’s lots of room for proper DSLs here to get intentions across better.

When I showed my team xstate, the response was like “ok, so we should use this for everything, right?” & I quickly replied, “noo”. For most things there are much simpler + cleaner tools to use, like one of my personal overlooked favourites - URLs.

URLs have a lot of the desirable traits of state machines: you can isolate functionality to a specific URL very easily & nothing else, and links are “state transitions”. You can store data in URLs & have it globally accessible. You can even do “time-travel debugging” by clicking the back button :). URLs have pretty tight limits on how much data they can store, but where they fit they are pretty amazing. A great common example is storing data-table state.
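The data-table example can be sketched in a few lines of plain JS with the standard URL/URLSearchParams APIs (the function names and query params here are made up):

```javascript
// Encode data-table state into the URL's query string, so the URL itself
// is the single source of truth for the table's current "state".
function readTableState(url) {
  const params = new URL(url).searchParams;
  return {
    page: Number(params.get("page") ?? 1),
    sort: params.get("sort") ?? "name",
  };
}

function writeTableState(url, state) {
  const next = new URL(url);
  next.searchParams.set("page", String(state.page));
  next.searchParams.set("sort", state.sort);
  return next.toString();
}

const url = writeTableState("https://example.com/users", { page: 3, sort: "email" });
console.log(url); // https://example.com/users?page=3&sort=email
console.log(readTableState(url)); // { page: 3, sort: 'email' }
```

In a browser you’d pair this with `history.pushState` so each change becomes a back-button-friendly “transition”.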

                                                      1. 3

A few more things: HTML has better elements now!

We have proper modals with ‘dialog’, a proper date picker with ‘input type=date’, and expandable components with ‘details’.

I’m sure there’s a lot more too, but reimplementing this kind of stuff (especially date pickers) is why pages are unnecessarily big & hard to debug with web inspectors.
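A minimal sketch of all three (the ids and copy here are just illustrative):

```html
<!-- Proper modal: no library needed;
     open it with document.getElementById('confirm').showModal() -->
<dialog id="confirm">
  <p>Delete this annotation?</p>
  <button onclick="this.closest('dialog').close()">Cancel</button>
</dialog>

<!-- Native date picker -->
<input type="date" name="published" />

<!-- Expandable/collapsible section -->
<details>
  <summary>Advanced options</summary>
  <p>Only shown when expanded.</p>
</details>
```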

                                                        1. 2

                                                          Yeah, the new dialog elements are remarkable. Also now you can make a carousel pretty much trivially by just using scroll snap CSS.

                                                          1. 1

I guess I don’t know markdown, my formatting got all messed up.

The elements I was referring to were ‘dialog’, ‘input type=date’ and ‘details’.

                                                          1. 2

Vendor-based (might be the wrong word) package managers.

I want to do something like npm vendor component-library, which would add component-library to ./vendor & I could go in there and modify only ./vendor/Input.svelte, & somehow git would only track my changes to Input.svelte.

So many times I want to make 1 small edit to a library & I end up doing some major hacks because I’m too lazy to create & manage a forked repo.

There are so many packages that acquire so much bloat from trying to let each user have their own config, vs. easily modifiable packages with tools to manage the diffs.

I believe this would also make library code easier to read if it’s created with the intention of being modified, & application devs would understand the frameworks better.

                                                            1. 2
                                                              1. 2

                                                                This is cool! I might use this for my component library at work because I’m already making so many global stylistic changes to suit our project.

The description makes it seem like it’s only for bug fixes / inconveniences, but I’d want a whole new way of thinking about packages. Technically I think it’s not far off, but my dream involves packages designed to be modified.

                                                                Imagine how much simpler APIs would be if you were expected to go into /vendor/package-name/variables.ts to set the global options.

                                                                Imagine how much more readable library code would be expected to be if it was expected to be modified by the average user.

                                                                I imagine working with packages that are tailor suited towards 1 thing & my needs at work slightly deviate so I change it to fit my need.

In the world we live in, packages are built to suit every use case under the sun, because the only way of providing value to most users is a flexible API, rather than a solid base that is easily modifiable.

A clean API & a deep understanding of your system, at the cost of having to deal with manual upgrades for the components you modify. I’m not sure the payoff is always worth it, but sometimes it definitely is.

                                                            1. 6

                                                              However, appeals to how e.g. lisps provide some kind of unique advantage in interactive development are wildly overstated (and I say this as a former advocate of such practices).

                                                              Anyone who has used a Forth can confirm that the “person interacting with running program” model is by no means unique to lisps. There are also several other non-lisp languages that have no technical barrier to accomplishing this mode of operation, but typically do not do so solely for cultural reasons; Ruby and Lua in particular come to mind. Then you have systems like Erlang that go far beyond what any lisp can do in terms of rolling out changes to production systems live without a restart, and literally keeping both the old and new version running simultaneously in the same program until the old version is safe to abandon.

                                                              It’s also weird that the “On repl-driven programming” article describes a property of “lisps” that as far as I know, no lisp in the world other than CL supports:

                                                              Define a datatype. I mean a class, a struct, a record type–whatever user-defined type your favorite language supports. Make some instances of it. Write some functions (or methods, or procedures, or whatever) to operate on them. Now change the definition of the type. What happens? Does your language runtime notice that the definition of the type has changed?

Clojure is notoriously bad at this, with the way defrecord instances can persist attached to definitions of the record class that no longer exist; outdated instances are visually indistinguishable from up-to-date ones but behave differently!

                                                              1. 2

                                                                Anyone who has used a Forth can confirm that the “person interacting with running program” model is by no means unique to lisps. There are also several other non-lisp languages that have no technical barrier to accomplishing this mode of operation, but typically do not do so solely for cultural reasons; Ruby and Lua in particular come to mind.

                                                                I haven’t tracked down the details, but I’ve heard that Wind River managed to make “interacting with a running program” work in C on their VxWorks operating system.

                                                                1. 1

                                                                  The shell has a bundled C interpreter.

                                                                2. 1

As a JS developer with hot reload (maintaining state) & Chrome dev tools to inspect practically everything, usually even able to pry into component-level state either with an extension or by knowing a little bit about framework internals… what does ClojureScript & friends have that I’m missing?

                                                                  1. 2

                                                                    The same workflow for ANY program!

                                                                    (edit: thinking CL, not necessarily ClojureScript)

                                                                1. 2

The compile time is a real annoyance though. I wish browsers could run TypeScript while ignoring the types, similar to how Python does it. I’m using JSDoc type annotations right now, which are awkward enough that they don’t get used often enough …

                                                                  1. 3

It’s not quite as nice as native browser support, but esbuild comes with a TypeScript type stripper which “compiles” TypeScript to JavaScript extremely quickly. It won’t report type errors, but combined with an editor with LSP support and CI doing a full compile with type checks, you’re in a pretty good place.

                                                                    1. 1

                                                                      This might be a possibility in time!

                                                                    1. 9

                                                                      I think you meant to say “destructuring”. Destructing usually means tearing down a value that’s gone out of scope (as in C++)… This confused me at first.

                                                                      And it’s really not the destructuring so much as the splitting. I could rewrite your example in a form that doesn’t use destructuring assignment, where the result of split gets assigned to an array and I return a slice (0..3).

                                                                      1. 2

It’s a popular name in JS.

I think the author just makes the claim that the syntax provides a false sense of “security”, or type safety (not sure of the term), but I agree. You can do things like

                                                                        let [a, b, c] = []

with no runtime errors. I get it’s the JS way to fail silently, but it would be nice if that did fail.

                                                                        Same is true with objects.
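For instance, here is the silent failure next to a made-up helper that opts into failing loudly (`strictTake` is not a real API, just a sketch):

```javascript
// Plain destructuring fails silently: missing slots become undefined.
let [a, b, c] = [];
console.log(a, b, c); // undefined undefined undefined

// A hypothetical helper that throws instead of failing silently.
function strictTake(arr, n) {
  if (!Array.isArray(arr) || arr.length < n) {
    throw new TypeError(`expected at least ${n} elements, got ${arr?.length ?? "none"}`);
  }
  return arr.slice(0, n);
}

const [x, y] = strictTake([1, 2, 3], 2); // ok: x = 1, y = 2
// const [p, q, r] = strictTake([], 3);  // throws TypeError
```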

                                                                        1. 4

                                                                          Um … that web page supports my point. It calls it “destructuring” just like I did.

                                                                          JavaScript doesn’t really have that kind of safeness. It doesn’t even care if you pass the wrong number of parameters to a function, so I’m not surprised it doesn’t check the size of an array being destructured. I think TypeScript would produce a compile error in your one-line example, but it couldn’t do anything about the code in the blog post because that’s runtime behavior, and TS doesn’t add runtime checks.

                                                                          1. 1

                                                                            My bad, brain couldn’t read

                                                                        2. 1

                                                                          Whoops, typo. My bad. I’ll get it fixed ASAP.

                                                                        1. 13

I find readability is largely something that comes out of familiarity. S-expression based languages like the Lisps look completely alien unless that’s what you’re using daily, and then the C/Java/JS syntax is the one looking weird and clunky.

Overall, I find myself paying less and less attention to syntax as the years go by, and more to the semantics of the languages.

One exception is auto-formatting tools, which I really appreciate, as they free us from tedious manual formatting and useless debates.

                                                                          1. 2

To a degree I agree, but I believe there are objective claims we can make about the cognitive load of different syntaxes. I think syntax & semantics are also closely tied; syntax enables/disables semantics in many cases.

If your syntax only allows positional arguments, it’s very easy to argue that’s less readable than a language like jakt, which requires named arguments for (nearly) every function & constructor.

I also think the syntax of the language’s standard API is part of this convo too. Readability is not the same as “easy to reason about”; it’s related but distinct. I’d argue it’s specifically the amount of cognitive load it takes to be satisfied with your understanding of an isolated function.

That means understanding the basic language semantics, argument order, inferring data structures, keeping track of intermediate values (which gets duplicated x times with recursion). These and more are the trade-offs; we can obviously get very skilled at these things, but it doesn’t mean all languages are essentially the same.

                                                                            1. 1

                                                                              I agree, I keep wanting to post a parody comment showing a “clearly more readable version” in APL.

                                                                            1. 9

I find FP in general at a disadvantage in readability, possibly because it works in expressions, not statements.

In general, statements can at least in part be understood separately; expressions tend to make me have to understand the whole thing.

It could also just be that Haskell (and most FP) code tends not to have enough intermediate variables/functions to explain each chunk, but I don’t think that’s the only reason. I don’t really understand it, but I do find it to be true.

Maybe if the add helper function was left in it’d be easier to read the Haskell insert, but I’ve read it 5-6 times now & I still can’t penetrate it. I’m finding myself having to re-read the definition of Trie many times, & I forget the order of arguments for the Map methods so I’m trying to infer it.

The code definitely looks & feels “elegant” in a mathematical sense, but I don’t think that means anything for readability. It just means it has fewer specific components & more generic ones… which I’d argue only hurts readability.

                                                                              1. 12

                                                                                I’d put that down to familiarity.

                                                                                Expressions in pure FP are great because there’s no implicit state to think about. Even if you’re in a monadic context, maybe using the State monad even, it’s all encapsulated in the expression. The only thing to track is closure bindings.

Imperative statements can be brutal. As with FP you need to track closure bindings, but these can also change between statements! That’s a major, major source of complexity that most programmers have gotten so used to that they don’t even question it.
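A contrived JS sketch of the contrast:

```javascript
// Imperative: `total` changes between statements, so no single statement
// can be understood without replaying everything that ran before it.
let total = 0;
for (const n of [1, 2, 3, 4]) {
  if (n % 2 === 0) total += n * n;
}
console.log(total); // 20

// Expression-oriented: the whole computation is one referentially
// transparent expression; nothing outside it mutates underneath you.
const total2 = [1, 2, 3, 4]
  .filter((n) => n % 2 === 0)
  .reduce((sum, n) => sum + n * n, 0);
console.log(total2); // 20
```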

                                                                                1. 2

If there is one thing that (pure) FP does better, it’s referential transparency: the actual guarantee that an expression (which is the only way to express a program in Haskell) is readable in isolation and replaceable by its computed value.

So it’s definitely a “break down the complex expression” problem, which could be eased by using the where syntax in Haskell, a way to name sub-expressions within a larger one.

                                                                                  1. 2

Hm, typically “expression” simply means “statement that has a value”. For example, Python has conditional expressions that let you use if as an expression: x = "wtf" if 1 > 2 else "okay". In Ruby you’re able to use the regular if like that as well, because if isn’t a statement: x = if 1 > 2 then "wtf" else "okay" end. Expressions make code more composable in general, which helps a lot with code-generating code (e.g. macros), which is a big reason Lisps prefer expressions over statements.

Overusing the ability to embed expressions in other expressions makes code less readable, that’s true. But it doesn’t have to be that way. It’s like overusing operator overloading in C++. When a language is more expressive, it also allows for more convoluted code.

                                                                                    For example, using intermediate variables is a choice that you can use in FP code just as in more “imperative” code. Not using variables is just a way to show off and make code unreadable, undebuggable and unmaintainable.
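A contrived sketch of the same pipeline with and without named intermediates:

```javascript
const words = ["foo", "bar", "baz", "foobar"];

// One-liner: correct, but every reader must re-derive the steps in their head.
const n1 = words.filter((w) => w.startsWith("b")).map((w) => w.length).reduce((a, b) => a + b, 0);

// Same expression, broken down with named intermediate values.
const bWords = words.filter((w) => w.startsWith("b")); // ["bar", "baz"]
const lengths = bWords.map((w) => w.length);           // [3, 3]
const n2 = lengths.reduce((a, b) => a + b, 0);         // 6

console.log(n1, n2); // 6 6
```

Both are purely functional; only the second gives each step a name a reader (or debugger) can inspect.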

                                                                                    it has less specific components & more generic… which I’d argue only hurts readability.

                                                                                    Fully agree on that one though!

                                                                                    1. 1

                                                                                      I think a way to improve functional programming readability is to use point-free style, or other styles that allow breaking down big expressions into small semantic units that are as easy to understand as statements.

                                                                                      I found OP’s Haskell code a bit obtuse compared to Python. Generally, writing simple statically typed functional code requires a bit more effort.

                                                                                    1. 1

                                                                                      Travelling to Winnipeg to meet my team for the first time!

I’ve also started working on a new frontend framework based on generators & managed effects; it’s looking very promising.

                                                                                      1. 5

Having basically written this same article but in 2016, I just don’t think Elm scales like people think it might. I’ve also had two different companies I worked for where we ran into the limitations and needed hacks so dirty that rewrites were seen as more practical. Need synchronous native code? SOL. Do you have i18n and l10n considerations? There’s no good solution for you. Need a browser API not yet supported by the Elm team? Good luck becoming anointed into the boys’ club that gets access to solving your real-world problems.

I feel Elm is best used as a tool to learn FP, and it has a lot to teach about design/architecture (we see TEA used as an acronym all over because of the idea’s success), but it’s not the horse you should bet on when you can do TEA-style programming without all of the limitations; there are dozens of options now in many different languages, especially the purely functional ones.

                                                                                        1. 4

                                                                                          Both of the following can be true at the same time:

                                                                                          • Elm has flaws
                                                                                          • Elm scales better than TypeScript

                                                                                          I’d say Elm’s downsides compared to TypeScript are more domain specific (e.g. no native i18n support) whereas TypeScript’s are more structural (e.g. npm vs Elm’s package manager). So how relevant those downsides are to you depends on your use cases.

                                                                                          At NoRedInk we’ve been extremely happy with Elm since 2015, but to be fair, we don’t do any i18n. Maybe if we did we’d be less happy with it.

                                                                                          Vendr is another company with 400K+ LoC Elm in production that powers their whole frontend with Elm, and has for years (they hosted the NYC Elm meetup pre-pandemic), and they use TypeScript on the backend, so they’re very aware of how the two stack up!

                                                                                          1. 3

Maybe “might not scale for you” would be more accurate phrasing, depending on your product requirements. There is a subset of applications, even common ones like SPAs, where Elm may be the easiest to work with, because the runtime is something you don’t need to think about. I still stand by the claim that there are quite a few sore spots that can be showstoppers for other applications.

I don’t think TypeScript vs. Elm is the only fair comparison, though. There are good functional (and even TEA-like) frameworks that compile to JavaScript in PureScript, ReScript, derw, ClojureScript, Scala, F#, Haskell+GHCJS that are also worth considering and could cover those limitations. The package manager was mentioned, and it too has issues: working offline, private repositories, the freedom to host packages somewhere other than Microsoft’s GitHub, and dealing with versioning and providing patches for packages released for older versions of the Elm compiler.

If I were put in a position to choose TypeScript or Elm, though, even if I had to make some painful workarounds, I would absolutely choose Elm, because TypeScript isn’t built for functional ergonomics, and its type system’s by-design adherence to the goofiness of JavaScript makes it awful and verbose to work with. I also wouldn’t be where I am without having chosen to invest time learning Elm.

                                                                                          2. 3

TEA?

                                                                                          1. 2

                                                                                            The async generator stuff reminds me of crank. I always thought this was cool, nice to see another approach to this. I’m also a fan of the event handler methods convention here.

                                                                                            1. 3

                                                                                              I opened this post expecting it to be about web components, but after reading it I’m not sure if it might be discussing some completely separate technology with the same name.

                                                                                              However this time there’s probably little (if any) advantage to using web components for server-rendered apps. Isomorphism is complex enough; web components adds extra complexity with the need to render declarative shadow DOM and the still-unknown solution for bundling CSS.

Server-side rendering is the idea that HTML should be generated on the server – the “old-school” approach used by everything from PHP to Rails. Web components are a natural fit for this because they provide a way for the server to write out a description of the page as HTML elements, without having to generate all the nested <div>s and CSS rules and such.

                                                                                              This person is talking about rendering shadow DOM on the server, though, which … honestly it seems completely insane. I don’t know why you’d ever do that, it’s such a bizarre idea that I feel I must be misunderstanding.

                                                                                              The core flaw of component-oriented architecture is that it couples HTML generation and DOM bindings in a way that cannot be undone. What this means in practice is that: […]

                                                                                              • Your backend must be JavaScript. This decision is made for you.

                                                                                              Just absolutely befuddling. Why would using web components imply the presence of a backend at all, much less require that it be written in JavaScript?

                                                                                              My blog’s UI is written with web components. Its “backend” is a bunch of static HTML being served by nginx. If I wanted to add dynamic functionality and wrote a web component for it, then the client side would only need to know which URLs to hit – the choice of backend language wouldn’t be relevant to the client.

                                                                                              I’m building my version of this vision with Corset. I look forward to seeing what other solutions arise.

                                                                                              Oh, ok, this is some sort of spamvertising. I looked at the Corset website and it looks like it’s a JavaScript library for using CSS to attach event handlers to HTML elements. The value proposition over … well, anything else … isn’t clear to me. Why would I want to write event handlers in a CSS-like custom language? Even the most basic JS framework is better than this.

                                                                                              1. 5

I re-read the article to see if the author was confused about “Web Components” vs. “components on the web”, and the answer is no. The author links to a WC library they wrote that mimics React, showing familiarity with both kinds of C/components. If you read closely, the terminology is consistent: “web component” is used any time the author means using the custom elements and Shadow DOM APIs, and “component” is used other times for the general idea of “something like React”. Frankly, it is extremely unfortunate that the crowd promoting custom elements and Shadow DOM has hijacked the term “Web Component” for what they are doing, but the author is not a victim of this confusion.

                                                                                                It’s a straightforward argument:

                                                                                                However this time there’s probably little (if any) advantage to using web components for server-rendered apps. Isomorphism is complex enough; web components adds extra complexity with the need to render declarative shadow DOM and the still-unknown solution for bundling CSS.

“Server-rendered” in this case means the server sends you a webpage with the final DOM. Your blog is not server-rendered: if you disable JS, most of the content on your blog goes away. This is standard terminology in JS-land, but it’s a bit odd, since all webpages need to come from some server somewhere where they are “rendered” in some sense; “rendering” in JS-land means specifically inflating to the final DOM.

                                                                                                Your backend must be JavaScript. This decision is made for you.

That follows from the idea that you want to have <blog-layout> on the server turn into <div style="height: 100%"><div id="container">… and also have the <blog-layout> tag work in the browser. In theory, you could do something else, like WASM or recompiling templates, but so far no one has figured out an alternative that works as more than a proof of concept.

                                                                                                Oh, ok, this is some sort of spamvertising

                                                                                                Literally everything posted on Lobsters that does not have the “history” tag is spamvertising in this sense. It’s a free JS framework, and yes, the author is promoting it because they think it’s good.

                                                                                                I find the idea of a CSS-like declarative language interesting, but looking at the code examples, I still prefer Alpine.js which also has a declarative language but sprinkled in as HTML attributes. I’m glad someone is looking at new ideas though, and I hope the “write a language like CSS” idea sparks something worth using.

                                                                                                1. 2

                                                                                                  I’m still horribly confused even after your elucidations. Granted, I’ve never used Web Components, but I’ve read the full MDN docs recently.

                                                                                                  The bit about “your backend must be JavaScript” confuses me the most. Why? My server can generate HTML with any template engine in any language. The HTML includes my custom component tags like blog-layout, entry-header, etc. At the top of the HTML is a script tag pointing to the JS classes defining those tags. Where’s the problem?

                                                                                                  1. 3

                                                                                                    At the top of the HTML is a script tag pointing to the JS classes defining those tags. Where’s the problem?

                                                                                                    I think this is the problem the author points to, if you turn off JS, you don’t have those tags anymore.

                                                                                                    1. 1

                                                                                                      No, the problem has to do with requiring JS on the server.

I’m, frankly, uninterested in what happens if someone turns off JS in their browser. I imagine that, unsurprisingly, a lot of stuff stops working; quelle horreur! Then they can go use Gemini or Gopher or a TTY-based BBS or read a book or something.

                                                                                                      1. 2

                                                                                                        It is both problems. The first view is blocked until JS loads, which means it is impossible to load the page in under one second. To remove the requirement that JS has loaded on the client you pre-render it on the server, but pre-rendering requires JS on the server (Node, Bun, or Deno).

I love you guys but this is a very old and well known topic in frontend. It’s okay to be backend specialists, but it’s not a confusing post at all. It’s just written for an audience that is part of an ongoing conversation.

                                                                                                        1. 2

Fair, frankly I had a hard time deciphering the blog post.

                                                                                                          I agree though that optimizing for no JS is not interesting to me either.

                                                                                                      2. 2

                                                                                                        It’s important to know what the competition to WC is doing. Popular JS frameworks like Next for React and Nuxt for Vue and Svelte Kit for Svelte etc. let you write code that works both server side and client side. So if you write <NumberShower favorite="1" /> in Vue on server, the server sends <div class="mycomponent-123abc">My favorite number is <mark class="mycomponent-456def">1</mark>.</div> to the browser, so that the page will load even with JS disabled. Obviously, if JS is turned off, then interactions can’t work, but the real benefit is it dramatically speeds up time to first render and lets some of the JS load in the background while displaying the first round of HTML. (See Everyone has JavaScript, right?.)
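In outline, the isomorphic trick is nothing magical: the compiled component is just a function that produces HTML, so the server can call it once to get a string and the browser can call it again to hydrate. A minimal sketch, where `NumberShower` and the scoped class names are invented stand-ins for what a framework like Vue or Svelte actually generates, not a real API:

```javascript
// Hypothetical render function standing in for a compiled Vue/Svelte/React
// component. On the server, Node calls it and ships the string as the initial
// HTML; in the browser, the same function re-runs during hydration.
function NumberShower({ favorite }) {
  return (
    `<div class="mycomponent-123abc">` +
    `My favorite number is <mark class="mycomponent-456def">${favorite}</mark>.` +
    `</div>`
  );
}

// Server side: send this string in the first response, before any JS loads.
const firstRenderHTML = NumberShower({ favorite: 1 });
```

Because the only runtime both sides share is JavaScript, this is also exactly why “isomorphic” implies a JS backend.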

                                                                                                        To do this with WC, you might write <number-shower favorite="1"></number-shower>, but there’s no good way to turn it into first render HTML. Basically the only realistic option is to run a headless browser (!) and scrape the output and send that. Even if you do all that, you would still have problems with “rehydration,” where you want the number-shower to also work on the client side, say if you dynamically changed favorite to be 2 on click.

                                                                                                        The Cloak solution is you just write <div class="number-shower">My favorite number is <mark class="number">1</mark>.</div> in your normal server side templating language, and then you describe it with a CSS-like language that says on click, change 1 to 2. Alpine.js and Stimulus work more or less the same way, but use HTML attributes to tell it to change 1 to 2 on click.
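The general shape of that approach can be sketched in a few lines of vanilla JS. This is not Cloak’s or Alpine’s actual API, just an illustration of attaching behavior to HTML the server already rendered, instead of generating the HTML client-side:

```javascript
// Hypothetical helper: find an element in already-rendered HTML and attach a
// click handler that swaps its text, roughly what "on click, change 1 to 2"
// compiles down to in any of these attach-behavior libraries.
function bindClickText(root, selector, newText) {
  const el = root.querySelector(selector);
  el.addEventListener("click", () => {
    el.textContent = newText;
  });
}

// In a browser this would be something like:
//   bindClickText(document, ".number-shower .number", "2");
```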

                                                                                                        1. 1

                                                                                                          To do this with WC, you might write <number-shower favorite="1"></number-shower>, but there’s no good way to turn it into first render HTML […] The Cloak solution is you just write <div class="number-shower">My favorite number is <mark class="number">1</mark>.</div> in your normal server side templating language

                                                                                                          It’s possible I’m still misunderstanding, but I think you’ve got something weird going on in how you expect web components to be used here. They’re not like React where you define the entire input via attributes. The web component version would be:

                                                                                                          <number-shower>My favorite number is <mark slot=number>1</mark>.</number-shower>

                                                                                                          And then when the user’s browser renders it, if they have no JS, then it renders the same as if the custom elements were all replaced by <div>. It’s not super pretty unless there’s additional stylesheets, but it’s readable as a plain HTML page. Go to my blog in Firefox/Chrome with JS disabled, or heck use a command-line browser like w3m.

                                                                                                          1. 1

                                                                                                            They’re not like React where you define the entire input via attributes.

                                                                                                            No, that’s totally a thing in Web Components. It’s a little tricky though because the attributes behave different if you set them in HTML vs if you set them on a JS DOM Node. You use the lifecycle callbacks to have attributeChangedCallback called whenever someone does el.favorite = "2".
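A minimal sketch of that lifecycle hook; `NumberShower` is an illustrative name, and the stub base class exists only so the snippet also runs outside a browser, where you would extend the real `HTMLElement` and register with `customElements.define`:

```javascript
// Stub so the sketch runs outside a browser; in a real page this is the
// built-in HTMLElement.
globalThis.HTMLElement ??= class {};

class NumberShower extends HTMLElement {
  // Only attributes listed here trigger attributeChangedCallback.
  static observedAttributes = ["favorite"];

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === "favorite") {
      this.favoriteValue = newValue; // a real component would re-render here
    }
  }
}

// In a browser: customElements.define("number-shower", NumberShower);
```

The property-vs-attribute wrinkle is that `attributeChangedCallback` only fires on attribute changes, so components typically add a setter like `set favorite(v) { this.setAttribute("favorite", v); }` to make `el.favorite = "2"` flow through the same callback.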

                                                                                                          2. 1

                                                                                                            I just saw Happy DOM, which can prerender Web Components server side without a headless browser. Cool! Still requires server side JS though, and there’s still a big question mark around rehydration.

                                                                                                          3. 1

                                                                                                            The keyword is “isomorphic”, which in “JS land” means the exact same code is used to render HTML on the server side and on the client side. The only language (ignoring WASM) they can both run is JavaScript.

                                                                                                            1. 1

                                                                                                              So their point is “if you want to use the same code on client and server it has to be JS”? Sounds like an oxymoron. But what does that have to do with Web Components?

                                                                                                              1. 2

                                                                                                                If you want to be isomorphic, you can’t use Web Components™, because they are HTML elements designed to work inside a browser. However, you can use web components (lowercase), e.g., React components, because they are just a bunch of JavaScript code that generates HTML and can run anywhere.

                                                                                                          4. 2

                                                                                                            Frankly, it is extremely unfortunate that the customElement and Shadow DOM promoting crowd have hijacked the term “Web Component” for what they are doing

                                                                                                            Have they? I’m only familiar with the React ecosystem, but I can’t recall ever seeing React components referred to as “web components”.

                                                                                                            1. 1

                                                                                                              I’m saying “Web Components” should be called “Custom Element components” because “Web Components” is an intentionally confusing name.

                                                                                                              1. 2

                                                                                                                Oh I see, yeah, a less generic name than “Web Components” would have been a good idea.

                                                                                                                1. 1

                                                                                                                  To be fair, Web Components as a name goes back to 2011, which is after Angular but before React.

                                                                                                            2. 1

                                                                                                              If you disable JS, most of the content on your blog goes away.

                                                                                                              Did you try it, or are you just assuming that’s how it would work? Because that’s not true. Here’s a screenshot of a post with JS turned off:

                                                                                                              “Server-rendered” in this case means the server sends you a webpage with the final DOM.

                                                                                                              What do you mean by “final DOM”?

In the case of my blog, the server does send the final DOM. That static DOM contains elements that are defined by JS, not by the browser, but the DOM itself is whatever comes on the wire.

                                                                                                              Alternatively, if by “final DOM” you mean the in-memory version, I don’t think that’s a useful definition. Browsers can and do define some HTML standard elements in terms of other elements, so what looks like a <button> might turn into its own miniature DOM tree. You’d have to exclude any page with interactive elements such as <video>, regardless of whether it used JavaScript at all.

                                                                                                              That follows from the idea that you want to have <blog-layout> on the server turn into <div style="height: 100%"><div id="container">

                                                                                                              If I wanted to do server-side templating then there’s a variety of mature libraries available. I’ve written websites with server-generated HTML in dozens of languages including C, C#, Java, Python, Ruby, Haskell, Go, JavaScript, and Rust – it’s an extremely well-paved path. Some of them are even DOM-oriented, like Genshi.

                                                                                                              The point of web components is that it acts like CSS. When I write a CSS stylesheet, I don’t have to pre-compute it against the HTML on the server and then send markup with <div style="height: 100%"> – I just send the stylesheet and the client handles the style processing. Web components serves the same purpose, but expands beyond what CSS’s declarative syntax can express.

                                                                                                              1. 2

                                                                                                                Did you try it

                                                                                                                Yes, actually. I tried it and got the same result as the screenshot, which is that all the sidebars and whatnot are gone and the page is not very readable. Whether you define that as “most of the content” is sort of a semantic point. Anyhow, it’s fine! There’s no reason you should care about it! But tools like Next and SvelteKit do care and do work hard to solve this problem.

                                                                                                                What do you mean by “final DOM”?

                                                                                                                Alternatively, if by “final DOM” you mean the in-memory version, I don’t think that’s a useful definition.

The DOM that the browser has after it runs all the JavaScript. “Server Side Rendering” is the buzzword to search for. It’s a well known thing in frontend circles; there are a million things to read about it. The point about the browser having its own secret shadow DOMs for things like videos and iframes is true, but not really relevant. The point is: does the HTML that comes over the wire parse into the same DOM as the DOM that results after running JS? People have gone to a lot of effort to make them match.

                                                                                                                If I wanted to do server-side templating then there’s a variety of mature libraries available.

                                                                                                                Sure! But some people want to use the same templates on the server and the client (“isomorphism”) but they don’t want to use JavaScript on the server. That’s a hard problem to solve and things like Corset, Stimulus, and Alpine.js are working on it from one angle. Another angle is to just not use client side templating, and do the Phoenix LiveView thing. It’s a big space with tons of experiments going on.

                                                                                                            3. 1

                                                                                                              My blog’s UI is written with web components.

                                                                                                                      <h2 slot=title>

                                                                                                              Looks cool. Can you tell more?

                                                                                                              1. 1

                                                                                                                Anything specific you’re interested in knowing?

                                                                                                                I use a little shim library named Lit, which provides a React-style wrapper around web component APIs. The programmer only has to define little chunks of functionality and then wire them up with HTML. If you’ve ever used an XML-style UI builder like Glade, the dev experience is very similar.

                                                                                                                After porting my blog from server-templated HTML to web components I wanted to reuse some of them in other projects, so I threw together a component library (I think the modern term is “design system”?). It’s called Yuru UI because the pun was irresistible.

                                                                                                                The <yuru-*> components are all pretty simple, so a more interesting example might be <blog-tableofcontents>. This element dynamically extracts section headers from the current page and renders a ToC:

import { LitElement, html, css } from "lit";
import type { TemplateResult } from "lit";
import { repeat } from "lit/directives/repeat.js";

class BlogTableOfContents extends LitElement {
	// Reactive internal state: the <blog-section> elements currently on the page.
	private _sections: NodeListOf<HTMLElement> | null;

	static properties = {
		_sections: { state: true },
	};

	static styles = css`
		:host {
			display: inline-block;
			border: 1px solid black;
			margin: 0 1em 1em 0;
			padding: 1em 1em 1em 0;
		}
		a { text-decoration: none; }
		ul {
			margin: 0;
			padding: 0 0 0 1em;
			line-height: 150%;
			list-style-type: none;
		}
	`;

	constructor() {
		super();
		this._sections = null;
		// Re-scan for sections whenever the document changes.
		(new MutationObserver(() => {
			this._sections = document.querySelectorAll("blog-section");
		})).observe(document, {
			childList: true,
			subtree: true,
			characterData: true,
		});
	}

	render() {
		const sections = this._sections;
		if (sections === null || sections.length === 0) {
			return "";
		}
		return html`${sectionList(sections)}`;
	}
}
customElements.define("blog-tableofcontents", BlogTableOfContents);

const keyID = (x: HTMLElement) => x.id;

function sectionList(sections: NodeListOf<HTMLElement>) {
	// Build a tree of sections keyed by id; top-level sections go in `tops`.
	let tree: any = {};
	let tops: HTMLElement[] = [];
	sections.forEach((section) => {
		tree[section.id] = {
			element: section,
			children: [],
		};
		const parent = (section.parentNode! as HTMLElement).closest("blog-section");
		if (parent) {
			tree[parent.id].children.push(section);
		} else {
			tops.push(section);
		}
	});

	function sectionTemplate(section: HTMLElement): TemplateResult | null {
		const header = section.querySelector("h1,h2,h3,h4,h5,h6");
		if (header === null) {
			return null;
		}
		const children = tree[section.id].children;
		let childList = null;
		if (children.length > 0) {
			childList = html`<ul>${repeat(children, keyID, sectionTemplate)}</ul>`;
		}
		return html`<li><a href="#${section.id}">${header.textContent}</a>${childList}</li>`;
	}

	return html`<ul>${repeat(tops, keyID, sectionTemplate)}</ul>`;
}

                                                                                                                I’m sure a professional web developer would do a better job, but I mostly do backend development, and of my UI experience maybe half is in native GUI applications (Gtk or Qt). Trying to get CSS to render something even close to acceptable can take me days.

                                                                                                                That’s why I love web components so much, each element has its own little mini-DOM with scoped CSS. I can just sort of treat them like custom GUI widgets without worrying about whether adjusting a margin is going to make something else on the page turn purple.

                                                                                                            1. 9

                                                                                                              The thread on LKML about this work really doesn’t portray the Linux community in a good light. With a dozen or so new kernels being written in Rust, I wouldn’t be surprised if this team gives up dealing with Linus and goes to work on adding good Linux ABI compatibility to something else.

                                                                                                              1. 26

                                                                                                                I dunno, Linus’ arguments make a lot of sense to me. It sounds like he’s trying to hammer some realism into the idealists. The easter bunny and santa claus comment was a bit much, but otherwise he sounds quite reasonable.

                                                                                                                1. 19

                                                                                                                  Disagreement is over whether “panic and stop” is appropriate for kernel, and here I think Linus is just wrong. Debugging can be done by panic handlers, there is just no need to continue.

                                                                                                                  Pierre Krieger said it much better, so I will quote:

Part of the reasons why I wrote a kernel is to confirm by experience (as I couldn’t be sure before) that “panic and stop” is a completely valid way to handle Rust panics even in the kernel, and “warn and continue” is extremely harmful. I’m just so so tired of the defensive programming ideology: “we can’t prevent logic errors therefore the program must be able to continue even if a logic error happens”. That’s why my Linux logs are full of stupid warnings that everyone ignores, and that’s why everything is buggy.

                                                                                                                  One argument I can accept is that this should be a separate discussion, and Rust patch should follow Linux rule as it stands, however stupid it may be.

                                                                                                                  1. 7

                                                                                                                    I think the disagreement is more about “should we have APIs that hide the kernel context from the programmer” (e.g. “am I in a critical region”).

                                                                                                                    This message made some sense to me:

                                                                                                                    Linus’ writing style has always been kind of hyperbolic/polemic and I don’t anticipate that changing :( But then again I’m amazed that Rust-in-Linux happened at all, so maybe I should allow for the possibility that Linus will surprise me.

                                                                                                                    1. 1

                                                                                                                      This is exactly what I still don’t understand in this discussion. Is there something about stack unwinding and catching the panic that is fundamentally problematic in, eg a driver?

                                                                                                                      It actually seems like it would be so much better. It recovers some of the resiliency of a microkernel without giving up the performance benefits of a monolithic kernel.

                                                                                                                      What if, on an irrecoverable error, the graphics driver just panicked, caught the panic at some near-top-level entry point, reset to some known good state and continued? Seems like such an improvement.

                                                                                                                      1. 5

                                                                                                                        I don’t believe the Linux kernel has a stack unwinder. I had an intern add one to the FreeBSD kernel a few years ago, but never upstreamed it (*NIX kernel programmers generally don’t want it). Kernel stack traces are generated by following frame-pointer chains and are best-effort debugging things, not required for correctness. The Windows kernel has full SEH support and uses it for all sorts of things (for example, if you try to access userspace memory and it faults, you get an exception, whereas in Linux or FreeBSD you use a copy-in or copy-out function to do the access and check the result).

                                                                                                                        The risk with stack unwinding in a context like this is that the stack unwinder trusts the contents of the stack. If you’re hitting a bug because of stack corruption then the stack unwinder can propagate that corruption elsewhere.

                                                                                                                        1. 1

                                                                                                                          With the objtool/ORC stuff that went into Linux as part of the live-patching work a while back it does actually have a (reliable) stack unwinder:

                                                                                                                          1. 2

                                                                                                                            That’s fascinating. I’m not sure how it actually works for unwinding (rather than walking) the stack: It seems to discard the information about the location of registers other than the stack pointer, so I don’t see how it can restore callee-save registers that are spilled to the stack. This is necessary if you want to resume execution (unless you have a setjmp-like mechanism at the catch site, which adds a lot of overhead).

                                                                                                                            1. 2

                                                                                                                              Ah, a terminological misunderstanding then I think – I hadn’t realized you meant “unwinding” specifically as something sophisticated enough to allow resuming execution after popping some number of frames off the stack; I had assumed you just meant traversal of the active frames on the stack, and I think that’s how the linked article used the term as well (though re-reading your comment now I realize it makes more sense in the way you meant it).

                                                                                                                              Since AFAIK it’s just to guarantee accurate stack backtraces for determining livepatch safety I don’t think the objtool/ORC functionality in the Linux kernel supports unwinding in your sense – I don’t know of anything in Linux that would make use of it, aside from maybe userspace memory accesses (though those use a separate ‘extable’ mechanism for explicitly-marked points in the code that might generate exceptions, e.g. this).

                                                                                                                              1. 2

                                                                                                                                If I understand the userspace access things correctly, they look like the same mechanism as FreeBSD (no stack unwinding, just quick resumption to an error handler if you fault on the access).

                                                                                                                                I was quite surprised that the ORC[1] is bigger than DWARF. Usually DWARF debug info can get away with being large because it’s stored in pages of the binary separate from the code and so doesn’t consume any physical memory unless used. I guess speed does matter for things like DTrace / SystemTap probes, where you want to do a full stack trace quickly, but in the kernel you can’t easily lazily load the code.

                                                                                                                                The NT kernel has some really nice properties here. Almost all of the kernel’s memory (including the kernel’s code) is pageable. This means that the kernel’s unwind metadata can be swapped out if not in use, except for the small bits needed for the page-fault logic. In Windows, the metadata for paged-out pages is stored in PTEs and so you can even page out page-table pages, but you can then potentially need to page in every page in a page-table walk to handle a userspace fault. That extreme case probably mattered a lot more when 16 MiB of RAM was a lot for a workstation than it does now, but being able to page out rarely-used bits of kernel is quite useful.

                                                                                                                                In addition, the NT kernel has a complete SEH unwinder and so can easily throw exceptions. The SEH exception model is a lot nicer than the Itanium model for in-kernel use. The Itanium C++ ABI allocates exceptions and unwind state on the heap and then does a stack walk, popping frames off to get to handlers. The SEH model allocates them on the stack and then runs each cleanup frame, in turn, on the top of the stack then, at catch, runs some code on top of the stack before popping off all of the remaining frames[2]. This lets you use exceptions to handle out-of-memory conditions (though not out-of-stack-space conditions) reliably.

                                                                                                                                [1] Such a confusing acronym in this context, given that the modern LLVM JIT is also called ORC.

                                                                                                                                [2] There are some comments in the SEH code that suggest that it’s flexible enough to support the complete set of Common Lisp exception models, though I don’t know if anyone has ever taken advantage of this. The Itanium ABI can’t support resumable exceptions and needs some hoop jumping for restartable ones.

                                                                                                                        2. 4

                                                                                                                          What you are missing is that stack unwinding requires destructors, for example to unlock locks you locked. It does work fine for Rust kernels, but not for Linux.
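The role destructors play during unwinding can be shown in userspace Rust (a sketch only; in-kernel Rust can't rely on this today). When a panic unwinds past a lock guard, the guard's `Drop` implementation releases the mutex, which is exactly the cleanup that C code has to do by hand with goto-style error paths:

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::{Arc, Mutex};

// Panic while holding a lock guard: unwinding runs the guard's destructor,
// which releases (and poisons) the mutex instead of leaving it held forever.
// Returns whether the mutex ended up poisoned.
pub fn panic_under_lock(lock: &Arc<Mutex<u32>>) -> bool {
    let l = Arc::clone(lock);
    let _ = panic::catch_unwind(AssertUnwindSafe(move || {
        let _guard = l.lock().unwrap();
        panic!("failed while holding the lock");
    }));
    // Poisoned, but not deadlocked: the destructor ran during unwinding.
    lock.lock().is_err()
}

fn main() {
    let lock = Arc::new(Mutex::new(0u32));
    assert!(panic_under_lock(&lock));
    // The data is still reachable despite the poisoning.
    assert_eq!(*lock.lock().unwrap_err().into_inner(), 0);
}
```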

                                                                                                                      2. 7

                                                                                                                        Does the kernel have unprotected memory and just rolls with things like null pointer dereferences reading garbage data?

                                                                                                                        For errors that are expected Rust uses Result, and in that case it’s easy to sprinkle the code with result.or(whoopsie_fallback) that does not panic.
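A minimal sketch of that pattern (the `read_sensor` function and its error string are made up for illustration): expected failures travel as `Result` values, and a fallback is substituted without any panic.

```rust
// Expected failures as values: no panic, just an Err to handle.
pub fn read_sensor(raw: Option<u32>) -> Result<u32, &'static str> {
    raw.ok_or("sensor not ready")
}

fn main() {
    // `.unwrap_or(fallback)` (or `.or(Ok(fallback))`) substitutes a
    // "whoopsie fallback" instead of panicking on the expected error.
    assert_eq!(read_sensor(Some(7)).unwrap_or(0), 7);
    assert_eq!(read_sensor(None).unwrap_or(0), 0);
}
```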

                                                                                                                        1. 4

                                                                                                                          As far as I understand, yeah, sometimes the kernel would prefer to roll with corrupted memory as far as possible:

                                                                                                                          So BUG_ON() is basically ALWAYS 100% the wrong thing to do. The argument that “there could be memory corruption” is [not applicable in this context]. See above why.

                                                                                                                          (from docs and linked mail).

                                                                                                                          Null dereferences in particular, though, usually do what BUG_ON essentially does.

                                                                                                                          And things like out-of-bounds accesses seem to end with null-dereference:


                                                                                                                          Though, notably, out-of-bounds access doesn’t immediately crash the thing.

                                                                                                                          1. 8

                                                                                                                            As far as I understand, yeah, sometimes the kernel would prefer to roll with corrupted memory as far as possible:

                                                                                                                            That’s what I got from the thread and I don’t understand the attitude at all. Once you’ve detected memory corruption then there is nothing that a kernel can do safely and anything that it does risks propagating the corruption to persistent storage and destroying the user’s data.

                                                                                                                            Linus is also wrong that there’s nothing outside of a kernel that can handle this kind of failure. Modern hardware lets you make it very difficult to accidentally modify the kernel page tables. As I recall, XNU removes all of the pages containing kernel code from the direct map and protects the kernel’s page tables from modification, so that unrecoverable errors can take an interrupt vector to some immutable code that can then write crash dumps or telemetry and reboot. Windows does this from the Secure Kernel, which is effectively a separate VM that has access to all of the main VM’s memory but which is protected from it. On Android, Hafnium provides this kind of abstraction.

                                                                                                                            I read that entire thread as Linus asserting that the way that Linux does things is the only way that kernel programming can possibly work, ignoring the fact that other kernels use different idioms that are significantly better.

                                                                                                                            1. 4

                                                                                                                              Reading this thread is a little difficult because the discussion is evenly spread between the patch set being proposed, some hypothetical plans for further patch sets, and some existing bad blood between the Linux and Rust community.

                                                                                                                              The “roll with corrupted memory as far as possible” part is probably a case of the “bad blood” part. Linux is way more permissive with this than it ought to be but this is probably about something else.

                                                                                                                              The initial Rust support patch set failed very eagerly and panicked, including on cases where it really is legit not to panic, like when failing to allocate some memory in a driver initialization code. Obviously, the Linux idiom there isn’t “go on with whatever junk pointer kmalloc gives you there” – you (hopefully – and this is why we should really root for memory safety, because “hopefully” shouldn’t be a part of this!) bail out, that driver’s initialization fails but kernel execution obviously continues, as it probably does on just about every general-purpose kernel out there.
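The safe-kernel idiom described here, "allocation failure makes initialization fail, the kernel carries on," maps onto Rust's fallible-allocation APIs. A hedged userspace sketch (the `init_buffer` function is invented; real kernel Rust uses its own allocator interfaces, not `std`):

```rust
use std::collections::TryReserveError;

// Driver-init-style allocation that bails out with an error on failure
// instead of panicking/aborting, mirroring a checked kmalloc.
pub fn init_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve_exact(len)?; // fallible allocation: Err, not abort
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    assert!(init_buffer(4096).is_ok());
    // An absurd request fails cleanly with Err rather than panicking.
    assert!(init_buffer(usize::MAX).is_err());
}
```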

                                                                                                                              The patchset’s authors actually clarified immediately that the eager panics are actually just an artefact of the early development status – an alloc implementation (and some bits of std) that follows safe kernel idioms was needed, but it was a ton of work so it was scheduled for later, as it really wasn’t relevant for a first proof of concept – which was actually a very sane approach.

                                                                                                                              However, that didn’t stop seemingly half the Rustaceans on Twitter from taking out their pitchforks, insisting that you should absolutely fail hard if memory allocation fails because what else are you going to do, and ranting about how Linux is unsafe and riddled with security bugs because it’s written by obsolete monkeys from the nineties whose approach to memory allocation failures is “well, what could go wrong?”. Which is really not the case, and it really does ignore how much work went into bolting the limited memory safety guarantees that Linux offers onto as many systems as it does, while continuing to run critical applications.

                                                                                                                              So when someone mentions Rust’s safety guarantees, even in hypothetical cases, there’s a knee-jerk reaction for some folks on the LKML to feel like this is gonna be one of those cases of someone shitting on their work.

                                                                                                                              I don’t want to defend it, it’s absolutely the wrong thing to do and I think experienced developers like Linus should realize there’s a difference between programmers actually trying to use Rust for real-world problems (like Linux), and Rust advocates for whom everything falls under either “Rust excels at this” or “this is an irrelevant niche case”. This is not a low-effort patch, lots of thinking went into it, and there’s bound to be some impedance mismatch between a safe language that tries to offer compile-time guarantees and a kernel historically built on overcoming compiler permissiveness through idioms and well-chosen runtime tradeoffs. I don’t think the Linux kernel folks are dealing with this the way they ought to be dealing with it, I just want to offer an interpretation key :-D.

                                                                                                                          2. 1

                                                                                                                            No expert here, but I imagine the Linux kernel has methods for handling expected errors and null checks.

                                                                                                                          3. 6

                                                                                                                            In an ideal world we could have panic and stop in the kernel. But what the kernel does now is what people expect. It’s very hard to make such a sweeping change.

                                                                                                                            1. 6

                                                                                                                              Sorry, this is a tangent, but your phrasing took me back to one of my favorite webcomics, A Miracle of Science, where mad scientists suffer from a “memetic disease” that causes them to e.g. monologue and explain their plans (and other cliches), but also allows them to make impossible scientific breakthroughs.

                                                                                                                              One sign that someone may be suffering from Science Related Memetic Disorder is the phrase “in a perfect world”. It’s never clearly stated exactly why mad scientists tend to say this, but I’d speculate it’s because in their pursuit of their utopian visions, they make compromises (ethical, ugly hacks to technology, etc.), that they wouldn’t have to make in “a perfect world”, and this annoys them. Perhaps it drives them to take over the world and make things “perfect”.

                                                                                                                              So I have to ask… are you a mad scientist?

                                                                                                                              1. 2

                                                                                                                                I aspire to be? bwahahaa

                                                                                                                                1. 2

                                                                                                                                  Hah, thanks for introducing me to that comic! I ended up archive-bingeing it.

                                                                                                                                2. 2

                                                                                                                                  What modern kernels use “panic and stop”? Is it a feature of the BSDs?

                                                                                                                                  1. 8

                                                                                                                                    Every kernel except Linux.

                                                                                                                                    1. 2

                                                                                                                      I didn’t exactly mean BSD. And I can’t name one. But verified ones? Redox?

                                                                                                                                      1. 1

                                                                                                                                        I’m sorry if my question came off as curt or snide, I was asking out of genuine ignorance. I don’t know much about kernels at this level.

                                                                                                                                        I was wondering how much an outlier the Linux kernel is - @4ad ’s comment suggests it is.

                                                                                                                                        1. 2

                                                                                                                                          No harm done

                                                                                                                                  2. 4

                                                                                                                                    I agree. I would be very worried if people writing the Linux kernel adopted the “if it compiles it works” mindset.

                                                                                                                                    1. 2

                                                                                                                                      Maybe I’m missing some context, but it looks like Linus is replying to “we don’t want to invoke undefined behavior” with “panicking is bad”, which makes it seem like irrelevant grandstanding.

                                                                                                                                      1. 2

                                                                                                                                        The part about debugging specifically makes sense in the “cultural” context of Linux, but it’s not a matter of realism. There were several attempts to get “real” in-kernel debugging support in Linux. None of them really gained much traction, because none of them really worked (as in, reliably, for enough people, and without involving ritual sacrifices), so people sort of begrudgingly settled for debugging by printf and logging unless you really can’t do it otherwise. Realistically, there are kernels that do “panic and stop” well and are very debuggable.

                                                                                                                                        Also realistically, though: Linux is not one of those kernels, and it doesn’t quite have the right architecture for it, either, so backporting one of these approaches onto it is unlikely to be practical. Linus’ arguments are correct in this context but only insofar as they apply to Linux, this isn’t a case of hammering realism into idealists. The idealists didn’t divine this thing in some programming class that only used pen, paper and algebra, they saw other operating systems doing it.

                                                                                                                                        That being said, I do think people in the Rust advocacy circles really underestimate how difficult it is to get this working well for a production kernel. Implementing panic handling and a barebones in-kernel debugger that can nonetheless usefully handle 99% of the crashes in a tiny microkernel is something you can walk third-year students through. Implementing a useful in-kernel debugger that can reliably debug failures in any context, on NUMA hardware of various architectures, even on a tiny, elegant microkernel, is a whole other story. Pointing out that there are Rust kernels that do it well (Redshirt comes to mind) isn’t very productive. I suspect most people already know it’s possible, since e.g. Solaris did it well, years ago. But the kind of work that went into that, on every level of the kernel, not just the debugging end, is mind-blowing.

                                                                                                                                        (Edit: I also suspect this is the usual Rust cultural barrier at work here. The Linux kernel community is absolutely bad at welcoming new contributors. New Rust contributors are also really bad at making themselves welcome. Entertaining the remote theoretical possibility that, unlikely though it might be, it is nonetheless in the realm of physical possibility that you may have to bend your technology around some problems, rather than bending the problems around your technology, or even, God forbid, that you might be wrong about something, can take you a very long way outside a fan bulletin board.)

                                                                                                                                        1. 1

                                                                                                                                          easter bunny and santa claus comment

                                                                                                                                          Wow, Linus really has mellowed over the years ;)

                                                                                                                                      1. 5

                                                                                                                        I started writing a basic interpreter in Rust to learn the language last week; I’m hoping to have function stacks/contexts this week.

                                                                                                                        Right now it only supports i32 and math operators as expressions, which makes it easier to focus on figuring out functions and all that.