1. 2

    Back in the ’00s I printed out copies of _why’s guide to use as a textbook for teaching kids programming. It’s the most successful literature I’ve ever used for this purpose.

    1. 4

      Trying to wrap my head around Lisp macros. I have long had a misconception that Lisp-2s existed as a weird compromise to allow macros to still be fairly useful in the face of lexical scoping. But I have recently seen evidence that, in fact, the opposite is true, and it is much easier to write correctly behaved macros in a Lisp-1.

      I am not a Lisp person, so I’m coming at this pretty blind. I’ve been reading papers about the history of Lisp, and trying to understand where my misconception came from. So far I’ve seen this claim repeated in a few places, but nowhere that includes an example of the “right” way to reconcile lexical scope and quasiquoting. So I have a lot more reading to do…

      1. 1

        This really doesn’t have anything to do with Lisp-1 vs Lisp-2 so much as it has to do with hygienic vs non-hygienic macros. Your misconception might stem from the fact that the most common Lisp-2 (Common Lisp) has a non-hygienic macro system and the most common Lisp-1 (Scheme) tends to have hygienic macro systems. I think the idea that Lisp-2 makes it “easier” to deal with non-hygienic macros probably has to do with the fact that if you separate the function environment from the regular variable environment, then it is often the case that the function environment is much simpler than the variable environment. Typical programs don’t introduce a lot of function bindings except at top or package level.
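
        For concreteness, here is a minimal Common Lisp sketch of the variable-capture problem that hygiene solves, plus the conventional GENSYM workaround (the macro names are invented for illustration):

        ```lisp
        ;; A sketch of classic variable capture in a non-hygienic macro.
        ;; BAD-SWAP's expansion uses a temporary named TMP; if the caller
        ;; also has a variable named TMP, the expansion silently breaks.
        (defmacro bad-swap (a b)
          `(let ((tmp ,a))
             (setf ,a ,b)
             (setf ,b tmp)))

        ;; (let ((tmp 1) (x 2)) (bad-swap tmp x) (list tmp x)) ; => (1 2), not swapped!

        ;; The conventional fix: have the macro invent a fresh, uninterned
        ;; symbol with GENSYM, so capture is impossible.
        (defmacro good-swap (a b)
          (let ((tmp (gensym "TMP")))
            `(let ((,tmp ,a))
               (setf ,a ,b)
               (setf ,b ,tmp))))
        ```

        A hygienic macro system effectively does this renaming for you, automatically.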

        1. 2

          This is a very reasonable assumption, but in this case I was only thinking about “classic” quasiquote-style macros, and how they differ in Lisp-1s and Lisp-2s.

          > I think the idea that Lisp-2 makes it “easier” to deal with non-hygienic macros probably has to do with the fact that if you separate the function environment from the regular variable environment, then it is often the case that the function environment is much simpler than the variable environment.

          Yeah, that matches my prior assumption. I was very surprised when I learned how a modern Lisp-1 with quasiquote handles the function capture problem – far more elegantly than a separate function namespace does. Then I learned that Common Lisp can do the same thing (in a much uglier way), and I was very surprised that it is not just the canonical way to deal with unhygienic macros. Now it seems like more of a historical accident that Lisp-2s are considered (by some people) “better” for writing unhygienic macros than Lisp-1s.

          I’m probably not explaining this well. I ended up writing a blog post about my findings that is very long, but does a better job of explaining my misunderstanding.

          https://ianthehenry.com/posts/janet-game/the-problem-with-macros/

        2. 1

          Have you had a look at Common Lisp yet? I’m learning macros there and it seems straightforward.

          1. 2

            Yep! I’m using Common Lisp as my prototypical Lisp-2 as I try to work through and understand this.

            The thing I’m having trouble with is that if you want to call a function in a macro expansion, you have to do the whole funcall-unquote-sharp-quote dance, or risk the function being looked up in the calling code’s scope. It seems CL tried to make this less necessary by saying there are certain functions you cannot shadow or redefine, so you only actually have to do this with user-defined functions, but that seems like such a big hack that I must be missing something.
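
            To make the dance concrete, here is a sketch (FROBNICATE is a made-up helper; this is my understanding of the pattern, not necessarily the canonical way to write it):

            ```lisp
            ;; A user-defined helper that the macro expansion wants to call.
            (defun frobnicate (x) (* x 2))

            ;; Naive version: the expansion contains the symbol FROBNICATE,
            ;; which is looked up in the *caller's* environment, so an FLET
            ;; at the call site can capture it.
            (defmacro double-naive (x)
              `(frobnicate ,x))

            ;; The funcall-unquote-sharp-quote dance: #'FROBNICATE is
            ;; evaluated when the macro expands, so the expansion carries
            ;; the function object itself, immune to call-site shadowing.
            ;; (Caveat: a literal function object in an expansion won't
            ;; survive COMPILE-FILE, which is part of why it feels hacky.)
            (defmacro double-safe (x)
              `(funcall ,#'frobnicate ,x))

            ;; (flet ((frobnicate (x) (* x 100)))
            ;;   (list (double-naive 3)    ; => 300 -- captured by the FLET
            ;;         (double-safe 3)))   ; => 6   -- the original function
            ```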

            1. 1

              It’s the same thing with variables. Common Lisp macros just don’t know anything about lexical scope. In fact, arguably, they don’t even operate on code at all. They operate on the union of numbers, symbols, strings, and lists of those things. Code denotes something, but without knowledge of the lexical context, the things CL macros transform cannot even come close to being “code”.
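
              A toy illustration (the macro name is invented): the macro function receives nothing but a bare list, and it has no way to ask what any symbol in it denotes:

              ```lisp
              ;; INSPECT-FORM prints whatever s-expression it receives at
              ;; macroexpansion time, then expands to it unchanged. It sees
              ;; the list (+ X 1), but it cannot tell whether X names a
              ;; lexical variable, a special variable, or nothing at all.
              (defmacro inspect-form (form)
                (format t "~&got a ~a: ~s~%" (type-of form) form)
                form)

              ;; (let ((x 41)) (inspect-form (+ x 1)))
              ;; prints: got a CONS: (+ X 1)
              ;; => 42
              ```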

              This is why I like Scheme macros so much. They operate on a “dressed” representation of the code which includes syntactic information like scoping, as well as useful information like source line numbers. By default they do the right thing, and most Schemes support syntax-case, which gives you an escape hatch as well. I also personally find syntax-case macros easier to understand.

              1. 1

                Yeah, I really hate that approach.

          1. 1

            I’m trying to get polynomial commitments solidified in my head

            1. 1

              Good riddance!

              This makes sense when seen in line with the death of GO111MODULE. The two having separate behaviors has caused a lot of problems for those who don’t understand the history of Go before it had package management. I may finally get to stop having to help my developers understand how to get things into the $GOPATH versus into their module.

              1. 1

                I love this! Great project concept!

                One question. In many frameworks, multiples of the same param key are treated as an array of values. Did you get your approach, taking the last value as canonical, from a standard out there? I always wondered if there is something that tells us how to handle that.

                1. 1

                  Thank you for the kind words! I’m not familiar with a specification for the content of a query string. I see that RFC 3986 mentions “query components are often used to carry identifying information in the form of key=value pairs.”

                  I’ve seen it handled three ways: first-value, array, or last-value. First-value can have a security benefit in some contexts, since it resists extra parameters being appended to the end. Array, of course, is handy if you want to accept an array. Last-value is easy to implement. I’ve also seen conventions like “array[]=foo;array[]=bar;array[]=baz” or “foo[bar]=baz” used to encode more complex data structures.
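
                  For concreteness, a small sketch of the three conventions (in Common Lisp only for consistency with the other examples on this page; no percent-decoding, and all names are invented):

                  ```lisp
                  (defun parse-query (query)
                    "Split a query string like \"a=1&b=2&a=3\" into an ordered alist of (key . value) pairs."
                    (loop for start = 0 then (1+ amp)
                          for amp = (position #\& query :start start)
                          for pair = (subseq query start amp)
                          for eq = (position #\= pair)
                          collect (cons (subseq pair 0 eq)
                                        (if eq (subseq pair (1+ eq)) ""))
                          while amp))

                  ;; The three conventions for a repeated key:
                  (defun first-value (key pairs)
                    (cdr (assoc key pairs :test #'string=)))

                  (defun last-value (key pairs)
                    (cdr (assoc key (reverse pairs) :test #'string=)))

                  (defun all-values (key pairs)
                    (loop for (k . v) in pairs
                          when (string= k key) collect v))

                  ;; (let ((q (parse-query "a=1&b=2&a=3")))
                  ;;   (list (first-value "a" q)     ; => "1"
                  ;;         (last-value "a" q)      ; => "3"
                  ;;         (all-values "a" q)))    ; => ("1" "3")
                  ```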

                1. 4

                  One thing that Erlang gets right that other people miss is hot reloading. A distributed system that is self-healing has to be able to hot reload new fixes.

                  That’s my biggest frustration with the new BEAM compilers in Rust and so on: they choose not to implement hot reloading; it’s often explicitly listed as a non-goal.

                  In a different video, Joe says to do the hard things first: if you can’t do the hard things, the project will still fail, just at a later point. The hard thing here is hot reloading of isolated processes; getting BEAM compiled into a single binary is not.

                  1. 2

                    Hot reloading is one of those features that I have never actually worked with (at least, not the way Erlang does it!). So, possibly for that reason alone, I don’t see the absence of the feature as a major downside of the new BEAM compiler. I wonder if the lack of development in that area is just because it is a rare feature to have: while it seems like a nice-to-have, it isn’t a paradigm shift in most people’s minds (mine included!).

                    The benefits of it do seem quite nice, though. Another lobste.rs member had written a comment about their Erlang system, which could deploy updates in under 5 minutes thanks to hot reloading, as if nothing had changed at all (no systems needed to restart). This certainly seems incredible, but it is hard to fully understand the impact without having worked in a situation like this.