Threads for lightandlight

  1. 11

    Once you’ve added =, I like to change -> to : as well - fn foo(a: X, b: Y): Z = .... To me it’s like saying, “when a has type X and b has type Y, foo(a, b) has type Z”.

    1. 3

      This is exactly how procedure syntax works in Nim.

      proc foo(s: string): string =
        "foo" & s
      
      1. 2

        While I’m not certain of the connection, this reminds me very much of the declaration syntax C gets so much shit for. Z foo(X a, Y b) -> foo(a, b) :: Z.

        void (*signal(int sig, void (*func)(int)))(int) -> (*signal(sig, func))(some int) :: void. What is func? Well, we know (*func)(some int) :: void. So signal takes (int, pointer to function taking int to void) to pointer to function taking int to void. Easy, right?

        1. 1

          I like this, though I prefer omitting the colon entirely, like SQL.

        1. 1

          In these situations I use Cachegrind and kcachegrind, which I find quite good.

          1. 2

            Their program’s outputs depend on the “order” of elements in a hash map. The author doesn’t want to change that, and I’m curious why.

            In my opinion, a hash map is intrinsically unordered, and if you “iterate over” a hash map then you should treat the results as non-deterministic. If you call “head” on a hash map (whatever that means), proceed as if you received one of the hash map’s elements uniformly at random.

            1. 1

              How does this fit with post-build-hook? The manual page[1] uses your “wrong solution #0”. My guess is that using a simple nix copy in post-build-hook is equivalent to your “correct solution”, because post-build-hook runs for all build dependencies. Put another way, your “correct solution” is a way to get build caching without setting post-build-hook.

              What do you think?

              [1] https://nixos.org/manual/nix/stable/advanced-topics/post-build-hook.html (archived)

              1. 1

                Yeah, I think post-build-hook would have the same effect. I think the main reason not to use post-build-hook is if you only wanted to cache some builds but not others.

              1. 3

                You can’t run broken Rust programs (translating compile time errors to runtime errors).

                Feels like there needs to be a swing back toward dynamic or gradual typing. I have gotten good mileage out of gradual typing with MyPy, although performance is still a big problem.

                One of the Rust core devs is experimenting with a language with dynamic semantics first: https://dada-lang.org/docs/about (i.e. to solve precisely this problem – you can run broken programs)


                A bunch of these criticisms fall under “biformity” (which as far as I can tell is a new word, but seems like a good one for the problem)

                https://hirrolot.github.io/posts/why-static-languages-suffer-from-complexity#

                e.g. metalanguage vs. language, and async vs. sync. Also “shadow language”: https://gbracha.blogspot.com/2014/09/a-domain-of-shadows.html

                1. 3

                  In what situations would one benefit from running a program that contains type errors?

                  1. 3

                    For me the interest is in experimentation and exploration during development. Instead of getting everything to line up exactly before you can run the program, you can still give it a whirl just to see what happens, even if it crashes. I find myself doing this anyway by commenting out broken stuff and putting a todo!() in its place - it would be great if the compiler could handle this for me.

                    1. 1

                      Anything where you don’t yet understand the data or the problem. This arises in basically every programming domain, but to give a few examples, I’d say “data science”, experimental science, and reverse engineering. Or just “business”, which in its best forms is like science.

                      Often the easiest way to understand the DATA is to write CODE. But that does not imply having types.

                      I wrote a lot about that here:

                      https://www.oilshell.org/blog/2022/03/backlog-arch.html#tradeoffs-between-dynamic-and-static-types-faq

                      It’s in very condensed FAQ and link-rich form. I would emphasize the comments by Don Syme, designer of F#, a typed functional language inspired by OCaml. F# is about programming with DATA and functions, not with types, or at least not putting types first.


                      For a very concrete example, try writing down an AST (type definition) for the language of $# ${#} ${##} ${###} ${####} in bash without writing any code. From experience I’d say that it’s hard to get this right “all at once”, and experimenting with the data dynamically, over time, is the best way to get it right.

                      The Five Meanings of #. And What Does ${####} Mean?

                      There are dozens of other examples like this from bash/Oil, including some that came up this year and last.

                      Another concrete example is from some of the “Rust vs. Zig” links – i.e. dealing with the reality of hardware pins, which are not modeled by a type system. They are “reality”.

                      Types are a map or model for reality, not reality itself :) Insisting that everything must be typed before you can run it is very limiting.

                    2. 2

                      Feels like there needs to be a swing back toward dynamic or gradual typing.

                      Yep! There’s been lots of good, practical progress made in static type systems the last decade or so, and for a while it looked like we’d maybe be able to say “why do we need dynamic typing at all, we can do whatever we want with static typing”. Now we’re hitting the limits of that spurt of progress, in the form of type systems that can express most things we’d want but are complex enough to either be super inconvenient or outright undecidable. It’s probably time for the pendulum to start swinging the other way for a while, as it did when V8 was first released and it looked like we could make JITs so fast we could say “why do we need static typing at all, we can do whatever we want with dynamic typing”.

                      1. 2

                        True, I have seen a disheartening amount of “why do we need dynamic typing at all?” lately… Our field has a short memory.

                        The shortest answer I give now is to imagine if the entire Internet was a typed system. If you can’t see the problems with that, then you don’t understand type systems or software very well :)

                        Basically at a certain scale, you always need dynamic typing. And all type systems are domain-specific. I framed it in this post as fine-grained types vs. coarse-grained “waists”.

                        https://www.oilshell.org/blog/2022/03/backlog-arch.html#tradeoffs-between-dynamic-and-static-types-faq


                        But I also do remember some people saying that “JITs make dynamic languages fast enough”! They can recover the types! Which turned out not to really be true.

                        So yeah unfortunately it’s a bit like politics – the extreme positions tend to get magnified, whereas the truth or most effective paradigm is somewhere in between, and depends on the situation. And people can learn a lot from each other, and from history! :-)

                    1. 1

                      I’d probably benchmark against a simple, non-JIT JavaScript interpreter, and treat statistically significant improvements in that setting as successful optimisations (even if they cause performance regressions in a particular browser).

                      At the end of the day you’re shipping a .js file, so you should use the cost model implied by the JavaScript standard. I think it’s unreasonable to expect someone to optimise against the intersection of multiple undocumented browser JIT cost models.

                      1. 7

                        This is not a good plan. The cost model implied by the standard does not remotely correspond to the cost model of any production JS engine, not even if you force them to only run with their interpreters.

                        The ECMAScript spec specifies behaviour such that this super simple code

                        function f(n) {
                          var x = 1;
                          for (let I = 0; I < n; I++) {
                            x = x - I
                          }
                          return x
                        }
                        

                        requires behaviour that no production JS engine would consider even in a non-JIT mode. Let’s go through what the spec requires but modern JS engines would not do:

                        function f(n) {
                          // 1. Allocate a new activation object (I think the latest iteration of the spec doesn't call them activations?)
                          // 2. push that object onto the top of the scope chain
                          // 3. Insert a new DontDelete property labeled "n" with the value of the first
                          //    parameter into the activation object
                          //    This means allocating property storage on the activation object (as
                          //    far as the spec is concerned it's
                          //    basically a normal JS object, so the [[PutDirect]] operation here does
                          //    the same pile of hash lookups and
                          //    allocation of storage space
                          // 4. Insert a new DontDelete property labeled "x" and set it to the value 
                          //    undefined, with the same costs as for the "n" above.
                          var x = 1;
                          // 5. Allocate a new value of type Number containing the value 1
                          // 6. Walk up the current scope chain to find the activation that has the property "x" - that is
                          //    a. for (currentScope = top of scope chain; currentScope != empty; currentScope = parent of currentScope)
                          //    b.    if [[HasOwnProperty]](currentScope, "x") break;
                          // 7. Perform [[GetOwnPropertySlot]](currentScope, "x")
                          // 8. Test that the slot has not been initialized, and throw an exception if it has been
                          // 9. Perform [[PutOwnProperty]] to store the object containing the value 1 into the "x" slot of currentScope
                          // 10. allocate an object of type number, containing the value 0
                          // 11. Repeat steps 1 and 2
                          // 12. Repeat step 3 for the name "I", assigning the value from 10
                          //     for (let I = 0; I < n; I++) {
                          // 13. Repeat step 6 for "I"
                          // 14. Repeat step 7 for "I"
                          // 15. Test the slot from 14 to ensure that it has been initialized,
                          //     and throw an exception if not
                          // 16. Load the value from 15
                          // 17. Repeat steps 1 and 2
                          // 18. Repeat step 6 for "I"
                          // 19. Repeat step 7 for "I"
                          // 20. Test the slot from 19 to ensure that it has not been initialized,
                          //     and throw an exception if it has
                          // 21. Store the value from 16 to the slot from 19
                          // 22. Repeat step 3 for the name "I", assigning the value from 10
                          // 23. Repeat step 6 for "I"
                          // 24. Repeat step 7 for "I"
                          // 25. Test the slot from 24 to ensure that it has been initialized,
                          //     and throw an exception if not
                          // 26. Repeat steps 23-25 for "n"
                          // 27. Allocate an object of type Boolean to contain the result of
                          //     performing the < operation on 25 and 26
                          //     The logic to determine how < is performed is based on the types
                          //     of the input and its not unreasonable for an interpreter to
                          //     perform those checks, so we'll exclude these from our "things
                          //     that ES says happen but don't actually happen in reality"
                          // 28. Perform [[ToBoolean]] on the object from 27, then if false
                          //     a. pop the top scope from the scope chain
                          //     b. terminate the loop
                            x = x - I
                          // 29. Repeat step 6 for "x" -- this is looking up the activation
                          //     we'll be storing to
                          // 30. repeat steps 23-24 for "x"
                          // 31. repeat steps 23-25 for "I"
                          // 32. perform [[ToNumber]] on the value from step 30
                          // 33. perform [[ToNumber]] on the value from step 31
                          // 34. allocate a new object of type number, containing the result
                          //     of subtracting <33> from <32>
                          // 35. Look up the property slot for "x" from the object you found 
                          //     in step 29
                          // 36. Perform [[Put]] using the object from 19, the slot from 25,
                          //     and the value from 34. [[Put]] checks for readonly properties
                          //     and setters
                          // 37. Repeat steps 6 and 7 for "I"
                          // 38. If I is not initialized throw an exception
                          // 39. load the value from the slot in 38
                          // 40. Perform [[ToNumber]] on the value from 39
                          // 41. Create a new object of type number containing the result
                          //     of adding 1 to the value from step 40
                          // 42. Pop the top off the scope chain
                          // 43. Repeat 6&7 for I
                          // 44. store the result of 41 to the slot from 43
                          // 45. loop back to 13
                          }
                          // 46. pop the top off the scope chain
                          return x
                        }
                        

                        Now, this isn’t 100% accurate as I’m going entirely off memory, and this isn’t a super great code writing environment :D, but the general gist is correct. The important thing though isn’t “oh look how explicit and/or long winded the spec is”, it’s to look at what the cost model of the spec is.

                        In this model there are at least 4 object allocations on each iteration of the loop, plus a few outside of it. None of those objects would be created in a modern JS engine, so that will dramatically skew what your performance looks like.

                        There are also I think more than 12 table lookups per iteration according to the spec model, and those lookups are expensive. 15+ years ago there was a significant performance difference between

                        var a = expr1
                        var b = expr2
                        return a + b
                        

                        and

                        return expr1 + expr2
                        

                        Entirely due to the repeated property lookups.

                        The cost of property access in ECMAScript is similarly a matter of table lookups.

                        None of these costs is remotely representative of how engines work, so trying to apply the results of performance tests against this hypothetical “spec comparable” engine to any other engine would not be remotely correct or usable.

                        We can look at what the non-optimizing interpreter from jsc does for the above:

                        [   0] enter              
                        [   1] get_scope          dst:loc4
                        [   3] mov                dst:loc5, src:loc4
                        [   6] check_traps        
                        [   7] mov                dst:loc6, src:<JSValue()>(const0)
                        [  10] mov                dst:loc6, src:Int32: 1(const1)
                        [  13] mov                dst:loc7, src:<JSValue()>(const0)
                        [  16] mov                dst:loc7, src:Int32: 0(const2)
                        [  19] jnless             lhs:loc7, rhs:arg1, targetLabel:19(->38)
                        [  23] loop_hint          
                        [  24] check_traps        
                        [  25] sub                dst:loc6, lhs:loc6, rhs:loc7, profileIndex:0, operandTypes:OperandTypes(126, 126)
                        [  31] inc                srcDst:loc7, profileIndex:0
                        [  34] jless              lhs:loc7, rhs:arg1, targetLabel:-11(->23)
                        [  38] ret                value:loc6
                        

                        You can see that there is no property lookup, and no allocation is needed to represent numbers in JSC.

                        Again, this is the absolute earliest layer of codegen, where the latency from source being downloaded to having started execution is absolutely critical.

                        We can do something a bit silly to try and show what the ES model is

                        function f(n) {
                          with ({n: n, x: 1}) {
                            with ({I: 0}) {
                              for (let I = 0; I < n; I++) {
                                with ({I: I}) {
                                  x = x - I
                                }
                              }
                            }
                            return x
                          }
                        }
                        

                        Now a “spec like” cost model would likely not have a significant performance impact from this horror, but the code the interpreter executes is too insane to fit in a comment, so I put it here: https://pastebin.com/REZpCSty

                        A simple test of the above shows that if we force JSC to operate without a JIT, the “ES like” example is approximately 50x slower. With the JIT enabled the first run of the insane case is around 78x slower than the normal case. The next time round - e.g. after the optimizing stages have kicked in - the absurd case is around 430x slower than the normal version. Logging the GCs, I found that the ES like path allocated enough objects to trigger 66 garbage collection cycles. The sane code version did not trigger a single GC.

                        Again the point of all this is not that the spec cost model is slower, it’s that it is very simply not relevant to real world performance.

                      1. 2

                        My (provocative) question would rather be: What are imperative languages?

                        In none of the so-called imperative languages do you really give concrete orders. You just declare the kind of result that you would like to see at the end, and the “interpreter” does it for you:

                        I would like to find in variable x the sum of y and z: x = y + z.

                        In a high-level language like Ruby the interpreter would subserviently do all the dirty work that is needed for this to happen. Making sure that all variables are compatible, finding the right kind of sum to apply, converting from int to float, dealing with overflow, changing to a Bignum if the result gets too big. I haven’t given any of these commands, I only expressed my desire for a result and the interpreter has provided me with a correct solution.

                        Actually, I have no idea if the interpreter did any of this. Maybe it had a lookup table. Or it ran a backtracking search algorithm to find the right solution. Maybe it’s Prolog all the way down.

                        Why wouldn’t that fit pretty much any definition of declarative programming?

                        The same thing can also be said of C code or, even, of assembly code. In MIPS assembly

                        addiu $t0, $s3, 1

                        is a declaration of the fact that I want to associate the name $t0 to the incremented value of what is currently referred to as $s3. There is no connection between this and what the hardware actually does. The hardware (or the emulator) is free to do whatever they want and to transform this code however they like (making use of the modern intricate web of renamed registers, out-of-order operations, microcoding…), as long as the result is correct.

                        In my view declarative and imperative are relative terms. A piece of code can be more declarative than another piece of code that produces the same output because it, for instance, focuses more on describing the intended result rather than spelling out all intermediate steps.

                        1. 4

                          I’m working on the idea that the class of declarative languages and the class of imperative languages can be defined in terms of what you need to do in order to write a denotational semantics for the language.

                          In a denotational semantics, you define a denotation function that maps a program (an abstract parse tree) onto its denotation or meaning. My idea is that the form of these denotation objects indicates the class of the language.

                          Consider a simple calculator language with numerals, arithmetic operators (+, -, *, /) and parentheses, nothing else. Programs denote numbers. The denotational semantics for the calculator maps each syntactically valid expression onto either a number, or an error (to model division by zero). This direct mapping onto numbers (which are the domain of this DSL) indicates a declarative language.
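
                          As a concrete sketch (my own names, with Maybe standing in for the error case), here is roughly what that denotation function looks like in Haskell:

                          data Expr = Lit Double | Add Expr Expr | Sub Expr Expr
                                    | Mul Expr Expr | Div Expr Expr

                          -- Programs denote numbers, or an error; no state in sight.
                          denote :: Expr -> Maybe Double   -- Nothing models division by zero
                          denote (Lit n)   = Just n
                          denote (Add x y) = (+) <$> denote x <*> denote y
                          denote (Sub x y) = (-) <$> denote x <*> denote y
                          denote (Mul x y) = (*) <$> denote x <*> denote y
                          denote (Div x y) = do
                            a <- denote x
                            b <- denote y
                            if b == 0 then Nothing else Just (a / b)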

                          In an imperative language, there is a global state that can be changed by assignment statements, function calls, etc. In the denotational semantics for an imperative language, the denotation function maps expressions and statements onto functions that take a state S (plus other arguments) and which also return the modified state. (This is what I meant by “state threading” in another comment.)

                          It’s possible for an imperative language to have a declarative subset. You could write a separate denotational semantics for this declarative subset. Programs written in this declarative subset would themselves be declarative.

                          1. 3

                            This was my conclusion after thinking about this thread: https://lobste.rs/s/aedphq/why_does_nobody_seem_know_what_imperative.

                            An imperative language is one where the primary denotation is state transition functions.

                            A declarative language’s primary denotation is the semantic domain that the language was created for. (And to retain common usage, that semantic domain shouldn’t be state transitions)

                            To expand on your arithmetic example:

                            Imperative
                            [[_]] : Map String Int -> Map String Int
                            [[ x; y ]](s) = [[ y ]]([[ x ]](s))
                            [[ %r = int n ]](s) = Map.insert r n s
                            [[ %r = add %x %y ]](s) = Map.insert r (Map.get x s + Map.get y s) s
                            [[ %r = mul %x %y ]](s) = Map.insert r (Map.get x s * Map.get y s) s
                            

                            This program represents (22 + 33) * 11:

                            %0 = int 22;
                            %1 = int 33;
                            %2 = add %0 %1;
                            %3 = int 11;
                            %result = mul %2 %3
                            
                            Declarative
                            [[_]] : Int
                            [[ int n ]] = n
                            [[ add x y ]] = [[ x ]] + [[ y ]]
                            [[ mul x y ]] = [[ x ]] * [[ y ]]
                            

                            This program also represents (22 + 33) * 11:

                            mul (add (int 22) (int 33)) (int 11)
                            
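                            To see the imperative half run, here’s a Haskell rendering (my names, just a sketch): the sequencing rule [[ x; y ]] becomes a left fold over statements, threading the state through.

                            import qualified Data.Map as Map

                            data Instr = IInt Int | IAdd String String | IMul String String

                            -- A program denotes a state transition function.
                            denote :: [(String, Instr)] -> Map.Map String Int -> Map.Map String Int
                            denote prog s0 = foldl step s0 prog
                              where
                                step s (r, IInt n)   = Map.insert r n s
                                step s (r, IAdd x y) = Map.insert r (s Map.! x + s Map.! y) s
                                step s (r, IMul x y) = Map.insert r (s Map.! x * s Map.! y) s

                            Running it on the program above, Map.lookup "result" (denote [("0", IInt 22), ("1", IInt 33), ("2", IAdd "0" "1"), ("3", IInt 11), ("result", IMul "2" "3")] Map.empty) gives Just 605.
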
                        1. 3

                          the presentation is dated and some of the productions Steele uses don’t pass muster

                          How so? I did not find the presentation lacking in any such respects.

                          1. 10

                            Defining person to be man or woman is obviously false and needlessly exclusionary. The talk is itself quite good and there’s a reason I’m working from it, but that production probably wouldn’t fly today.

                            1. 15

                              This is not false, exclusionary, or transphobic or any other sort of “problematic”. It was simply an illustrative example of defining a concept as a union of existing concepts, that everybody in the intended audience could easily understand. It is not, and is not pretending to, say anything at all about sex, gender, personhood, or anything else. Pretending otherwise is disingenuous at best.

                              1. 13

                                It’s worth noting that the production isn’t great, and Steele says so himself in the typed notes. There’s no need to accuse anyone of anything that the original author hasn’t already conceded.

                                From the typed version:

                                This choice, to start with just words of one syllable, was why I had to define the word “woman” but could take the word “man” for granted. I wanted to define “machine” in terms of the word “person”, and it seemed least hard to define “person” as “a man or a woman.” (I know, I know: this does not quite give the true core meaning of the word “person”; but, please, cut me some slack here.) By chance, the word “man” has one syllable and so is a primitive, but the word “woman” has two syllables and so I had to define it. In a language other than English, the words that mean “man” and “woman” might each have one syllable—or might each have two syllables, in which case one would have to take some other tack.

                                1. 2

                                  I’m flagging this reply because I find it kind of dismissive. I also think that style of rhetoric will invite a low standard of discussion.

                                  What do I see here? An abridged transcript:

                                  Commenter 1: Author, in your article you alluded to an opinion. Could you please elaborate?
                                  
                                  Author: Sure, here's my opinion.
                                  
                                  Commenter 2: Author, your opinion is wrong.
                                  

                                  I see this as penalising the author, who so far has contributed positively to this thread.

                                  If it’s very important to you to express this opinion, we can help you do it in a way that’s clearly positive sum.

                                  1. 2

                                    If we can’t disagree with someone, even with someone who has contributed positively, then we can’t have discourse. Your flag is against free discourse.

                                    1. 2

                                      I’m not trying to discourage disagreement itself; I’m trying to advocate for a style of discussion that leads to the flourishing of all participants.

                                      I clearly have not conveyed myself as well as I intended, and I don’t have the motivation to improve my original comment further. You should probably just ignore it.

                                2. 1

                                  Indeed. I wonder if we can deconstruct why that happened and fix it. I think that the main reason that the talk needs to define “man” and “woman” is so that it can talk about great men and great women of history. A modern reconstruction might go through the motions of humorously defining “person”, a two-syllable word, instead.

                              1. 11

                                A while back, I chanced upon a blog post that as of writing remains frustratingly ungooglable. It offered the hot take that being self-hosted is not necessarily the holy grail of a general purpose programming language, and that, if anything, it acts as a forcing function by which a language feature that makes it easier to implement a compiler or an interpreter ends up more prized and valued than one that does not. This assertion is built on a broader, yet often easily forgotten notion in discussions about PL design: there is a wide gulf between the viewpoint of a PL designer / implementer and the viewpoint of a PL user.

                                Now, a lot of folks might think, “pffft, that’s obviously true,” but what does this gulf actually do to the differing perceptions of complexity? We can look towards a similar parallel: linguists and language learners. A lot of language learners, especially ones that start out monolingual, often express frustration at linguistic features that exist in a language they’re learning but not in a language they already know. For example, many anglophones learning Spanish often bemoan the sheer number of verb conjugations that exist, in comparison to those of English; Sinophones learning English are often flabbergasted by not just plurality, but the number of irregular plural forms in English.

                                Linguists, on the other hand, are routinely exposed to linguistic features, even those of languages they don’t speak with fluency. So what happens when you task these two demographics with making a conlang, a constructed language? In the wild, I’ve observed that in their initial iterations, language learners tend to be tempted by the urge to create a language with fewer linguistic features, e.g. forgoing grammatical gender, cases, plurality, etc. That is, until you give them fables to translate into this conlang of theirs. Schleicher’s fable is one such example, and it’s an exercise to test whether a conlang has developed a big enough vocabulary and syntax to the point where it can express a short tale.

                                It turns out that to be capable of conveying at least a rudimentary human idea, a language, even a constructed one, must possess a minimum amount of complexity. To project this back into the land of programming: most users of a general purpose PL don’t use the majority of the surface area provided by said PL, and that usually even applies to the maintainers and spec writers of said PL.

                                Which brings us back to Steele’s point:

                                To summarize, Steele posits that a language must BOTH be “large enough” to be “useful”, and yet be “small enough” to learn. The conceit of the talk and core premise being that being of “small” size and being “simple” conflate.

                                For general purpose PLs, there is a surprisingly high floor for complexity. One can hide some of that by not offering certain language features, but the end result is akin to shoving stuff under a rug, with all the userland solutions bulging out incongruently. An example that comes to mind is that prior to Go 1.18, there was no way for a user-defined type to be iterable using the range clause of a for statement. This resulted in a proliferation of ForEach methods in many Go libraries that offered its own data structures.

                                  1. 1

                                    Oh this is excellent. Thanks!

                                1. 10

                                  Here’s the formalisation you’re looking for:

                                  Both objects and closures are modelled by a particular pattern involving existential types.

                                  Here’s the definition of closures:

                                  type Fn a b = exists e. {
                                    env : e,
                                    code : (e, a) -> b
                                  }
                                  
                                  apply : (Fn a b, a) -> b
                                  apply(f, x) = f.code(f.env, x)
                                  

                                  And here’s a “mutable counter” object that’s often used in OO examples:

                                  type Counter = exists s. {
                                    state : s,
                                    next : s -> (),
                                    read : s -> int
                                  }
                                  
                                  new : Counter
                                  new = {
                                    state = Ref.new(0),
                                    next = Ref.modify (+ 1), 
                                    read = Ref.get
                                  }
                                  
                                  next : Counter -> ()
                                  next c = c.next c.state
                                  
                                  read : Counter -> int
                                  read c = c.read c.state
                                  
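                                  If it helps to see this outside of pseudocode, here is a rough rendering in Haskell (a sketch assuming IORef for the counter’s state, not a canonical encoding):

                                  {-# LANGUAGE ExistentialQuantification #-}
                                  import Data.IORef

                                  -- A closure: a hidden environment plus code that consumes it.
                                  data Fn a b = forall e. Fn e ((e, a) -> b)

                                  apply :: (Fn a b, a) -> b
                                  apply (Fn env code, x) = code (env, x)

                                  -- The counter object: hidden state plus methods over that state.
                                  data Counter = forall s. Counter s (s -> IO ()) (s -> IO Int)

                                  newCounter :: IO Counter
                                  newCounter = do
                                    r <- newIORef 0
                                    pure (Counter r (\ref -> modifyIORef ref (+ 1)) readIORef)

                                  next :: Counter -> IO ()
                                  next (Counter s nextM _) = nextM s

                                  readC :: Counter -> IO Int
                                  readC (Counter s _ readM) = readM s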

                                  The pattern has a generic name: the greatest fixed point of a functor (I will expand on this by request). Types that can be represented by the greatest fixed point of a functor are called coinductive types, or codatatypes.

                                  Objects and closures are both codatatypes.

                                  1. 3

                                    the greatest fixed point of a functor

                                    So I know what a (greatest) fixed point of a function is. I know an analytical way to calculate it and, as is often necessary, a numerical way to approximate it. What I’ve never managed to figure out is what the relationship between that kind of fixed point and the fixed point mentioned here is. What is the function here? How do you verify this is the greatest fixed point? And why does it even matter that it is a fixed point in the first place?

                                    1. 2

                                      The fixed point of a function f : A -> A is some x ∈ A such that f(x) = x.

                                      Category theory generalised this definition by replacing “set” with “category”, “element” with “object”, “function” with “functor”, and “equality” with “isomorphism”.

                                      So a fixed point of a functor f : C -> C is some object x of C such that f(x) is isomorphic to x.

                                      These fixed points form a category, and the greatest fixed point of a functor is the terminal object in that category. I won’t go further in relating this to set/order theory beyond saying that “top” elements are often analogous to terminal objects.

                                      To show that a type is the greatest fixed point of a functor, show that for every fixed point of this functor, there’s a unique morphism from that fixed point to the greatest fixed point.

                                      The greatest fixed point of functors on Type looks like this: type GFP (f : Type -> Type) = exists x. { seed : x, unfold : x -> f x }. Proofs are left to the reader :)

                                      We can show that GFP f is isomorphic to Fn a b for a particular choice of f. The same goes for Counter. In general, each codatatype is characterised by a functor.

                                      For closures, that functor is type ClosureF a b x = a -> b, and for the counter it’s type CounterF x = { next : (), read : int }.

                                      The fixed point machinery is trivialised in the examples I gave, because they’re not self-referential. An infinite stream is an example of a “corecursive” (self-referential) codatatype. It is characterised by this functor: type StreamF a x = { head : a, tail : x }.
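
                                      To make the stream example concrete, here’s a sketch of GFP and StreamF in Haskell (names mine):

                                      {-# LANGUAGE ExistentialQuantification #-}

                                      -- GFP f: a hidden seed plus a way to unfold it one layer at a time.
                                      data GFP f = forall x. GFP x (x -> f x)

                                      data StreamF a x = StreamF a x   -- head and tail

                                      type Stream a = GFP (StreamF a)

                                      -- The stream of naturals: the seed is the current value.
                                      nats :: Stream Int
                                      nats = GFP 0 (\n -> StreamF n (n + 1))

                                      -- Consumers can only observe by unfolding, finitely many times.
                                      takeS :: Int -> Stream a -> [a]
                                      takeS 0 _ = []
                                      takeS n (GFP s u) = case u s of
                                        StreamF h t -> h : takeS (n - 1) (GFP t u)

                                      takeS 5 nats yields [0,1,2,3,4], and the only access to the stream is through the unfold - which is what makes it codata.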

                                      At this point I’m running out of steam. I’ll finish by saying: all this theory (algebraic + coalgebraic semantics) doesn’t really matter to programmers. But codata is a concept that programmers use every day without realising. The next step is to provide explicit access to that concept in programming languages.

                                      Other resources:

                                      • Adámek, J., & Koubek, V. (1979). Least fixed point of a functor. Journal of Computer and System Sciences, 19(2), 163-178. - explains the least fixed point of a functor, which is the dual to the greatest fixed point.
                                      • Haskell’s recursion-schemes package - fixed points, folds, unfolds, and fusion
                                      1. 1

                                        OK, thank you, that does clarify some things. What are the properties of the GFP of a functor that make the Type it represents interesting/useful? Are there examples of someone coming upon some functor and, perhaps unexpectedly, discovering its GFP is a useful Type? So what I’m trying to understand with these questions is: why is it interesting that certain Types are GFPs of certain functors? What can one do with that knowledge? If it enables us to sometimes discover/construct new useful Types, then that would finally explain to me why we would want to know this.

                                    2. 1

                                      The pattern has a generic name: the greatest fixed point of a functor (I will expand on this by request).

                                      Would be great to read more on this; while the concept itself is intuitively clear to me, I’d very much like to hear what people can get out of a formalized version (pointers to reading material welcome).

                                      1. 1

                                        Here’s a response I wrote for another poster: https://lobste.rs/s/ajgdnb/illustrating_duality_closures_objects#c_jgwwru

                                        Hopefully you also find it helpful.

                                        1. 1

                                          I’m not sure about the generic form of the pattern, but Codata in Action is all about the connection between objects and codata and so might be interesting to look at. I’ve not had the chance to sit down and digest it personally though. Looking through it, the earlier sections seem reasonably accessible. There’s some more detailed technical parts later on, but if you’re not comfortable with that you can always jump off there and perhaps skip to the conclusion.

                                        2. 1

                                          Thanks so much for explaining this! I really appreciate it.

                                        1. 1

                                          For someone without the necessary background, what exactly are abstract machines here? The wiki seems to suggest it is just a lower level language that is higher than the end goal (machine code) but lower than the current level. E.g. opcodes for a VM. Is this the abstract machine referred to here?

                                          1. 2

                                            I would guess so.

                                            The problem is that every programming language, high or low level, has a corresponding abstract machine. So to say that “abstract machines don’t do much well” is misleading. If you compile to x86_64, you’re not avoiding abstract machines, you’re just targeting a different abstract machine.

                                            I think the authors who are “dissatisfied by abstract machines” are just dissatisfied with their choice of intermediate representation.

                                            1. 2

                                              I think abstract machines here mean something different from intermediate representation. Note that SSA is mentioned as an alternative to abstract machines, but SSA is an intermediate representation.

                                              The next to the last slide in Leroy’s presentation suggests that abstract machines here are useful for implementing bytecode interpreters, so vrthra’s “opcodes for a VM”. I think what is being said is that representations suitable for implementing bytecode interpreters are often not the best intermediate representations for a native code compiler. When you state it that way, it is kind of obvious: I mean, why would they be? Compilers are not interpreters, after all.

                                              1. 1

                                                For any given Turing-complete language, there are infinitely many abstract machines. However, I’m not sure that any of them are in natural correspondence. For example, what’s the abstract machine for Wang tiling? I can imagine several abstract machines, but I don’t see why one of them is naturally the corresponding abstract machine.

                                                This isn’t a facile objection. Consider Turing machines, and also consider Post’s correspondence problem for Turing machines. Which is the corresponding abstract machine: the Turing machine itself, or Post’s machine which assembles toy bricks? Is the deterministic approach the natural approach, or is non-determinism more natural?

                                              2. 1

                                                An abstract machine is a machine that isn’t implemented in hardware. It is slower to execute than hardware, since it must be emulated. It is also more flexible than hardware, allowing reprogrammable behavior at runtime.

                                                If an abstract machine is to be used in a compiler, then it needs to be between the user and the hardware. The idea is that the user’s code targets the abstract machine, and then the abstract machine is either implemented in software or its behaviors are compiled to native code for the target hardware. The abstract machine forms a “narrow waist”, a point in common between compilers to different hardware, just like an intermediate representation.

                                                Abstract machines are also found outside compilers. For example, emulators are abstract machines.
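
                                                As a toy illustration of that narrow waist (all names invented), in Haskell: a source expression compiles to opcodes for a small stack machine, and a bytecode interpreter is one possible software implementation of that machine.

                                                data Expr = Lit Int | Add Expr Expr

                                                data Op = Push Int | AddOp   -- the abstract machine's instruction set

                                                compile :: Expr -> [Op]
                                                compile (Lit n)   = [Push n]
                                                compile (Add x y) = compile x ++ compile y ++ [AddOp]

                                                -- The machine "implemented in software": a bytecode interpreter.
                                                run :: [Op] -> [Int] -> Int
                                                run []             (v : _)     = v
                                                run (Push n : ops) stack       = run ops (n : stack)
                                                run (AddOp : ops)  (b : a : s) = run ops (a + b : s)
                                                run _ _ = error "stack underflow"

                                                run (compile (Add (Lit 1) (Lit 2))) [] evaluates to 3; compiling the same [Op] stream to native code would be the other path through the waist.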

                                              1. 5

                                                I think unification-based type inference is more than a hack. It addresses a fundamental weakness of traditional bidirectional typing systems. I think that a combination of the two - a bidirectional typing system with unification - will be the best approach.

                                                The weakness is that bidirectional typing systems do not talk about “partially-known” types.

                                                Bidirectional typing systems require “complete” types, but the inference rules of the lambda calculus fundamentally give us partial information about types.

                                                For example, the inference rule for lambdas:

                                                Ctx, x : A |- e : B
                                                -----------------------
                                                Ctx |- \x -> e : A -> B
                                                

                                                is translated to this rule in a bidirectional typing system (where <= means “check” and => means “infer”):

                                                Ctx, x : A |- e <= B
                                                ------------------------
                                                Ctx |- \x -> e <= A -> B
                                                

                                                While it’s true that upon examining \x -> e we don’t know A or B, we’re not completely in the dark. At this point we know we’re looking at a function type, regardless of its inputs and outputs.

                                                So it would be better to say that lambdas synthesise a partial type:

                                                Ctx, x : ?0 |- e => B
                                                -------------------------
                                                Ctx |- \x -> e => ?0 -> B
                                                

                                                Mandating that “introduction forms are checked” is overly restrictive, which is why it comes with the (necessary) caveat: “but you can add synthesis rules for cases where the types are obvious”. Saying that “introduction forms synthesise partial types” might be a monotonic improvement.
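
                                                Here’s what that looks like in a toy checker in Haskell (my own names; a sketch, not a full implementation), where a partial type is one containing unification metavariables:

                                                import Control.Monad.State (State, evalState, get, put)

                                                data Type
                                                  = TMeta Int        -- a metavariable like ?0
                                                  | TArr Type Type   -- A -> B
                                                  deriving Show

                                                data Expr = Var String | Lam String Expr

                                                type Ctx = [(String, Type)]

                                                freshMeta :: State Int Type
                                                freshMeta = do n <- get; put (n + 1); pure (TMeta n)

                                                -- Synthesis (=>). A lambda doesn't know its argument type, but
                                                -- it does know it has *some* function type ?n -> B.
                                                infer :: Ctx -> Expr -> State Int Type
                                                infer ctx (Var x) =
                                                  maybe (error ("unbound " ++ x)) pure (lookup x ctx)
                                                infer ctx (Lam x e) = do
                                                  a <- freshMeta                -- ?0
                                                  b <- infer ((x, a) : ctx) e   -- Ctx, x : ?0 |- e => B
                                                  pure (TArr a b)               -- Ctx |- \x -> e => ?0 -> B

                                                evalState (infer [] (Lam "x" (Var "x"))) 0 synthesises TArr (TMeta 0) (TMeta 0), i.e. ?0 -> ?0, ready for later unification.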

                                                1. 1

                                                  wai is an example of this pattern “from the inside”.

                                                  Service corresponds to Application, and Filter to Middleware.


                                                  Unrelated: I don’t see a functor here.

                                                  1. 10

                                                    Call me an optimist, but we have achieved amazing things in the last 40 years. Many of them are indirect or not mainstream, but still amazing.

                                                    The first indirect thing we have is really strong type systems. Every single useful feature in type systems came out of PL research, even if we don’t all use Standard ML or Haskell.

                                                    An example of an off-mainstream amazing achievement is seL4. Ok, it’s still a lot of work to formally verify a relatively small microkernel, but we did it.

                                                    Project Everest has gotten verified cryptographic software into mainstream browsers.

                                                    TLA+ is getting used at Amazon and Elasticsearch.

                                                    Tools like Jepsen apply property-based testing to real world projects.

                                                    There are practical successes everywhere, even if the average project doesn’t use research ideas directly.

                                                    1. 4

                                                      I think many of those fields are not part of the author’s definition of “software engineering research”.

                                                      I’d be surprised if they tried to argue that we have made very little progress in programming languages, cryptography, or formal methods.

                                                      1. 2

                                                        Right. What other research fields are there really? Almost everything boils down to PLs or formal methods in some way.

                                                        1. 7

                                                          Databases, engineering practices, distributed systems, CS education, most things involving performance, defect detection, version control, production surveys? Those are just the research fields I recently read papers in.

                                                          1. 1

                                                            Nice, I was honestly having trouble thinking outside of the research that I tend to look at.

                                                            1. 5

                                                              Also, “software engineering” is a specific research area about the methods by which software is produced.

                                                    1. 5

                                                      fathom is another project to watch in this space.

                                                      1. 2

                                                        Thanks for the mention, I’m working on Fathom right now!

                                                        Responding to some of the criticisms of Kaitai:

                                                        parsing a kaitai schema is significantly more involved than parsing an openapi schema. one contributing factor is that kaitai features an embedded expression language, allowing almost any attribute to be bound to expressions involving other fields, rather than simple constants. this makes it hard for an ecosystem to thrive because of a high technical barrier for entry.

                                                        The tricky thing about real-world binary formats is just how interwoven they are with programming logic: often they start off implemented as some random bit of C code and then we have to go back and pick up the pieces in order to attempt to describe them declaratively and safely (and ideally in a way can be checked using a proof assistant in the future). It’s a massive pain.

                                                        the question of serialization remains; however, the technical challenges seem largely connected to the use of a feature in kaitai called “instances” rather than “sequences”. this feature allows defining a field at a fixed offset, rather than defining it in context to the field before or after. this feature is obviously very useful to reverse engineers, who may understand only a certain chunk of a binary blob and need to annotate its fields before moving on, but it wouldn’t be a desirable feature in a schema that serves as the source definition for a binary format.

                                                        This is really hard to avoid if you want to deal with real world formats. Formats like OpenType are sprawling, with offsets and pointers to other parts of the binary data, sometimes with mutual dependencies between offset data.

                                                        Ultimately it sounds like the author really wants a restricted language for a restricted subset of new binary formats. If you have the luxury of writing your own binary format from scratch, then metaformats like Protocol Buffers or Cap’n Proto could offer a more restricted approach that might make this kind of thing easier. I’m not sure how they fare for binary files though, as I’m pretty sure they are more designed to describe streaming data.

                                                        While I’m here, I’d also like to call out some additional tools that I think don’t get enough airtime:

                                                        • binary-data: DSL for parsing binary data for Dylan
                                                        • EverParse: parser tools that extract an ‘RFC’ format to verified F* code and ultimately C, used as part of Project Everest (formally verified TLS stack by Inria and Microsoft) (paper)
                                                        • Narcissus: a DSL embedded in Coq for formally verified descriptions of binary data formats (paper)

                                                        More interesting tools and languages can be found at dloss/binary-parsing.

                                                      1. 2

                                                        Warning: I don’t know if this is correct or meaningful. I found a mistake this morning, which suggests that it has degrees of wrongness, but that does not imply degrees of truth.

                                                        1. 3

                                                          As one data point: I found it hard to make any sense of.

                                                        1. 1

                                                          I really wish there was the ability to have lexically-scoped type synonyms. So many gnarly type signatures would be way more comprehensible if I could use a name for parts of them. As it stands it feels almost like assembly where there are only global variables.

                                                          1. 1

                                                            Do any examples come to mind where you’d like to use this feature?

                                                            1. 2

                                                              I think when using Servant (https://docs.servant.dev/en/stable/tutorial/Client.html) it would have come in handy. You can get pretty far with type synonyms, but it’s a global namespace which has all the classic problems that come with not being able to scope names
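
                                                              For illustration, a made-up Servant-ish example (not from a real codebase): every fragment you want to name must become a top-level synonym, visible to the whole module, even when it only matters inside one API type.

                                                              {-# LANGUAGE DataKinds, TypeOperators #-}
                                                              import Servant

                                                              data User = User

                                                              -- All global, though only meaningful inside API:
                                                              type UserId     = Capture "userId" Int
                                                              type GetUser    = UserId :> Get '[JSON] User
                                                              type DeleteUser = UserId :> Delete '[JSON] NoContent

                                                              type API = "users" :> (GetUser :<|> DeleteUser)

                                                              Lexically-scoped synonyms would let UserId and friends live inside the definition of API itself.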

                                                          1. 2

                                                            The relationship between tools and workflows

                                                            The talk of ‘workflows’ reminds me of The Tao of Hashicorp.

                                                            What do they mean by ‘medium’?

                                                            ‘Tools’, ‘workflows’, and ‘context’ and their relationships made sense; it was fairly easy for me to see how my experience maps onto these concepts. Not so for ‘medium’.

                                                            The author begins with the following:

                                                            We talk in general terms about Tools for Thought, and I think that term is too vague to be useful.

                                                            I also found the author’s explanation of ‘medium’ too vague to be useful. I think the article would still make sense if all the references to ‘medium’ were removed.

                                                            I think examples would help me here. The author says:

                                                            There are actually many different medium for thinking.

                                                            Such as? A sentence like that should definitely be followed by examples!

                                                            Adding to my confusion, the author seems to contradict themself in the next paragraph:

                                                            Mediums for thinking are in turn mediums for communication and there are actually quite few [mediums for thinking and communication] in all of human history.

                                                            Are there many mediums for thinking, or few mediums for thinking?

                                                            1. 2

                                                              In my opinion, you use “code editor” to equivocate between two different tools. One is for programmers, and it facilitates programming. We usually call these IDEs. The other tool is for product designers, and facilitates product development without programming. This is closer to the prototyping or WYSIWYG tools of today.

                                                              I think this equivocation is a disadvantage when it comes to answering some of your initial questions:

                                                              What metric can we even use to measure the perfect code editor? How will we know if and when we have it? Are we close to reaching that point?

                                                              You can’t hold IntelliJ and Figma to the same standard, nor VSCode and Squarespace.

                                                              This is why the problem statement appears contradictory: two different solutions to two different problems are compared as if they were of the same kind.

                                                              1. 3

                                                                I haven’t personally experienced it yet, but I think that this style of focus is ultimately needed for a successful project. In my opinion, it’s better to excellently serve a few people’s needs than to mediocrely serve many people’s needs. Also, trying to design up front for so many hypothetical use-cases is a good way to not finish the project.