Threads for ianthehenry

  1. 6

    I mean, what human can interpret *++argv?!

    I now believe the reason is fairly simple: in essence, OpenBSD’s style squeezes more code onto the screen.

    I don’t see how one can come to such a conclusion based on the echo source code shown as an example: there are blank lines separating logically related lines of code; the variable name (nflag) could easily have been called just n or perhaps nf (n is traditionally used for size/length/count); and so on. Overall, I found the code quite easy to follow, and there is definitely much denser code out there, where you will hardly see a blank line.

    But I think the first quote hints at the real reason (IMO, anyway): the example is easy to follow because it doesn’t shy away from idioms that pack a lot of meaning into very little code.

    1. 1

      J/APL, and the golfing languages they inspired, come to mind when thinking about the upper limits of terseness.

      1. 9

        I was researching array languages recently and came across this interesting quote in the README of an open-source K implementation:

        A note on the unusual style of C code: It attempts to replicate the style of Arthur Whitney. A striking original example is contained in file https://github.com/tavmem/buddy/blob/master/a/b.c. There are 2 versions of the buddy memory allocation system. The first is in 11 lines written by Whitney. The second is in well documented traditional C (almost 750 lines).

        The example is, indeed, striking.

        I MZ[31]={1};Z I *MM[31];mi(){MZ[7]=MZ[13]=MZ[19]=MZ[25]=2;DO(30,MZ[i+1]+=MZ[i]*2)}
        Z mmr(n,i){if(i<18)i=18;R err(2,n),tmp((MZ[i]+2)<<2),1;} /* Dan MZ[i+1]? */
        C *mab(m)unsigned m;{I *p,*r,i=2,n=m;for(n=(n+3)>>4;n;n>>=1)++i;
         do{if(p=MM[i])R MM[i]=(I*)*p,(C*)p;for(n=i;n<30;)if(p=MM[++n]){
          for(MM[n]=(I*)*p,p[-1]=i;i<n;)r=p+MZ[--n],MM[r[-1]=n]=r,*r=0;R(C*)p;}
          if(mc()>=i)continue;} while(mmr(m,i));}
        I *ma(m){R(I*)mab(m<<2);}
        mf(p)I *p;{I i=p[-1];*p=(I)MM[i],MM[i]=p;}
        mb(p,n)I *p;{I i=31,j;for(n-=2,++p;i--;)if(j=MZ[i],j<=n)n-=j,*p=i,mf(p+1),p+=j;}
        mc(){R 0;}
        I *mz(){Z I b[31];I *p;DO(31,for(b[i]=0,p=MM[i];p;p=(I*)*p)++b[i])R b;}
        

        Although the rest of the code – the Whitney-inspired code – is equally baffling to my eyes.

        I suppose that, in order to invent or implement K, you have to place a pretty high value on terseness.

        1. 3

          This is a rather famous block of code, but each time I see it I feel further away from any precise conclusions about it. It seems clear that the authors of J/K/etc. are comfortable with this style and our ability or inability to read it is pretty irrelevant since we’re not likely to find ourselves maintaining it or patching it.

          The whole APL family sets up different priorities than the rest of computing. For instance they like “idioms” more than functional abstraction, because apparently it’s both difficult to name some of the idioms in a useful way, and when you do, the name tends to be larger than the idiom itself. Also, at least J (but I believe most APL family languages) makes liberal use of “special code” that notices certain idioms and has optimized code to handle those cases.

          I have come to see it as a completely different coding “civilization,” and I won’t pass judgement on it because I have seen how much work it would take to reorient myself to it, and I just won’t ever find the time to do so.

          1. 1

            Most programming languages have idioms, and APL does not lack functional abstractions. “Special code” seems rather similar to the open-coded algorithms based on simple graphical pattern matching which can be found in most compilers.

        2. 3

          I came in to say….

          Gradually I started to realise that not only could I cope with OpenBSD’s terse code style, but it was actually easier for me to read it than code I’d written myself.

          If you want the best teacher for this lesson, learn APL or J or K or BQN, and keep learning until that code no longer seems dense, but simply “just the right amount to express what I want”.

      1. 5

        There’s a convention in C – and even some other languages, I’ve heard – to call loop indices i. Would it be easier to read code that used variables named index instead?

        I’d hazard that the answer might be yes, for a beginner: if it’s your first time seeing a for loop, having to pick up the local slang at the same time you’re wrapping your head around the concept of iteration might be distracting noise.

        But you can have both: you can present non-idiomatic code to beginners, who are already having enough trouble parsing a brand new syntax and understanding brand new concepts, and then explain common conventions afterwards.

        So to repeat: Haskell code is too short. In particular, a ton of the example code in various resources designed to teach Haskell to beginners is too short.

        I agree with the second part of this, but the generalization sounds like “You should be writing for (int index = 0; ...).”

        (But is the term “index” any more meaningful, to someone who is encountering index-based iteration for the first time? Perhaps that term is just as arbitrary as i, to a beginner.)

        map :: (a -> b) -> [a] -> [b]
        

        Does this clarify that x above is a thing while xs is a list?

        This is interesting to me: you know xs is a list because it appears on the right-hand side of the : constructor. You don’t need a type annotation for that; it can’t not be a list. But of course you can’t know that if you aren’t used to reading Haskell.

        map f (element:list) = (f element) : map f list
        

        I don’t think this is really much better than x:xs. Sure, list is a list. But lots of things are lists. It doesn’t tell you anything about this list, except its type – which the type signature already tells you.

        map f (head:tail) = (f head) : map f tail
        

        Do the terms “head” and “tail” make more sense? That’s probably another learned convention.

        map f (car:cdr) = (f car) : map f cdr
        

        Crystal clear.

        map f (first:rest) = (f first) : map f rest
        

        I think that’s how I would write it.

        map f (firstElement:restOfTheElements) = (f firstElement) : map f restOfTheElements
        

        Is “element” a learned term, or an intuitive one for native English speakers? It’s hard to remember.

        On single letter module imports: this feels like a very different sort of argument than “element is more clear than x.” Single-letter module imports make code harder to read precisely because they’re not a convention: if every file has its own set of custom module abbreviations, I have to learn what all of them are supposed to mean before I can read the file. (I think there’s an argument for consistent abbreviations used across a codebase, which have a one-time cost associated with learning them, but I don’t think that’s what the article is raging against.)

        I was interested to see that the blog is generated in Hakyll. I wonder if that was the case when the article was posted (nearly a year ago). Seems not.

        1. 8

          When I was being taught Haskell, I would see functions named “sht” or “rp” or some other combination of up to 4 letters everywhere. I didn’t like this at all, even way past the “beginner” stage, because it was impossible to decipher what the heck a function or variable was. So I started writing code with descriptive variable names: withNoDecreasing, recordFixpoints, and so on, and I write code like that to this day.

          But what I notice is that it becomes increasingly hard to see patterns in your code when the variable names get very long. Things that would look very familiar, like f <$> m1 <*> m2, would start getting really “stretched out”, to the point where maybe a part of the expression is on the next line, or put into a let expression to avoid making the line 200 characters long. Now it’s not so clear what’s going on; and now, too, you’re pushed to assign names to previously “intermediate” computations.

          My point is, I didn’t just learn conventions such as x and xs, a and b, or m and f, but I also learned the “shape” of common types of computations (like a function being applied to two arguments within an applicative functor). Renaming the variables doesn’t make the code more readable to me for the reasons you described, but also because I can’t recognize these “shapes”.

          1. 2

            Now it’s not so clear what’s going on; and now, too, you’re pushed to assign names to previously “intermediate” computations.

            And this is usually a good thing. Decompose and abstract until it fits your screen and brain.

          2. 5

            I like the way you’ve thought through this, kudos! Personally, I agree that first:rest is probably the most intuitive syntax, and has the benefit of being relatively short as well!

            For index-based iteration, I try to use names that have meaning within the problem domain. For example, rowIndex and columnIndex if I’m iterating a matrix generically, or personIndex for iterating a list of Person records (or whatever). Part of the value I see here, though, is that it is harder to mix the indexes up in a nested loop. It also facilitates moving code around as there’s less chance of accidentally shadowing i.

            1. 3

              Haskell conventionally abbreviates index to ix, which I rather like. If I have a list foos, then I might have an index into that list called fooIx. Reads much better than i and then j to me.

              1. 1

                I see an actual problem if you use both i and j in nested loops - those are so easily mixed up at a glance.

              1. 3

                What/where is OCaml used? What is the language like, compared to “mainstream” things? I think I sometimes see mentions of OCaml on my Linux day-to-day, but I don’t know much beyond that.

                Is it something used a lot, or is it very niche (kinda like Rust vs Zig maybe)?

                1. 5

                  Comparing it to more mainstream things: it’s a bit like Go, in that it’s an ahead-of-time compiled language that can produce fast, native binaries, but with a runtime that includes a garbage collector.

                  It’s a strictly evaluated imperative language, and you can read the code and predict quite accurately what instructions the compiler will produce. You can also write in a pretty high-level functional style: OCaml has a very wide range from “high level” to “low level” coding. You basically never have to call out to C in order to do something “fast enough,” as long as you avoid heap allocations (the GC only runs in response to heap allocation, so you can get a very strongly typed language without any runtime overhead if you’re careful).
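
                  To make that range concrete, here’s a rough sketch of both ends of it (toy functions of my own, not from any particular library):

                  (* high level: a functional fold over a list *)
                  let sum_list xs = List.fold_left ( + ) 0 xs

                  (* low level: an imperative loop over an int array with a mutable
                     accumulator; the loop body does no heap allocation *)
                  let sum_array a =
                    let total = ref 0 in
                    for i = 0 to Array.length a - 1 do
                      total := !total + a.(i)
                    done;
                    !total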

                  It also has a type system reminiscent of Haskell’s, which means you can make massive changes to large codebases pretty fearlessly. But — unlike Haskell — OCaml supports implicit side effects (like most languages), so it doesn’t have much of a learning curve. It also lacks typeclasses, and most of the other fancy type system things that make Haskell tricky to learn.

                  OCaml also has a shockingly good JavaScript backend — you can compile OCaml code to JS and use that to share code between client and server, if you’re doing web stuff. Autogenerate correctly typed APIs and stuff (if, you know, your only clients are also using OCaml). I don’t know any other language that comes close here.

                  Subjectively: OCaml is a very ugly language, with lots of weird syntax and strange language warts. But if you can look past that, it’s a very practical language. It’s not fun the way that Haskell is, but it’s old and stable and works well, and the type system is the best you’re going to find in an imperative language. (Reason — an attempt to provide an alternate syntax for the OCaml compiler — was disappointingly incomplete the last time I checked. Don’t know if it’s still a thing.)

                  But the community is very small. Jane Street publishes some very thorough libraries covering all the basic stuff — containers, async scheduling, system IO, etc — but coverage for what you might think of as basic necessities (especially if you’re doing web development) is a lot more spotty.

                  So it occupies sort of a weird place in the world. It’s a solid, conservative, relatively performant language. But you probably don’t want to build a product on top of it, because hiring will be pretty expensive. And I don’t think it’s particularly interesting from a mind-expanding point of view — Haskell has a lot more bang for the buck there.

                  1. 2

                    It’s a strictly evaluated imperative language

                    What’s your definition of imperative? If you limit “functional” to “pure”, then it’s quite against the mainstream opinion that classifies Scheme and often even CommonLisp as “functional”. Presence of mutable values does not make a language non-functional—absence of support for first-class functions and primitives for passing immutable values between them does.

                    Most real-life OCaml code, at least in public repositories, is as functional as typical Haskell, i.e. centered around passing immutable state around, with wide use of benign side effects (like logging) and occasional use of mutable values when it’s required for simplicity or efficiency.

                    (For the uninitiated: you need to declare mutable variables or record fields explicitly; by default everything is immutable, unlike in Scheme.)
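
                    A quick flavor of that (my own toy example):

                    (* immutable by default *)
                    let x = 1

                    (* mutation has to be opted into explicitly, per binding or per field *)
                    let counter = ref 0
                    let () = incr counter

                    type point = { mutable x_pos : float; y_pos : float }
                    let p = { x_pos = 0.0; y_pos = 1.0 }
                    let () = p.x_pos <- 2.0  (* fine: x_pos is declared mutable *)
                    (* p.y_pos <- 2.0 would be rejected by the compiler *)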

                    It also lacks typeclasses, and most of the other fancy type system things that make Haskell tricky to learn.

                    What you aren’t saying, and what someone who doesn’t know it yet may want to hear, is that the lack of type classes makes type inference decidable. With any “normal” (non-GADT) types, the compiler will infer the types of any value/function automatically. There are no situations where adding an annotation will make ill-typed code well-typed. The only reason to add type annotations is for humans, but humans can just as well view them in the editor (via Merlin integration).
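
                    For instance, a definition like this gets its fully general type with no annotations anywhere (toy code):

                    (* inferred as 'a list -> int; no annotations needed *)
                    let rec length = function
                      | [] -> 0
                      | _ :: tl -> 1 + length tl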

                    Well, module interfaces do need type annotations. Which is another thing you seem to dismiss: the module system. Functors provide ad hoc polymorphism when it’s required, and their expressive power is greater. My recent use case was to provide a pluggable calendar lib dependency for a TOML library. OCaml is the only production-ready language that allows anything like that.
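
                    Roughly the shape of that, with a made-up signature for illustration (not the actual interface of that TOML library):

                    (* the library declares what it needs from some calendar implementation... *)
                    module type Calendar = sig
                      type t
                      val of_string : string -> t option
                      val to_string : t -> string
                    end

                    (* ...and is written as a functor over it, so any conforming module
                       can be plugged in by the user of the library *)
                    module Make (C : Calendar) = struct
                      let roundtrip s =
                        match C.of_string s with
                        | Some d -> Some (C.to_string d)
                        | None -> None
                    end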

                    But — unlike Haskell — OCaml supports implicit side effects (like most languages), so it doesn’t have much of a learning curve. […] I don’t think it’s particularly interesting from a mind-expanding point of view — Haskell has a lot more bang for the buck there.

                    Not mind-expanding for someone who has already seen dependently-typed languages, for sure. For someone with only a Go or Python background, it’s going to be as mind-blowing as Haskell, or any actually functional language for that matter.

                    Technically, it’s possible to write OCaml as if it was Pascal, but it’s neither what people actually do nor something encouraged by the standard library. People will also run into monads pretty soon, whether a built-in one (Option, Result) or in concurrency libs.
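
                    For example, chaining with Option.bind shows up almost immediately in ordinary code (a toy example):

                    (* chaining computations that can fail, without exceptions *)
                    let safe_div x y = if y = 0 then None else Some (x / y)

                    let result =
                      Option.bind (safe_div 10 2) (fun q ->
                        Option.bind (safe_div 100 q) (fun r ->
                          Some (q + r)))

                    (* result = Some 25 *)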

                    Jane Street publishes some very thorough libraries covering all the basic stuff — containers, async scheduling, system IO

                    My impression is that the last time you looked was quite a while ago. Sure they do, but for each of those there’s at least one non-JaneStreet alternative; in the case of Lwt, one that’s more popular than the JaneStreet one. Compare the reverse dependencies of Async vs Lwt.

                    Sure, that community is still smaller than those of many other languages, but it’s far from “you will never find a lib you need”.

                    1. 1

                      What’s your definition of imperative? If you limit “functional” to “pure”, then it’s quite against the mainstream opinion that classifies Scheme and often even CommonLisp as “functional”.

                      By imperative I mean that OCaml has statements that are executed in order, as opposed to something like Prolog or APL or a (primarily!) expression-oriented language like Haskell. I avoided calling it a “functional language” because I don’t know what that term means to the person I was replying to. I would describe OCaml as functional as well. I don’t think the label is mutually exclusive with imperative.

                      What you aren’t saying and what someone who doesn’t know it yet may want to hear is that lack of type classes makes type inference decidable.

                      If this tips anyone over the fence into learning OCaml, I will be delightfully surprised :)

                      Which is another thing you seem to dismiss: the module system. Functors provide ad hoc polymorphism when it’s required, and their expressive power is greater.

                      I think you’re reading more into my comment than is really there. I was trying to give a rough overview of “what is OCaml” to someone who does not know OCaml. The module system is neat. I’m not dismissing it. Typing on a phone takes a long time.

                      Not mind-expanding for someone who already saw dependently-typed languages for sure. For someone with only Go or Python background, it’s going to be as mind-blowing as Haskell, or any actually functional language for that matter.

                      Yeah, this is fair. If the choice is between OCaml or nothing, definitely study OCaml! But Haskell has a larger community, a lot more learning resources, and will force you to think differently in more ways than OCaml. Which makes it hard to recommend OCaml to someone who is functional-curious, as much as I personally like the language.

                      My impression is that the last time you looked was quite a while ago. Sure they do, but for each of those there’s at least one non-JaneStreet alternative; in the case of Lwt, one that’s more popular than the JaneStreet one. Compare the reverse dependencies of Async vs Lwt.

                      From this response I get the impression that you read my comment as “the only libraries that exist are the ones Jane Street published.” What I meant was to assure the person I was replying to that OCaml has a healthy set of basic libraries available, with an existential proof of that statement.

                      Sure, that community is still smaller than those of many other languages, but it’s far from “you will never find a lib you need”.

                      We are in complete agreement here.

                    2. 2

                      Subjectively: OCaml is a very ugly language, with lots of weird syntax and strange language warts. But if you can look past that, it’s a very practical language. It’s not fun the way that Haskell is, but it’s old and stable and works well, and the type system is the best you’re going to find in an imperative language.

                      the syntax does have its share of odd corners, but i don’t find it ugly on the whole. i quite enjoy working in it. also, having given both a decent try, i found it more fun than haskell, and ultimately it was the fp language i ended up sticking with.

                    3. 2

                      The “standard” reply is a company called Jane Street, which apparently requires every employee(?) to take a course in OCaml.

                      1. 6

                        It is not used a lot, but its user base has been growing fast recently. Companies/institutions using OCaml also include: Citrix (XenServer), Facebook, Bloomberg (where ReScript was born), Tezos, Ahrefs, INRIA (Coq, to name one), Aesthetic Integration, Tarides, to name a few. It is used a lot for writing compilers (Rust also started with an OCaml implementation), but it is a pretty good language for systems programming, and for most general-purpose programming in fact.

                        The community is not huge, so you don’t have as many libraries as other languages do, but the ones that are there are usually pretty solid.

                      2. 2

                        It’s getting fairly popular. I have posted Haskell and OCaml skills in HN Who’s Hiring threads, and I’ve been getting tons of emails back lately due to the OCaml part. I have known OCaml (and SML) for a good 15 years, and it has gone from really niche to a language where it’s decently easy to find a job that uses it.

                        I think this is due to the increasing popularity of functional programming and modern type systems. Aside from this, Facebook and many others use it for building static analyzers. Furthermore, it’s a good companion for Coq.

                        Sadly, Haskell is still quite unpopular, but that’s a topic for another discussion.

                        1. 2

                          To complement the sibling replies, consider: OCaml is already a mainstream language. You are most likely to experience it on your desktop through FFTW, a ubiquitous signal-processing library which has been available for a couple decades.

                          1. 1

                            A lot of people have answered your question, I’d also add that F# is a direct descendant of OCaml and shares a lot of the core syntax. It’s probably more widely used than OCaml and seems to be the CLR language that people get the most enthusiastic about.

                          1. 3

                            git can detect when blocks of code are deleted in one place and added in another place, and highlight the actual differences:

                            https://github.blog/2018-04-05-git-217-released/

                            It sounds like you are facing something arcane enough that these heuristics will not help much, but maybe some combination of options will make the re-creation easier.

                            1. 2

                              That indeed helps. Great tip!

                              The diff is still just as long, though, so I still think there is a case for an extended diff format.

                              I’ll try it out (git config --global diff.colorMoved zebra) and see if I get used to it.

                            1. 6

                              https://i.imgur.com/S0bstNs.png

                              Or if I need to accelerate to attack velocity:

                              https://i.imgur.com/ohNWhhS.png

                              Keyboard is a Kyria: https://ianthehenry.com/posts/kyria-build/a-wireless-ergonomic-keyboard/

                              iPhone is an iPhone (running Shelly).

                              Desk is a piece of galvanized steel bent around a 1/4” piece of cork sheeting, covered with a piece of tooling leather. Magnetic tripod mounts support the phone and (sometimes) the keyboard halves. Pretty lightweight, portable setup, so I can work outside on nice days. Not quite as stiff as I’d like and I’ll probably keep iterating.

                              I don’t actually code like this, but it’s great for writing prose.

                              1. 5

                                I’m bookmarking people with ‘ergonomic mobile computing’ setups (bonus if they can work outdoors), neat to see yours.

                                Here’s mine: https://twitter.com/vivekgani/status/1475213967303790595

                                1. 2

                                  Neat! I also use an iPad with a larger lap desk and a big clamp mount that lets me position it right at eye level, which is a pretty great setup for coding – but it’s too bulky and heavy to carry around very comfortably so I never use it outside.

                                  I’ve experimented with an M5Paper for better sunlight support… but the latency is pretty bad. Now that I have a Bluetooth keyboard I should take another stab at the Remarkable 2…

                                  1. 2

                                    Would you share your bookmarks? That’s a topic I’ve been interested in for a while. Just got an external USB keyboard working with my Remarkable 2 and could use some inspiration before the outdoor season starts here in central Europe :)

                                    1. 2

                                      How did you get a USB keyboard working with the Remarkable 2? I have an external battery pack and a USB data/charging splitter and it’s a pretty clunky setup.

                                      1. 2

                                        I got the same: a splitting “y”-cable and a USB OTG adapter. I’m using my phone instead of a powerbank as a power source (and access point).

                                      2. 1

                                        So far it’s just the parent poster and the person I replied to in my Twitter link 🙃. Someday hoping to write a blog post about it.

                                        1. 1

                                          update: I’ve put all my ‘ergonomic mobile computing’ bookmarks into a subreddit: https://www.reddit.com/r/ErgoMobileComputers/

                                          1. 1

                                            Thanks!

                                        2. 1

                                          Would you mind going over the parts you use?

                                          1. 1

                                            Sure it is:

                                            • Tablet: an HP elite X2 g4, with the hiDPI screen option. There’s more options now like the surface pro 8 (full Linux support isn’t quite there) and Asus ROG z13. A 2-in-1 laptop would also suffice.

                                            • Stand: Tiny Tower Stand - a small Kickstarter company made these and they went out of stock last year but supposedly they might be making a newer version.

                                            • Keyboard: Nuphy Nutype F1. Wasn’t expecting to fall into the mechanical keyboard trend but this had the size footprint I was looking for.

                                            • Trackpad: Apple Magic Trackpad 2 - Just painted black because I cracked the glass. Eventually want to make a housing for the keyboard + trackpad if there isn’t an off-the-shelf approach to a USB+Bluetooth keyboard & Trackpad (I’ve researched brydge, Lenovo, and all sorts of other vendors, just not there yet)

                                            Overall the combination of a stand and HiDPI screen close to me means I no longer need to wear glasses when using the computer.

                                            On the software end, I’ve been pretty deep in using macos but things are slowly getting there on Linux between linuxtouchpad.org efforts and gnome going through some changes. Definitely some chaos tolerance but been getting used to it.

                                      1. 8

                                        My first keyboard with limited space was the Planck EZ. I would not recommend this particular keyboard, but I love the “3x6” (on each hand) form factor and have stuck with Planck’s raise/lower/adjust style layers.

                                        Instead of the thumb keys that a Planck uses, I put my layer toggle keys where the shift keys are on a regular keyboard (I have shift on a thumb key). Perhaps surprisingly, I have no trouble switching between my regular laptop keyboard and my weird thumb-shift ergonomic keyboard.

                                        Something that helps me is that my thumb keys stay the same across all of my layers. All of my modifier keys are on my left thumb, and space/enter/backspace are on my right thumb. So those are always available regardless of my layer changes.

                                        .-----------------------------------------.                .-----------------------------------------.
                                        | Tab  |   Q  |   W  |   E  |   R  |   T  |                |   Y  |   U  |   I  |   O  |   P  |  -   |
                                        |------+------+------+------+------+------|                |------+------+------+------+------+------|
                                        | Esc  |   A  |   S  |   D  |   F  |   G  |                |   H  |   J  |   K  |   L  |  ;   |  '   |
                                        |------+------+------+------+------+------|                |------+------+------+------+------+------|
                                        |Lower |   Z  |   X  |   C  |   V  |   B  |                |   N  |   M  |   ,  |   .  |   /  |Raise |
                                        |------+------+------+------+------+------+-----.   .------+------+------+------+------+------+------|
                                        | Ctrl |      |      |      | Alt  |Shift | Cmd |   |Enter |Space |Bksp  |      |      |      |      |
                                        '-----------------------------------------------'   '------------------------------------------------'
                                        

                                        Holding left shift gives me my “numbers and navigation” layer:

                                        .-----------------------------------------.  .-----------------------------------------.
                                        |      |      | PgUp |  Up  | PgDn |      |  |      |   7  |   8  |   9  |   :  |  -   |
                                        |------+------+------+------+------+------|  |------+------+------+------+------+------|
                                        |      | Home | Left | Down |Right | End  |  |   =  |   4  |   5  |   6  |   0  |      |
                                        |------+------+------+------+------+------|  |------+------+------+------+------+------|
                                        |      |      |      |      |      |      |  |   ,  |   1  |   2  |   3  |   .  |      |
                                        '-----------------------------------------'  '-----------------------------------------'
                                        

                                        Left hand is navigation; right hand is a numpad and some commonly-typed numeric symbols.

                                        Holding right shift gives me a punctuation layer:

                                        .-----------------------------------------.  .-----------------------------------------.
                                        |   `  |  !   |  @   |  #   |  $   |  %   |  |  ^   |  &   |  *   |      |      |  =   |
                                        |------+------+------+------+------+------|  |------+------+------+------+------+------|
                                        |      |      |      |  (   |   )  |      |  |      |   [  |   ]  |      |      |      |
                                        |------+------+------+------+------+------|  |------+------+------+------+------+------|
                                        |      |      |      |      |      |      |  |      |      |      |      |   \  |      |
                                        '-----------------------------------------'  '-----------------------------------------'
                                        

                                        Which is pretty familiar, apart from moving the brackets. I keep / on the base layer, so \ is the “raised” version of that. Similarly +/= sits on top of the base layer’s -/_.

                                        Holding right and left shift together gives me an “everything else” layer:

                                        .-----------------------------------------.  .-----------------------------------------.
                                        | Reset| F1   | F2   | F3   | F4   | F5   |  | F6   | F7   | F8   | F9   | F10  | Tog  |
                                        |------+------+------+------+------+------|  |------+------+------+------+------+------|
                                        |      | F11  | F12  |      |      |      |  | Mute | Vol- | Vol+ | Br-  | Br+  |      |
                                        |------+------+------+------+------+------|  |------+------+------+------+------+------|
                                        |      | BT 0 | BT 1 | BT 2 | BT 3 | BT 4 |  | |<<  | Play | >>|  |      |      |      |
                                        '-----------------------------------------'  '-----------------------------------------'
                                        

                                          BT keys toggle different bluetooth profiles, to switch between my laptop/phone/iPad. Tog persistently switches my base layer between qwerty and workman (on my laptop I use software remapping, so my keyboard sends qwerty, but on my iDevices I send workman keystrokes directly).

                                        1. 2

                                          I thought you used Workman layout?

                                          1. 2

                                            See the last sentence :)

                                            1. 2

                                              Comment too long

                                        1. 6

                                          Some years ago I wrote my own local development web server in a few dozen lines of bash. I could never get nc to work for serving concurrent requests (i.e. html+js+css at the same time) but ncat works great.

                                          It was actually really nice for prototyping with compile-to-js languages like ClojureScript or PureScript, as well as very quickly mocking up little CRUD apps. You’d write little executables to handle all the different HTTP verbs.

                                          https://github.com/ianthehenry/httprintf

                                          i.e. start it up, POST /some/resource, and it’ll try to run ./some/resource/POST if that script exists, then fall back to parent directories. Kind of a cute little hack.

                                          1. 3

                                            I’ve enjoyed using Bernstein’s tcpserver for this exact thing. It doesn’t do TLS though.

                                            https://cr.yp.to/ucspi-tcp/tcpserver.html

                                          1. 35

                                            I have never used Zig and this will affect me in absolutely no way.

                                            But this is such a marvelous piece of technical writing that my immediate reaction was to try to read more from the author, only to be saddened to learn that this is the only thing that they have published.

                                            1. 6

                                              Agreed, totally. This is just really beautifully lucid writing.

                                            1. 3

                                              There needs to be better integration and support for higher level scripting in macOS. I should not have to learn Swift, Xcode, and a bunch of low level stuff just to make a button.

                                              1. 7

                                                I used Automator a while ago to make a little graphical frontend to ffmpeg for my partner who had to do some repetitive video processing. It was much easier and worked way better than I expected it to.

                                                1. 2

                                                  Automator is (was?) a great starting frontend. Apple is a trillion dollar company – they could and should be doing so much more on this kind of thing!

                                                2. 5

                                                  There is Automator, Shortcuts, and good old reliable AppleScript. If that doesn’t solve your problem, you can still script using unix-inspired scripting languages and still send AppleEvents back and forth. Or do you mean some other form of scripting? Maybe I misunderstood you.

                                                  1. 3

                                                    There’s a lot to respond with but I’ll try not to ramble on too much.

                                                    What I’m trying to convey is less about “solving problems” than it is presenting “users” with an environment that lets them know they can manipulate it in meaningful ways – and thereby, possibly (and easily), solve their own problems.

                                                      Automator and Applescript are great starts, but Apple has clearly not given them enough love. I cannot recall seeing any Applescript segment in any broadcast keynote (if you know of one, please send it my way). This is despite the company being more than willing to devote elaborate segments to other highly technical features (new processors, etc.). Neither Applescript nor Automator is featured on Apple’s main website. All of this tells me that scriptability of their system is a distant afterthought.

                                                      If Apple wanted to take that kind of malleability seriously, they could start with the less intensive, top-level, front-facing UI of macOS. For example, there is no technical reason that the Finder shouldn’t just be made up of components that are inspectable, and whose underlying implementation is just Applescript. That should have happened a long time ago. There should be a whole unit at the company writing and perfecting user-facing, system-level Applescripts. This would ensure that the technical infrastructure needed for Applescript to have as wide a reach as possible is given priority, and also that users have a rich literature of scripts to learn from.

                                                      Applescript Studio was cool, and it got axed. There should be a primitive interface builder that lets power users quickly and easily draft simple UIs that integrate with the rest of the system, and which they can script in Applescript. There’s no good reason not to have this. And again, the Finder should be written and implemented this way – it should be the Ur-example. If a basic macOS button is not inspectable, it should be an exception, and definitely something not written by Apple.

                                                      Apple has a lot of leverage they could use to make this better. For example, imagine if they reduced App Store costs for vendors that provided rich Applescript dictionaries and integrations with their software. To me that’s a no-brainer.

                                                    Again, they are a trillion dollar company. Instead of giving users what they want – which is an easy way to make money, no doubt – they should be educating users on what is possible by giving them a rich environment to work in. What’s worse is that they have historically proven that such things are viable with things like Hypercard. SK8 is one of the more interesting research projects ever at the company. What might system level scripting and authorship look like if Apple dedicated a rounding-error of financial resources to it?

                                                1. 6

                                                  tl;dr: whoa hey look at this crazy thread about keyboard ergonomics https://community.keyboard.io/t/custom-mounts-what-are-your-ideas/495

                                                  I got sucked down the ergonomic rabbit hole after developing some pretty bad shoulder pain a few years ago.

                                                    The first ergonomic keyboard I used was an Advantage2, because it was stocked standard at my office. It is a really amazingly comfortable keyboard, and even though they’re ugly and huge and bulky… still the most comfortable keyboard I have ever used. I moved all my modifier keys to the left thumb cluster, and space/enter/backspace to the right, but otherwise stuck to a pretty typical layout.

                                                  When I switched to working from home I bought an Ergodox EZ, because I wanted to try real “splitting,” and I was curious to explore QMK and see if further customization could improve my life, and it has the same key layout (including thumb cluster) as the Advantage2, so I thought it would be an easy switch.

                                                  It was a huge step down, comfort-wise. The Ergodox seems to be designed for someone with much larger hands than I have. I would describe my hands as pretty “average” sized for an adult man, and I found it very uncomfortable to keep my thumbs rested on the thumb cluster for extended periods of time. Since thumb keys are one of the main advantages I was enjoying from the Advantage2 keyboard, the Ergodox EZ was pretty useless. I tried it for a few months, but I basically had to abandon it. I thought of just buying an Advantage2 for myself, but I really liked the splitting and didn’t want something so bulky.

                                                    I also paid for the tent/tilt kit for the Ergodox EZ, which gives you little adjustable feet that you can use to angle the keyboard. I was very disappointed by these: there are very few stable positions, and even if it were possible to use the full extent of them without your keyboard falling over, they don’t really allow much range of motion.

                                                  My current “daily driver” is a Let’s Split keyboard. It has exactly the number of keys that I want – I always thought the Ergodox had way too many keys – except for the thumbs. I still miss the larger thumb cluster, and the ortholinear layout is really not as comfortable as the column-staggered layout of an Ergodox or an Advantage2.

                                                  Anyway long story short it’s now pretty easy to build split bluetooth keyboards where each half is entirely wireless, so you don’t have to have a cord connecting the halves. This is a nice ergonomic advantage, as being able to quickly reposition the halves as I move around my office is important to me. I have a very long TRRS cable so that the halves don’t pull each other out of position, but, well, I’d like to have no cable at all.

                                                  As someone who has never soldered anything before, this is a daunting prospect, but I decided to try it anyway, as it seems like a nice skill to develop.

                                                  I’m currently in the process of building a slightly customized Kyria with tripod mounts, as I was very inspired by some of the images in this thread about Model 01 mounts, and it is not currently possible to buy a Keyboardio keyboard.

                                                  The Kyria actually has a “first-party” tripod mount thing, but I really don’t like the design of it – it connects directly to the PCB, and there’s nothing holding it to the rest of the case except your solder joints. This comment is way too long so I won’t go into it, but I had an alternative bottom plate manufactured through SendCutSend that just arrived today and it seems to work great :)

                                                  https://imgur.com/a/eovjYz9

                                                  Anyway just buy a Kinesis if you have the ergonomic keyboard itch. Don’t end up like me.

                                                  1. 10

                                                    Going to the symphony! Beethoven’s seventh.

                                                    Also taking a stab at implementing the Boyer-Myrvold planar embedding algorithm (PDF). I still don’t really know anything about graph theory, but after lots of reading this is pretty much the only paper that I can actually follow. And once I can calculate planar embeddings, I will have the final piece I need to finish implementing The Best Procedural Level Generation Algorithm I Have Ever Seen. (The canonical implementation just uses Boost for the graph theory stuff but like that’s no fun.)

                                                    1. 4

                                                      Beethoven’s 7th is his greatest middle-period symphony IMO (probably most critics would rate the 3rd and 5th as more influential though). An almost equally great work that clearly (IMO) owes a huge debt to the 7th is Schubert’s 9th symphony.

                                                      1. 2

                                                        I never got into procedural stuff, but that’s an awesome-looking algorithm.

                                                      1. 1

                                                        This is really cool!

                                                        I recently hacked together an image-based snapshot test thingy, but I used iimage-mode so that the outputs can appear next to their inputs in the source (example). It works nicely for my use case, but it really falls down exactly in the situation you’re describing – when a lot of images change at once.

                                                            You’ve inspired me to make a bulk side-by-side compare thingy. I think it would be easier if there were two side-by-side buffers with locked scrolling, one with all of the old images and one with all of the new, so you could quickly scan over all of them for mistakes. I feel like I’d make a mistake with the y/n at some point. But I haven’t tried it, so that might not be true in practice.

                                                        1. 1

                                                          I used iimage-mode so that the outputs can appear next to their inputs in the source (example).

                                                          Ah neat. I didn’t think of inline display. Btw, just saw your post from the screenshot, and in there… “But by sprinkling a little bit of Emacs fairydust on it…” Similar goal, different language, same dust :)

                                                          You’ve inspired me to make a bulk-side-by-side compare thingy. I think it would be easier if there were two side-by-side buffers with locked scrolling

                                                          Ah yeah. That may work.

                                                          I feel like I’d make a mistake with the y/n at some point.

                                                              Hasn’t happened yet, but I’m hoping version control somewhat safeguards things (by me taking a last look before committing).

                                                          ps. Pretty sure I used iimage before to render inline images from eshell.

                                                        1. 3

                                                          A few weeks ago I started writing a side project diary about making a little game. I thought maybe I could publish some interesting things I learned about Raylib in time for the Autumn Lisp Game Jam: gotchas around Raylib’s support for vertex normals, the surprising split between meshes and primitive shapes and how that affects drawing order, a neat technique for dynamic lighting I learned that you can implement in a couple dozen lines of code… things like that.

                                                          But now the game jam is coming to an end, and while I’ve been working on the series for a while, I have yet to actually publish anything (directly) game-related. But this weekend I’m going to correct that, and finish up a post about some vector techniques that I learned to avoid calling trigonometric functions in a game loop. I’m excited because it was the first post I actually wrote for the series, but is going to end up being the… seventh post that I publish. Whoops.

                                                          1. 8

                                                            I remember being completely blown away by Jane Street’s code review tool “Iron,” which was simultaneously much simpler and much nicer to use than any review tool I’d encountered before it.

                                                            I went through this list to see how it stacked up – it can’t do everything, but it makes a pretty good showing. Here’s what it’s missing:

                                                            Ability to view diffs in a work-in-progress state before sending them out for review.

                                                            Iron does this, but in a stronger way: before you can send something out for someone else to review, you have to review (and “approve”) the code yourself.

                                                            Ability to comment on a specific substring of a line.

                                                            Have to simulate this with English words.

                                                            Reviewers can signal the degree of their confidence in the code (e.g. “rubber stamp” < “looks good to me” < “full approval”)

                                                            Iron only understood “we should merge this to parent” or “we should not merge this to parent,” which might reflect different degrees of confidence depending on the parent branch. (Of course this confidence could be expressed in high resolution to other people, but I’m not sure what tooling support would look like.)

                                                            Automated “Round robin” reviewer assignment within a team.

                                                            Not supported out of the box, but there were teams that scripted this.

                                                            Code review history (including review comments) is saved somewhere, preferably indefinitely.

                                                            Iron saves all comments forever, but not in a very easy-to-rediscover way. When I left people were working on a feature to make this more easily searchable/browseable. So it did do this, but the UX was bad enough that it basically didn’t. Probably the biggest missing feature; hopefully fixed now.

                                                            Ability to rollback / revert a submitted diff from within the tool.

                                                            Had to be done manually, and it was a bit of a hassle.

                                                            Opt-in setting to auto-merge the PR once it’s been sufficiently approved.

                                                            This would have been useful, especially for things like config changes.

                                                            Ability to customize when a given CI workflow is run (e.g. upon initial review request, upon each update, upon submission).

                                                                  Couldn’t do this as far as I know. Everything always ran on every (debounced) push.

                                                                  Customizable per-diff presubmits. (Example: “Prevent submission of this diff until PR #123 is deployed in prod”)

                                                            Iron has the concept of “locks,” where you say things like “you can’t release this until X.” But “X” is a string of English text understood by humans, not something the tool was aware of. So a human could remove the lock erroneously.

                                                                  There was another interesting feature where you could tag certain commits as “introducing a bug” and other commits as “fixing a bug,” and there were safety guards in place to prevent you from deploying software known to contain bugs but not their fixes, which sort of overlaps with this idea.

                                                            I think that’s it? I think it ticks every other box. Well, except…

                                                            For the purposes of this post, “code review tool” refers to a web UI for reviewing code

                                                            So it wouldn’t count as a code review tool by this definition… but I’m alright with that. :)

                                                            I gave a talk a few years ago describing how it works, if anyone is curious to see a slightly different approach. https://www.youtube.com/watch?v=MUqvXHEjmus

                                                            1. 2

                                                              Iron was never publicly available, was it? I remember seeing talks and videos, but I don’t remember actually seeing elisp. Might’ve happened after I quit using Emacs, though.

                                                              1. 2

                                                                No; Iron was never really released to the public. For some reason Jane Street used to publish the source code: https://github.com/janestreet/iron

                                                                But as far as I know it was never actually buildable externally, because it had some in-house dependencies (like… the build system) that were not open sourced. Also the Emacs client was a separate project, and as far as I know was never published at all.

                                                              2. 2

                                                                Ability to rollback / revert a submitted diff from within the tool.

                                                                Had to be done manually, and it was a bit of a hassle.

                                                                This is built!

                                                                Code review history (including review comments) is saved somewhere, preferably indefinitely.

                                                                Probably the biggest missing feature; hopefully fixed now.

                                                                Oh, honey

                                                              1. 3

                                                                I would like to keep poking my SDF / Raymarching renderer. I have the initial fragment shader written, but it only runs in a shadertoy and I’d like to actually use it for things. Figure I’m going to use this as an excuse to learn Nim. I use Haskell at work and it’d probably be fine, but then it might feel like work. :p

                                                                      I’m curious to see what I can build with it. Current goals are a tiny game of some kind, and possibly a UI framework: utilitarian, but fast and tiny, since I can combine it with SDF font rendering for some interesting possibilities.

                                                                1. 2

                                                                  I would love to see an SDF-based UI framework — I’ve thought a tiny bit about this, but haven’t actually tried anything. Very curious if there’s a nice way to deal with color or images without giving up the wealth of shape combinators.

                                                                1. 7

                                                                  This is a fascinating read! I had no idea this was possible.

                                                                  However, I would caution against generalizing here to lisps more broadly; the ability to embed a function value directly in a macroexpansion seems to be a quirk of CL and Janet as far as I can tell; even other lisps sharing close ancestry with CL like Emacs Lisp don’t support it.

                                                                  1. 6

                                                                    Turns out I had made a typo and it does in fact work in Clojure.

                                                                        However, the rationale for doing it does not really apply in Clojure, since the macro system integrates smoothly with the namespace system: backquote fully qualifies all symbols by default with the namespace in which the intended function is found. So while it’s possible to use this technique, it’s a solution to a problem that doesn’t exist; introducing shadowed names in the context of the macro caller cannot cause the macroexpansion to resolve to the wrong function.
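
                                                                        For example, a quick sketch (the namespaces and the present function here are made up for illustration):

                                                                          (ns my.app)

                                                                          (defn present [x] (str ">> " x))

                                                                          ;; backquote resolves present when this macro is defined,
                                                                          ;; so the expansion contains my.app/present
                                                                          (defmacro loud [form]
                                                                            `(present ~form))

                                                                          (ns other.app (:require [my.app :refer [loud]]))

                                                                          ;; a local present at the call site cannot capture the one in the expansion
                                                                          (let [present (fn [_] "hijacked")]
                                                                            (loud 42))
                                                                          ;; => ">> 42"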

                                                                    1. 3

                                                                      Or is the implicit namespace-qualification a solution to a problem that doesn’t exist? :)

                                                                      Common Lisp does the same thing, actually — maybe Clojure copied this from Common Lisp (?). It is a totally valid solution, but (at least in Common Lisp; not sure if Clojure does something more clever) you can still run into issues if your macros are defined in your own package, reference functions in that same package, and are also expanded in that same package — everything is in the same namespace. Which like… yeah then you oughtta know what your macros look like, I guess. But “lexically scoped macros” or whatever work regardless of the namespace structure.

                                                                      (Also, strong caveat: I have no idea what I’m actually talking about and am basing those statements on what I read in On Lisp and have never written production lisp in my life.)

                                                                      1. 5

                                                                        It is a totally valid solution, but (at least in Common Lisp; not sure if Clojure does something more clever) you can still run into issues if your macros are defined in your own package, reference functions in that same package, and are also expanded in that same package — everything is in the same namespace.

                                                                            Yeah, this doesn’t happen at all in Clojure. Even if you’re referencing something from the current namespace, it gets fully expanded into an unambiguous reference in the quoted form. It’s basically impossible to write an unhygienic macro in Clojure unintentionally.

                                                                        1. 6

                                                                              It has its weird issues, though. You can unintentionally write a macro whose expansion won’t compile, due to hygiene errors:

                                                                          (ns foobar)
                                                                          
                                                                          (def x 10)
                                                                          
                                                                          ;; ...Perhaps a lot of code...
                                                                          
                                                                          (defmacro foo [arg]
                                                                            `(let [x 1]
                                                                               (+ x 1)))
                                                                          

                                                                          If you try to use foo, it will complain that the x in the let bindings is not a “simple symbol” (because it gets expanded to (let [foobar/x 1] (+ foobar/x 1)) which is thankfully not valid). And fair enough, you will hit this issue as soon as you try to use the macro, so it should be relatively easy to debug.
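
                                                                              (A quick way to see what’s going on, from a REPL in the foobar namespace:)

                                                                                (macroexpand-1 '(foo 5))
                                                                                ;; => (clojure.core/let [foobar/x 1] (clojure.core/+ foobar/x 1))
                                                                                ;; and let rejects foobar/x because it is not a simple symbol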

                                                                          Also, the system breaks down when you’re trying to write macro-writing macros. Something like this simply fails with the same error, that foo is not a “simple symbol”:

                                                                          (defmacro make-foo []
                                                                            `(defmacro foo [arg]
                                                                               `(let [y 1]
                                                                                  (+ y 1))))
                                                                          

                                                                          The same happens if you change make-foo to accept the name of the macro but still use quasiquotation (not exactly sure why that is, though). The only thing that seems to work is if you convert the let to a manual list building exercise:

                                                                                (defmacro make-foo [name]
                                                                                  (let [y-name 'y]
                                                                                    (list 'defmacro name ['arg]
                                                                                          (list 'let [y-name 1]
                                                                                                (list '+ y-name 'arg)))))
                                                                          
                                                                          (make-foo bar)
                                                                          (bar 2) => 3
                                                                          

                                                                          But this breaks down as soon as you try to pass in identifiers as arguments:

                                                                          (let [x 1] (bar x)) ;; Error: class clojure.lang.Symbol cannot be cast to class java.lang.Number
                                                                          
                                                                          1. 2

                                                                                You can unintentionally write a macro whose expansion won’t compile, due to hygiene errors:

                                                                            That’s kind of the whole point; you made an error (bound a symbol without gensym) and the compiler flagged it as such. Much better than an accidental symbol capture.

                                                                            Something like this simply fails with the same error, that foo is not a “simple symbol”

                                                                            Yeah, because it’s anaphoric. The entire system is designed around getting you to avoid this. (Though you can fight it if you are very persistent.) The correct way to write that kind of macro is to accept the name as an argument (as you did in the second version) but your second version is much uglier than it needs to be because you dropped quasiquote unnecessarily:

                                                                            (defmacro make-foo [name]
                                                                              `(defmacro ~name []
                                                                                 `(let [y# 1]
                                                                                    (+ y# 1))))
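
                                                                                Then, if I have this right, usage is just:

                                                                                  (make-foo bar)
                                                                                  (bar)
                                                                                  ;; => 2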
                                                                            
                                                                            1. 3

                                                                              Thanks for explaining how to make this work, I stand corrected!

                                                                          2. 3

                                                                                  That’s an elegant solution to hygiene. I might have to give this Clojure language a try; it sounds pretty great!

                                                                            Are there other Lisps that work this way, or is Clojure unique in this regard?

                                                                            1. 4

                                                                                    Both Clojure’s and Common Lisp’s macro systems seem like a huge kludge after learning syntax-case.

                                                                              1. 2

                                                                                Fennel works similarly in that it prevents you from using quoted symbols as identifiers without gensym/auto-gensym. However, it does not tie directly into the namespace system (because Fennel is designed to avoid globals and its modules are very different from Clojure namespaces anyway) but works entirely lexically instead, so if you want a value from a module, your macroexpansion has to locally require the module.

                                                                                https://fennel-lang.org/macros

                                                                            2. 1

                                                                              What happens in Janet if you rebind the injected variable to a different value? It seems to me that this shouldn’t work in the general case. Also, I don’t see how this could work if you inject a variable which is declared later in the file.

                                                                              1. 1

                                                                                        Janet inlines values; you can’t redefine something that isn’t specifically a var. If it is a var, it is accessed via indirection.

                                                                        1. 10

                                                                          This is how operators work in Swift, and it is better.

                                                                          https://docs.swift.org/swift-book/LanguageGuide/AdvancedOperators.html#ID46

                                                                          Swift goes one step further and has a concept of “precedence groups,” which make it easy to define new operators (without needing to write/modify comparePrecedence). It nicely matches the way my brain thinks about operator precedence: “multiplicationy things bind tighter than additiony things.” If you make a new infix operator, you can just say “this should behave like * and /.” And you can, of course, define your own precedence groups.

                                                                          https://docs.swift.org/swift-book/ReferenceManual/Declarations.html#ID550

                                                                          So there’s like a topological sort thing happening there, and it allows you to say “this operator can only mix and match with these other operators,” and you have to add parens to disambiguate weird expressions. It’s really neat!

                                                                          1. 4

                                                                                            Trying to wrap my head around Lisp macros. I have long had a misconception that Lisp-2s existed as a weird compromise to allow macros to still be fairly useful in the face of lexical scoping. But I have recently seen evidence that, in fact, the opposite is true, and it is much easier to write correctly behaved macros in a Lisp-1.

                                                                            I am not a Lisp person, so I’m coming at this pretty blind. I’ve been reading papers about the history of Lisp, and trying to understand where my misconception came from. So far I’ve seen this claim repeated in a few places, but nowhere that includes an example of the “right” way to reconcile lexical scope and quasiquoting. So I have a lot more reading to do…

                                                                            1. 1

                                                                                              This really doesn’t have anything to do with Lisp-1 vs. Lisp-2 so much as it has to do with hygienic vs. non-hygienic macros. Your misconception might stem from the fact that the most common Lisp-2 (Common Lisp) also has a non-hygienic macro system and the most common Lisp-1 (Scheme) tends to have hygienic macro systems. I think the idea that Lisp-2 makes it “easier” to deal with non-hygienic macros probably has to do with the fact that if you separate the function environment from the regular variable environment, then it is often the case that the function environment is much simpler than the variable environments. Typical programs don’t introduce a lot of function bindings except at top or package level.

                                                                              1. 2

                                                                                This is a very reasonable assumption, but in this case I was only thinking about “classic” quasiquote-style macros, and how they differ in Lisp-1s and Lisp-2s.

                                                                                I think the idea that Lisp-2 makes it “easier” to deal with non-hygienic macros probably has to do with the fact that if you separate the function environment from the regular variable environment, then it is often the case that the function environment is much simpler than the variable environments.

                                                                                                Yeah, that matches my prior assumption. I was very surprised when I learned how a modern Lisp-1 with quasiquote handles the function capture problem – far more elegantly than the separate function namespace. Then I learned that Common Lisp can do the same thing (in a much uglier way), and I was very surprised that it is not just the canonical way to deal with unhygienic macros. Now it seems like more of a historical accident that Lisp-2s are considered (by some people) “better” for writing unhygienic macros than Lisp-1s.

                                                                                I’m probably not explaining this well. I ended up writing a blog post about my findings that is very long, but does a better job of explaining my misunderstanding.

                                                                                https://ianthehenry.com/posts/janet-game/the-problem-with-macros/

                                                                              2. 1

                                                                                                Have you had a look at Common Lisp yet? I’m learning macros there and it seems straightforward.

                                                                                1. 2

                                                                                  Yep! I’m using Common Lisp as my prototypical Lisp-2 as I try to work through and understand this.

                                                                                  The thing I’m having trouble with is that if you want to call a function in a macro expansion, you have to do the whole funcall unquote sharp quote dance, or risk the function being looked up in the calling code’s scope. It seems CL tried to make this less necessary by saying there are certain functions you cannot shadow or redefine, so you only actually have to do this with user-defined functions, but that seems like such a big hack that I must be missing something.

                                                                                  1. 1

                                                                                                    It’s the same thing with variables. Common Lisp macros just don’t know anything about lexical scope. In fact, arguably, they don’t even operate on code at all: they operate on numbers, symbols, strings, and lists of those things. Code denotes something, but without knowledge of the lexical context, the things CL macros transform cannot even come close to being “code”.

                                                                                                    This is why I like Scheme macros so much. They operate on a “dressed” representation of the code which includes syntactic information like scoping, as well as useful information like source line numbers, etc. By default they do the right thing, and most Schemes support syntax-case, which allows you to have an escape hatch as well. I also personally find syntax-case macros easier to understand.

                                                                                    1. 1

                                                                                      Yeah, I really hate that approach

                                                                                1. 12

                                                                                  The title of this article really buries the lede: it’s about building a self-modifying unit test library with Janet macros, which is actually really cool. Running a test generates a new file, with expected values replaced with actual ones. Writing tests this way is similar to working in a REPL.

                                                                                  Are there any other languages with a test system like this?

                                                                                  1. 10

                                                                                    (author here)

                                                                                    Probably! Lots of languages have some kind of snapshot test library, but I don’t know of any framework that works exactly like Judge. I assume that is due to ignorance, though; I can’t claim to have any particular knowledge of the domain.

                                                                                    The .corrected file just-look-at-a-diff thing is taken straight from Jane Street’s expect test framework, which I think got the idea from Mercurial’s unified tests – see cram for a generic description. cram uses .err instead of .corrected, but whatever.

                                                                                    But ppx_expect doesn’t serialize values or work on arbitrary expressions. The way it works is by redirecting stdout for the duration of the test, and by inserting string literals into the source code.

                                                                                    This makes some sense in OCaml because there is not really a canonical way to serialize values – even though there is a de facto standard at Jane Street – so having the user produce strings is reasonable.

                                                                                    (The stdout capture is also extremely useful, because it allows you to use literal printf debugging deep in your application code, and see the output show up as you run your tests. It’s also very useful to have tests that produce ASCII tables or whatever. It’s good for a lot of reasons, but of course it’s easy to do stdout redirection yourself, and you can easily implement the ppx_expect approach on top of the “view a value” primitives.)

                                                                                    But apart from that difference, the workflow you see in the post matches the ppx_expect workflow very closely. Here are some other examples:

                                                                                    But I don’t know what the ergonomics of those are like; I don’t know how easy it is to write tests in this REPLy-style using those libraries. The only one I’ve actually tried is k9, and while it did support “inline snapshots,” at least at the time it didn’t write .corrected files. It will still show you failing tests, but… I really like being able to bring my own diffs to the table. Let me use emerge or delta or vimdiff or whatever else.

                                                                                    1. 2

                                                                                      The first place I heard of this approach was Jane Street’s expect test framework for OCaml.

                                                                                    1. 21

                                                                                      Someone shared a visualization of the sorting algorithm on ycombinator news.

                                                                                                              PS: Really don’t enable sound; it’s loud and awful.

                                                                                      1. 8

                                                                                        Yeah this page is cool, and it shows that this “naive sort” (custom) is close but not identical to insertion sort, which is mentioned at the end of the paper.

                                                                                        And it also shows that it’s totally different than bubble sort.

                                                                                        You have to click the right box and then find the tiny “start” button, but it’s useful.


                                                                                        I recommend clicking these four boxes:

                                                                                        • quick sort
                                                                                        • merge sort
                                                                                        • custom sort (this naive algorithm)
                                                                                        • bubble sort

                                                                                        Then click “start”.

                                                                                        And then you can clearly see the difference side by side, including the speed of the sort!

                                                                                        Quick sort is quick! Merge sort is better in the worst case but slower on this random data.

                                                                                        1. 1

                                                                                          cycle sort is pretty interesting too!

                                                                                          1. 1

                                                                                                                    I thought (this is from 15-year-old memories) that what made merge sort nice is that it isn’t particularly dependent on the data, so the performance isn’t really affected if the data is nicely randomized or partially sorted or whatever, whereas quicksort’s performance does depend to some extent on properties of the input sequence (usually to its benefit, but occasionally to its detriment).

                                                                                          2. 7

                                                                                            If you are playing with this website: when you change your selected sorts, press “stop” before you press “start” again. Otherwise both sorts will run at the same time, undoing each other’s work, and you will wind up with some freaky patterns.

                                                                                            This comment is brought to you by “wow I guess I have absolutely no idea how radix sort works.”

                                                                                            1. 7

                                                                                              Yeah the radix sort visualization is cool!

                                                                                                                    The intuition: say you have to sort 1 million numbers, BUT you know that they’re all from 1 to 10. What’s the fastest way to sort them?

                                                                                                                    Well, you can do it in guaranteed linear time if you just create an array of 10 “buckets”: make a single pass through the input and, for each number, increment the counter in its bucket.

                                                                                              After that, print out each number for the number of times it appears in its bucket, like

                                                                                              [ 2 5 1 ... ]   ->
                                                                                              1 1 2 2 2 2 2 3 ...
                                                                                              

                                                                                              etc.
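
                                                                                                                    In code, that counting idea looks something like this (a rough sketch in Clojure, to match the other code in this thread; it is not what the site actually runs):

                                                                                                                      (defn counting-sort
                                                                                                                        "Sort a collection of small non-negative integers by counting occurrences."
                                                                                                                        [xs max-val]
                                                                                                                        ;; one pass: bump the bucket for each value
                                                                                                                        (let [counts (reduce (fn [c x] (update c x inc))
                                                                                                                                             (vec (repeat (inc max-val) 0))
                                                                                                                                             xs)]
                                                                                                                          ;; emit each value as many times as its bucket was hit
                                                                                                                          (mapcat (fn [n v] (repeat n v)) counts (range))))

                                                                                                                      (counting-sort [2 5 1 5 3 1] 5)
                                                                                                                      ;; => (1 1 2 3 5 5)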

                                                                                              I think that lines up with the visualization because you get the “instant blocking” of identical colors. Each color is a value, like 1 to 10. (Watching it again, I think it’s done in 3 steps, like they consider the least significant digits first, then the middle digits, then the most significant digits. It’s still a finite number of distinct values.)

                                                                                              There are variations on this, but that’s the basic idea.

                                                                                                                    And it makes sense that this bucket-counting approach is even faster than quicksort when there are a limited number of distinct values. If you wanted to sort words in a text file, it won’t work as well: there are too many distinct values.

                                                                                              It’s not a general purpose sorting algorithm, which is why it looks good here.

                                                                                              1. 4

                                                                                                Oh, yeah — I meant that I started radix sort while the custom sort was still running, and it just kind of scrambled the colors insanely, and it took me a few minutes of thinking “dang and here I thought radix sort was pretty simple” before I realized they were both running at the same time :)

                                                                                            2. 1

                                                                                              Nice visualisation, though it does make some algorithms (selection sort) look better than they are!