1. 4

    This is excellent! I personally find this style of incrementally showing changes to be among the best ways of teaching. I wonder how viable it is to apply such a technique to larger-scale projects, ones with many files and more complicated histories.

    It seems as though each step is generated from a commit diff. It wouldn’t be that hard to handle multiple files in the same manner. However, are there situations in which a “change” is better understood by viewing the result of a series of commits, rather than one at a time? Does the order in which you present changes from a single commit matter? Is there extraneous information to be scrubbed, even if it occurs in the diff?

    I’m thinking of, say, demonstrating a moderately sized refactor. If there are a lot of mechanical changes, perhaps you don’t want to pollute the reader’s mind with each and every one of them. The diff doesn’t seem to provide that much control.

    Thank you for sharing!

    1. 3

      When I think of the phrase “spooky action at a distance” with respect to programming, the thing that always comes to my mind is mutable state. I know of no better analogy within programming to quantum mechanics’ “spooky action at a distance” than mutable state, though admittedly I know next to nothing about quantum mechanics. Mutable objects in programming seem a lot like objects that have been “quantum entangled”. I think about this a lot when I have reason to share a piece of mutable state between two objects, such as when writing a unification-based type system (which one of my current projects includes). I find it interesting to think about how the type of some expression, as a type variable, can propagate down different branches of a program tree. Then at some point the type checker unifies the type on one branch with something (more) fully specified, and suddenly, spookily, the types of expressions in a distant branch are similarly specified.
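
      To make the “entanglement” concrete, here’s a tiny OCaml sketch (a toy of my own invention, not from any real unifier) of two branches sharing one mutable type variable:

      ```ocaml
      (* Two branches of a program tree alias the same mutable type
         variable; unifying it through one branch "spookily" resolves
         the other branch as well. *)
      type ty =
        | TInt
        | TVar of ty option ref  (* None = not yet determined *)

      let () =
        let a = TVar (ref None) in
        let branch1 = a and branch2 = a in  (* aliased across branches *)
        (match branch1 with TVar r -> r := Some TInt | _ -> ());
        match branch2 with
        | TVar { contents = Some TInt } -> print_endline "branch2 is int too"
        | _ -> print_endline "unresolved"
      ```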

      Not that I think mutable state is inherently bad – it’s a great engineering tool that is often overly maligned by functional programming purists. But I do think, just like good design of countless other things in programming, exactly how to use mutable state with clarity rather than confusion requires good taste and judgment. (This is not a claim that I necessarily have the best taste and judgment.)

      Now I’d like to nitpick a statement from the OP, even though others are commenting on the same statement:

      x + y

      If this were written in C, without knowing anything other than the fact that this code compiles correctly, I can tell you that x and y are numeric types, and the result is their sum.

      In C, without using a particular compiler that specifies particular semantics beyond the standard, you cannot know the result (or even the behavior of the surrounding code or entire program!) of x + y without knowing the dynamic state of the running program, because x + y can result in undefined behavior (signed overflow, for one). There is no programming language more spooky than C.

      1. 2

        I don’t think it’s mutable state by itself that’s the problem, it’s aliased mutable state. In C, I can write code like this:

        int foo[64];
        int *a = foo;
        int *b = a + 42;
        b[2] = 12;   /* writes to the same object as a[44] */
        

        And this changes the value of a[44], even though I never used the name a in my mutation. That’s action at a distance and, to me, a good language should provide some static type information to tell you when things might be aliased (neither C nor C++ does a good job here).

        Aliasing between concurrent execution contexts is the worst case of this. In C, there’s no protection against this at all and (worse) the language says it’s undefined behaviour if two threads concurrently update a variable and it isn’t _Atomic qualified. Shared mutable state is the worst possible kind of spooky action at a distance: you can step through a thread one instruction at a time and still see a result that you couldn’t explain from looking at that thread’s code. This is why Verona makes it impossible to express concurrently mutable state.

        1. 1

          I agree, without mutable state a function can be efficient, or inefficient, but never really spooky.

          1. 1

            Regarding your mention of unification state propagating through different branches of the program tree: you may be interested in this paper, which defines a type system that’s strictly compositional (the type of an expression is based entirely on the types of its subexpressions).

          1. 3

            I’m kind of disappointed, since I was expecting something totally different based on the title.

            Having done some minor work on Android applications throughout my high school and university days, I can name a much more significant problem with Gradle than Groovy’s syntax: it’s slow! This is mentioned only in passing in this article, but Gradle is painfully slow, and even the smallest non-Android Java projects take a significant amount of time to build, every time. It’s awful if you’re trying to iterate or experiment! I’ve heard this complaint from others, too.

            I really don’t buy into the provided criticisms of Groovy’s syntax. To me, the project definitions are quite readable, and I don’t see many reasons to think about how my project description is computed (and thus how and when the lambdas are invoked). It matters in the case of side effects like printing messages, but then, where did you expect a message to be printed? If you think of a task lambda as “where you describe a task” instead of “the task”, then it’s not all that surprising that I/O happens at configure time.

            And then there are complaints about objects that are “just there”, like tasks and ext. If Gradle is a domain-specific language, then these are just its “standard library”. print is “just there” in Python, Math modules are auto-imported in many languages, and Make has the “phony” special case. Why should tasks be treated differently? Not being familiar with a language’s standard library is not a good reason to complain about the language.

            The variable scoping mechanism is also not that unusual. In, say, Ruby, you can also access local variables from a lambda / block, but not from a function definition. Indeed, the former creates a closure (which, hey, is exactly what the Groovy guys call it!) that maintains access to the variables that were around when it was declared, while the latter creates a function, which does not have access to the surrounding variables. Having to use a class is definitely a limitation of Groovy, but then, having static fields in that class makes sense, since static variables are precisely how you make “globals” in Java. And if you want a variable accessible from all functions in your file, is that not a global?

            I do agree with the “one way to do things” sentiment, though. Groovy seems to provide a lot of flexibility in expressing even the most minute things, which can be paralyzing for beginners and frustrating for people working in teams. Unfortunately, most languages flexible enough to be bent into a build system will probably be flexible enough to allow many different approaches to solving problems.

            1. 2

              Slow builds suck! There are a few things you can/should do to fully unlock Gradle’s potential:

              Gradle has two phases: configuration and execution. The configuration phase always has to run, so make sure you don’t have any expensive calls that get run there. This happens often when people write imperative stuff into, e.g., their task configurations. Make sure you don’t do that; instead, ideally use plugins via buildSrc to contain the imperative logic. The latest versions also ship with a cache for the configuration phase.

              Make sure build caches, parallel builds in Gradle, and incremental options for your compilers are enabled.
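
              For reference, these can be flipped on in gradle.properties (property names as I remember them; double-check against your Gradle version’s docs):

              ```properties
              # Reuse task outputs across builds
              org.gradle.caching=true
              # Run independent projects in parallel
              org.gradle.parallel=true
              # Cache the configuration phase (recent Gradle versions)
              org.gradle.configuration-cache=true
              ```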

              If you are writing your own tasks, make sure their inputs and outputs are well defined. Gradle can then cache them. You can run your builds with the --info flag to see why Gradle considers a task out-of-date. Maybe some of your tasks are non-deterministic? That can easily happen, but once you know the reason it’s also often easily fixed.

              Getting the caching working and minimizing work done in the configuration phase are the main ingredients to get faster builds for iterating.

              For larger builds, you might want to give the Gradle daemons larger heaps (I think 256m is the default setting?) so they don’t get thrashed by GC.

              Best of luck :)

            1. 4

              Here’s mine! http://imgur.com/a/Rz2wMk9

              I’m currently away from home, so my picture is a little old, but nothing has changed. I use i3+polybar and a Dell XPS.

              By the way, the XPS is a terrible computer and only god can help you if anything goes wrong with it. I had 6 consecutive trips to their repair location, with one repair breaking another component. I gave up after the 6th, because they replaced the laptop, and the microphone and trackpad broke within months.

              1. 1

                Curious. Do you manage to prevent your mic from picking up the keyboard noise? What keyboard is that?

                1. 2

                  I just use push to talk. The keyboard is loud enough to be audible wherever I put the mic. It’s a Cooler Master Masterkeys Pro S, as dumb as that name sounds :P

              1. 76

                Is there any way to support Lobsters’ hosting? Now that the server is not donated, I wouldn’t mind chipping in a little bit every month to subsidize operating costs.

                1. 7

                  +1 to this, I’d donate!

                  1. 5

                    +1 I would too.

                    1. 3

                      Ditto. I can provide Sysadmin expertise and/or moderator time as well, if desired.

                      1. 11

                        Ever hear the internet rule of thumb: “Anyone who asks to be a moderator should never be made one.” ?

                        1. 3

                          Ditto re: sysadmin / SRE work.

                          1. 3

                            Ditto, can volunteer with SRE work

                          2. -1

                            +1

                          1. 4

                            It would be really nice if Digital Ocean let you upload arbitrary ISO files and go from there, but that is apparently not the world we live in.

                              My cloud VM is a NixOS instance on DigitalOcean. I can dig up the details of how that works if you want @cadey. I build a NixOS VM with some config stuff for DO, upload the image, and run that.

                            1. 1

                              I have a NixOS server on DigitalOcean. nixos-infect worked wonderfully, I just copied my configuration files on and up it went.

                            1. 5

                              I write Haskell for my research work, and I was planning on doing this year’s advent of code in it. However, since I was planning on trying to go fast (my goal was to make top 100 at least once this year), I quickly gave Haskell up when I solved a few of the 2019 problems for practice. It just felt like a lot of these problems lend themselves very well to imperative solutions, and these solutions are very hard to represent in Haskell. Good on you for sticking all the way to the end!

                              You mention zippers in your post. I don’t know if you know this, but I heard that you can derive the type of a zipper by “taking the derivative” of a data structure type. For instance, suppose we represent a unit type as 1, a sum type using +, and a product type using *. To represent Either a b we can write a+b. We can write a list as l(a) = 1 + a*l(a), so (1-a)*l(a) = 1, or l(a) = 1/(1-a). Taking a derivative of this – in the calculus sense! – yields 1/(1-a)^2, or l(a)^2. A zipper of a list is, indeed, two lists! Apparently, this is true in general.
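
                              In code form, a rough OCaml sketch of the usual list zipper (names mine):

                              ```ocaml
                              (* A list zipper really is two lists -- l(a)^2: the
                                 reversed prefix before the focus, and the suffix
                                 from the focus onward. *)
                              type 'a zipper = { before : 'a list; after : 'a list }

                              (* Shift the focus one element to the right. *)
                              let move_right z =
                                match z.after with
                                | [] -> z  (* already at the end *)
                                | x :: rest -> { before = x :: z.before; after = rest }
                              ```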

                              Finally, Haskell’s parser combinator approach was so intuitive to me that I implemented a version of it in my language of choice, Crystal, for one of the Advent problems.
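
                              The core idea ports to almost any language with closures; a bare-bones OCaml version (illustrative only, not Crystal and not a real library) might look like:

                              ```ocaml
                              (* A parser consumes a char list and yields a value plus
                                 the remaining input, or None on failure. Combinators
                                 build big parsers out of small ones. *)
                              type 'a parser = char list -> ('a * char list) option

                              (* Match exactly one given character. *)
                              let char_p c : char parser = function
                                | x :: rest when x = c -> Some (c, rest)
                                | _ -> None

                              (* Sequencing: run p, feed its result to f,
                                 then continue on the leftover input. *)
                              let ( >>= ) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser =
                               fun input ->
                                match p input with
                                | None -> None
                                | Some (v, rest) -> (f v) rest
                              ```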

                              Thanks for sharing!

                              1. 2

                                Thanks for the kind words! I’m aware of the derivative approach for zippers.

                                1. 2

                                  (Regarding the zipper derivative.) That’s incredibly awesome. Truly, math is cool. I took a category theory class a while ago but never imagined you could do stuff like this.

                                1. 12

                                  As @andyc points out, I think it’s bad advice to tell beginner programming language creators to start with C/C++ and Lex and Yacc. The words “extraneous cognitive load” rush to mind immediately, especially for the first goal of a “prototype implementation”. When I first started working on compilers in high school, I only knew C and some C++, and I wasted so much time writing the most minute algorithms and transformations, messing with pointers and memory allocation and reference cycles and segmentation faults. If you want to design a brand new language, these should be the least of your worries. Dealing with these issues just distracts you from designing your actual language.

                                  It was not until I took a programming languages course in university that I realized just how much easier it is to write programming languages in something like Haskell or OCaml: I can write an imperative language with functions, closures, different data types and much more in about an hour; it would still take me days to do the same in C++, and although I’m no professional C++ developer, I doubt that it’s purely a matter of skill. The ability to rapidly prototype and explore ideas for your language is a huge positive, since it drastically lowers the cost of making an incorrect design decision. Even Rust, a very much imperative language, had OCaml as its first compiler implementation language.
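
                                  For a sense of scale, the skeleton of an interpreter falls out in a few lines of OCaml (a toy sketch, obviously, not a whole language):

                                  ```ocaml
                                  (* An AST and evaluator for a toy expression language.
                                     Extending this with closures or new data types is a
                                     local, mechanical change -- no memory management
                                     or segfaults to fight along the way. *)
                                  type expr =
                                    | Num of int
                                    | Add of expr * expr
                                    | If of expr * expr * expr  (* nonzero = true *)

                                  let rec eval = function
                                    | Num n -> n
                                    | Add (a, b) -> eval a + eval b
                                    | If (c, t, e) -> if eval c <> 0 then eval t else eval e
                                  ```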

                                  I wonder if the article’s author has ever tried functional languages, especially as metalanguages, and if they did, I wonder what they thought of them.

                                  1. 8

                                    I agree that C/C++ and Lex/Yacc is a bad idea. It reeks of cargo culting – there is generally no reason to write any new code for a general-purpose machine in C/C++. Sure, you can LARP the ’70s, but it detracts massively from the article.

                                    Also, writing a self-hosting compiler is neither a requirement nor particularly useful; it certainly doesn’t deserve its own bullet point.

                                    I sincerely hope people avoid doing the things that are recommended in the article. Perhaps “(1990)” could be added to the title, such that people see at a glance that the advice is 30 years out of date?

                                    1. 7

                                      Author here. First of all, C and C++ are completely different programming languages. Please do not conflate them. I didn’t mention C++ at all in my article, and I will reply to you from the C perspective only.

                                      I think it’s bad advice to tell beginner programming language creators to start with C/C++ and Lex and Yacc. The words “extraneous cognitive load” rush to mind immediately, especially for the first goal of a “prototype implementation”. When I first started working on compilers in high school

                                      Quotes from my article:

                                      1. “Designing and implementing a new programming language from scratch is one of the most challenging tasks a programmer can undertake.”
                                      2. “This post is targeted at motivated programmers who want to make a serious attempt at designing a useful programming language.”
                                      3. “You [the reader] already know a few programming languages[..]”

                                      This is not geared towards “beginners”. This is geared towards someone who is making a serious attempt at a language which they believe will be practical and useful beyond their own personal needs as a learning project. I don’t expect such a programmer to get bogged down in learning C, because I expect such a programmer to have already learned C, perhaps to an expert level.

                                      The ability to rapidly prototype and explore ideas for your language is a huge positive, since it drastically lowers the cost of making an incorrect design decision.

                                      I spoke to much of the same advantages in the article when talking about the sacrificial implementation. You can write this in whatever you want, and OCaml is a fine choice. I only mention C for the second compiler, which would evolve to become the bootstrap compiler for your future, self-hosted toolchain. Having a bootstrap compiler written in C has a lot of practical benefits, which I could elaborate on more in a future article, but at this point you are no longer in the prototyping phase where rapid design iteration is so necessary. Hell, I also stated that you should be writing a specification at this point.

                                      I wonder if the article’s author has ever tried functional languages, especially as metalanguages, and if they did, I wonder what they thought of them.

                                      Yes, I have. I hate them passionately.

                                      1. 5

                                        This is not geared towards “beginners”. This is geared towards someone who is making a serious attempt at a language which they believe will be practical and useful beyond their own personal needs as a learning project. I don’t expect such a programmer to get bogged down in learning C, because I expect such a programmer to have already learned C

                                        Knowing languages doesn’t make you any better at designing programming languages. You are, for all intents and purposes, still a “beginner” when it comes to compiler implementation until you’ve written a compiler. So, you are still talking to people who are “beginners”, just not beginner programmers. Furthermore, it’s not about “not knowing C”. You won’t get bogged down because you don’t know the language; you will get bogged down because the language is made to require far more precision than is needed for implementing a language.

                                        You can write this in whatever you want, and OCaml is a fine choice. I only mention C for the second compiler . . .

                                        I assumed that you would want the sacrificial implementation written in C because you mentioned Yacc, which is made for the C and C++ languages.

                                        Yes, I have. I hate them passionately.

                                        May I ask why? And also, may I ask why you think C is good for writing languages?

                                        1. 5

                                          Knowing languages doesn’t make you any better at designing programming languages.

                                          But not knowing languages does make you worse at this.

                                          You are, for all intents and purposes, still a “beginner” when it comes to compiler implementation until you’ve written a compiler.

                                          This is why you should be prepared to throw your first one away. It has nothing to do with your C skills.

                                          You won’t get bogged down because you don’t know the language; you will get bogged down because the language is made to require far more precision than is needed for implementing a language.

                                          I don’t agree that this is a symptom of using C.

                                          I assumed that you would want the sacrificial implementation written in C because you mentioned Yacc, which is made for the C and C++ languages.

                                          I happened to write my sacrificial compiler in C, using yacc, but any parser generator would do. My article only mentions it as one possible option for this use-case. I understand how you could interpret it differently, though. To be clear: I don’t think it matters what language your first compiler is written in. You’re going to throw it away.

                                          May I ask why [you don’t like functional programming languages]?

                                          This is a deeper question than I have the patience to answer in an unrelated discussion.

                                          May I ask why you think C is good for writing languages?

                                          Again, it doesn’t really matter what language you use for the sacrificial compiler. But I do think that C has some advantages for a second compiler. It is universally supported, which makes your language easy to bootstrap. It’s also quite fast, so as long as your algorithms are fast, your compiler will be fast. Writing parsers, representing an AST, representing your type system: this is all fairly straightforward to do in C. It does have its limitations, and it would get annoying if you tried to solve every problem in the language design space with C, but that doesn’t really matter for a simple compiler. Simple is all this compiler will ever need to be, because its primary purpose is to build your third compiler. Your third compiler should be written in your own language, and at that point it’s on you to make your language suitable for any level of advanced compiler development.

                                          1. 3

                                            May I ask why [you don’t like functional programming languages]?

                                            This is a deeper question than I have the patience to answer in an unrelated discussion.

                                            As a regular (and happy) OCaml user, I’d be interested in your opinion as well. If you could write something about it on your blog some day, that would be great.

                                        2. 4

                                          I hate [“functional languages, especially as metalanguagues”] passionately.

                                          Can I ask for more information on this perspective? I’ve found functional programming protects me from so many bugs when mutating state, and I’m a little surprised that you are not on the hype train slash bandwagon.

                                      1. 8

                                        This is great, and I’ve heard good things about module systems. But how does all this work in practice? What are the applications of modules (or what approaches to programming do they facilitate)? From an external perspective, it seems like a neat gimmick.

                                        1. 6

                                          In practice, how it works in OCaml:

                                          • Every source file automatically becomes a module, e.g. note.ml becomes a module Note in the project
                                          • You don’t need to import any modules; all modules in the same compilation unit are implicitly available. There are namespacing rules to manage module names globally
                                          • Syntactic modules (defined by module Foo = struct ... end blocks) are cheap and easy to create, and semantically equal to source file modules, which encourages spinning up nested modules to namespace things properly inside source files
                                          • It’s customary to prefix values with their modules, so you’ll see a lot of List.map, Array.map, etc., instead of Haskell-style fmap
                                          • Interfaces allow managing the visibility of module contents in a very fine-grained way, e.g. which members to expose, and whether to make types concrete or abstract. They even allow quickly spinning up new types with the same memory representation, i.e. no boxing
                                          • Module-to-module functions (functors) are the way to do programming-in-the-large, i.e. generic programming, dependency injection, and many other techniques

                                          The module system is, from a high level, quite simple and consistent. This makes it very easy to reason about how it should behave. Modules are the backbone of OCaml programming.
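
                                          For instance, a small syntactic module with an abstract type behind an interface (a generic illustration, not from any particular codebase):

                                          ```ocaml
                                          (* The signature hides the representation: outside
                                             this module, a Counter.t can only be created and
                                             changed through these functions. *)
                                          module Counter : sig
                                            type t
                                            val zero : t
                                            val incr : t -> t
                                            val value : t -> int
                                          end = struct
                                            type t = int
                                            let zero = 0
                                            let incr n = n + 1
                                            let value n = n
                                          end
                                          ```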

                                          EDIT: I forgot to mention another benefit that module interfaces buy you: build speed. The technique that OCaml uses to provide that build speed goes back to Modula-2, and I’ve written about it here: https://dev.to/yawaramin/ocaml-interface-files-hero-or-menace-2cib

                                          1. 5

                                            It allows for pretty generic libraries to be written. In OCaml that’s exemplified by libraries like:

                                            • ocamlgraph, a graph library functorized over edges, nodes, labels, etc. with a large collection of algorithms;
                                            • mirage OS, a library for writing unikernels composed of a collection of functors over concrete hardware, clock implementations, tcp implementations, etc.
                                            1. 3

                                              Before reading the article, I thought this was a funny bit of satire.

                                              After reading the article, I too am curious what parameterized modules buy you. Coming particularly from Rust, where modules are very much a first-class part of the language, but not parameterizable, it’s not clear to me what that would buy me.

                                              1. 5

                                                If they are not parameterizable, how are they first class? :O

                                                OCaml Functors are just functions from structures to structures. Here is a more thorough explanation: https://www.cs.cornell.edu/courses/cs3110/2018sp/l/08-functors/notes.html

                                                What do functions buy you? Why can’t you apply whatever they buy you to modules?
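
                                                A minimal hand-rolled example (standing in for the real Set.Make in the standard library):

                                                ```ocaml
                                                (* A functor: give it any module with an ordered
                                                   type, get back a set module for that type. *)
                                                module type ORDERED = sig
                                                  type t
                                                  val compare : t -> t -> int
                                                end

                                                module MakeSet (O : ORDERED) = struct
                                                  type t = O.t list  (* sorted, no duplicates *)
                                                  let empty = []
                                                  let rec add x = function
                                                    | [] -> [ x ]
                                                    | y :: rest as s ->
                                                      let c = O.compare x y in
                                                      if c = 0 then s
                                                      else if c < 0 then x :: s
                                                      else y :: add x rest
                                                end

                                                (* Apply it to get a concrete module. *)
                                                module IntSet = MakeSet (struct
                                                  type t = int
                                                  let compare = compare
                                                end)
                                                ```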

                                                1. 1

                                                  Ahh, I see now. In Rust and Haskell, such parameterized structs, AKA Generics, are provided by algebraic data types, while the module system enables scoping and hiding. Looks like it’s primarily a different terminology for essentially the same thing.

                                                  1. 3

                                                    No, ADTs usually do not enable this. The Haskell analogue for this is called Backpack, and it was modelled after the OCaml system. There is a thesis documenting it.

                                              2. 1

                                                Functors (parameterized modules) are essential in OCaml to implement something like maps or sets over a data type where the implementation needs access to properties like the order of the element type. Polymorphism makes it possible to implement data structures (like lists or pairs) that don’t depend on specific properties of the element type. When an implementation does need those properties, there is no mechanism other than providing them as a functor argument. Haskell does not have this problem because type classes can be used to provide access to order, equality, or a show function.

                                              1. 28

                                                1: I want to stay in the terminal

                                                2: When I edit code, I don’t want anything on my screen except the code.

                                                3: I don’t want to use a mouse.

                                                4: I want to be able to fully customize everything.

                                                5: The default Vim keybindings are already super awesome.

                                                I like a system to be composed of parts that do one thing well. When it comes to editing text, Vim does this better than any other software I tried. For everything else, I have other command line tools that are at my fingertips.

                                                Regarding keybindings: I don’t know how most developers translate something like “Ok, I want to cut out this line and put it at the end of the file” into something their “IDE” understands. In Vim, you type ddGp and are done. It takes a blink of an eye. After a while, most text editing tasks just roll off your fingers without you thinking about it. It is a magic feeling of productivity I never had in any other editor.

                                                1. 8

                                                  Vim is my go-to text editor, so this isn’t a hit job. But Vim is not good at composing third-party tools to create an IDE-like experience. Vim is painful to extend and script, but a pleasure to drive.

                                                  1. 5

                                                    Vim is painful to extend and script but a pleasure to drive.

                                                    That’s part of what keeps the default experience at least moderately sane.

                                                  2. 3

                                                    Personally, I’ve always felt that examples like “Ok, I want to cut out this line and put it at the end of the file” are a bit contrived. I have never wanted to do that with my code. It would break my code. It sounds more like an easy vim golf exercise. Typically, at least for me, who is at best a vim novice, navigating a project and global search-and-replace are things I’ve found vim to suffer at, and among the main reasons I continue to use VS Code.

                                                    1. 5

                                                      In my experience, it applies to all text editing task. Not only to contrived examples.

                                                      Moving lines (or larger parts of a file) around happens pretty often. And it’s not just the end of the file that’s easy to reach. Say you want to move the current line below the line with “hello” in it. Then it becomes dd/hello[enter]p, which means:

                                                      dd = cut the current line
                                                      /hello = search for hello
                                                      p = paste below
                                                      

                                                      And you quickly learn to use it like a language. Want to cut the current paragraph instead? “dap” = “cut the current paragraph”. Cut the current word? “daw”. Cut the current block? “daB”. etc etc. Want to copy instead of cut? Use “y” instead of “d”. Want to paste above instead below? Use “P” instead of “p”. With just a handful of these verbs, you start to communicate with the editor like you could have never imagined before.

                                                      1. 2

                                                        Cut the line, find “hello”, escape, move down, paste line.

                                                      2. 2

                                                        No particular example of Vim usage is a massive time saver. Sure, copying stuff to the end of the file can be done pretty fast without vim. Sure, you can change the code in parentheses pretty fast, too. And you can delete a couple of arguments from a function just fine in any editor. But with Vim, these actions are nearly instant:

                                                        • dGp
                                                        • ci(
                                                        • 2df,

                                                        The last two are examples of the actions that I personally commonly perform when editing code. The real benefits start appearing because you can do everything this fast. Most actions require only a couple of keystrokes. And the keystrokes are arranged in such a way that you can come up with actions appropriate to any situation. Because of this, you don’t even have to think about what you’re doing: you’re editing text at the speed of your own thought. There’s no “press the left button to expand selection until the comma”, there’s no “drag mouse to select line”. It’s just… done. And that’s what makes me reach for vim key bindings even in other editors.

                                                      3. 1

                                                        Cut (or yank) the line, move to bottom of file, paste (or unyank).

                                                        1. 1

                                                          I don’t know how most developers translate something like “Ok, I want to cut out this line and put it at the end of the file” into something their “IDE” understands.

                                                          Home, Shift + End, Ctrl + X, Backspace, Ctrl + End, Enter, Ctrl + V (or press right arrow after the Shift + End while holding Shift, then you can skip the Enter as well). Not as efficient as vim, but it’s easier to learn in my opinion (I’m a vim user myself).

                                                          1. 1

                                                            You can drop the Home, Shift + End, and Backspace keys. Almost all editors default to copying/cutting the whole line if there’s no active selection.

                                                        1. 8

                                                          Regarding sum/product, I recently saw this: https://mail.haskell.org/pipermail/libraries/2020-October/030862.html

                                                          It seems like there’s a proposal to make sum and product strict. So perhaps some time in the future Haskell will have one less wart :)

                                                          1. 8

                                                            Yep, and a merge request is in the works as well, so we’re pretty much done with that point. :)

                                                            1. 2

                                                              These seem like some good improvements!

                                                              Let’s hope that other programming communities (like Rust) can use this as a learning opportunity!

                                                              1. 0

                                                                sum, product, and fold are already strict in Rust.

                                                                1. 1

                                                                  I meant fixing things in general.

                                                            1. 4

                                                              The biggest unsolved issue in programming language design is a social one:

                                                              How to stop “well-intentioned” people from suggesting/demanding/adding features until the language collapses under its own complexity.

                                                              For this reason, I believe it is unlikely that there will be any lasting progress in language design within my lifetime.

                                                              1. 2

                                                                Clojure does this well. Want something added? Make it a library. No adding operators to the core language or changing the syntax.

                                                                This is related to https://erikbern.com/2016/12/05/the-half-life-of-code.html because Clojure looks more like each year adds a thin layer of bedrock.

                                                                1. 2

                                                                  Remove the social element and have a benevolent dictator? Jonathan Blow’s Jai language looks interesting and takes such an approach. (Now if only he’d release a publicly available beta)

                                                                  1. 1

                                                                    That may help for your personal project, but doesn’t stop the decay of languages (that many people depend on) out there.

                                                                    E.g. if the reaction to not wanting another feature is basically HOW DARE YOU, EXPLAIN YOURSELF, then there is no chance of ever getting language growth under control.

                                                                    1. 1

                                                                      Elm’s Evan (I think that’s his name) is another language dictator. I think Elm is quite a nice language, and is extremely simple. A lot of thought goes into each new feature of the language. However, people have voiced concerns with Evan’s leadership; I don’t quite know enough about that.

                                                                  1. 4

                                                                    What about adding a “self” tag? Enforce tagging one’s own content if submitted, else ban/remove/etc.

                                                                    1. 9

                                                                      There is a “self” option on the submission page. Maybe the solution would be to display that somewhere in the UI so that it’s more obvious. As to enforcement, I think @pushcx has been really good at policing blatant and obvious self-promotion, and I think he catches on quickly to members of the community who are here solely for self-promotion.

                                                                      1. 14

                                                                        It’s already very obvious: “authored” vs “via”

                                                                        1. 5

                                                                          Interesting, and extremely non-visible. Some sort of UX fail. It should appear like a flag; then it’d have great visibility and, to top it off, allow filtering.

                                                                          1. 11

                                                                            I thought it was quite obvious, especially with the colors (blue for author, green for fresh account, and black otherwise).

                                                                            1. 4

                                                                              I have to be honest, I didn’t even know what the colors were for (thought it was green for admins for some reason, I guess Reddit influence) before reading your comment.

                                                                              1. 1

                                                                                I totally missed it until it was pointed out.

                                                                                1. 1

                                                                                  I had no idea that the colors meant anything. Is there a doc somewhere that explains all of these “obvious” UI indicators?

                                                                                  1. 1

                                                                                    You kind of grok it from context.

                                                                                    A green username’s profile will say “new user” or similar.

                                                                                    A blue username is subtly echoed in the “Authored by” text at the top of a comment page.

                                                                                    But yes, maybe this should be explicitly mentioned in the About page.

                                                                              2. 1

                                                                                I’m not sure it’s that obvious. I’ve been on lobsters for a while now, and this is the first time I’ve become aware that there is a difference between “authored” and “via”, with a distinct meaning for each.

                                                                            2. 2

                                                                              Maybe a good compromise.

                                                                              1. 1

                                                                                Does it even cause any visible difference?

                                                                                I don’t think it does, or rather, I’ve never noticed it if so.

                                                                                It would be a start to make this visible. Something to consider before taking the next step suggested by the OP.

                                                                                1. 5

                                                                                  9 of the 25 entries in /newest are “authored by” as opposed to “via”.

                                                                                  It’s definitely something I note when looking at a submission.

                                                                                  1. 3

                                                                                    You can filter out tags; if you don’t want “authored by” posts, filter them out.

                                                                                    I don’t use filters myself, and I wouldn’t use this one as I think some of the coolest posts are authored by this community itself, but it gives OP and others the choice to stop getting that.

                                                                                    1. 4

                                                                                      I don’t think it’s possible to filter on the state “authored by/via”.

                                                                                      I’m tentatively positive about supporting new functionality to do so, but it would have to be created as a pull request, as it’s a new feature.

                                                                                      1. 1

                                                                                        That’s why I suggested a tag, even if it uses the “I authored this post” checkbox data.

                                                                                        1. 1

                                                                                          Ah ok, I missed that context!

                                                                                      2. 3

                                                                                        IMO filtering by “self” would be an anti-feature. Stories should be judged on their own merit, not by who submitted them.

                                                                                        1. 1

                                                                                          Fair enough; the thing is, flags are opt-in. It’s your choice to filter out self-posting.

                                                                                          Edit: I actually agree. In fact, I’ve posted my own content here many times in the past, and I wouldn’t filter a “self” tag; I was just thinking of an easy solution for OP’s problem, which I also understand.

                                                                                  1. 1

                                                                                    In my opinion, the “sparks joy” thing is taken too far. We have “sparks joy” in the intro, we have “sparks joy” for object allocations, we have objects that “spark joy” because they’re “necessary”. It feels excessive, and almost made me stop reading.

                                                                                    However, the rest of the article is great! Those were some nice examples with actual, measurable performance benefits. I appreciated the mention of statistical significance, too.

                                                                                    I don’t know much about ruby, but in the first example, you use ||= for creating the cache hash. Doesn’t this mean that there’s an “if truthy” comparison each time a column key is looked up? Can’t this cache hash be created at object initialization time? Would doing so make any measurable difference?

                                                                                    Thanks for sharing!

                                                                                    1. 3

                                                                                      Thanks for reading!

                                                                                      The “sparks joy” gimmick makes more sense in the talk format, as I’m trying to get the attention of people who have been in a seat listening to talks for like 8 hours. Gotta have repetition and some hooks so people stay interested. Also it was written and given in 2019, at the height of the “sparks joy” meme cycle. It’s supposed to feel “in the moment” and obviously that moment has passed.

                                                                                      However, the rest of the article is great! Those were some nice examples with actual, measurable performance benefits.

                                                                                      Thanks!

                                                                                      I appreciated the mention of statistical significance, too.

                                                                                      This is something I’ve grown to care more about, especially as I do more perf work in open source. Answering the question “how much faster is this REALLY” is a tough question that led me to learn a lot more about statistics.

                                                                                      ||= for creating the cache hash. Doesn’t this mean that there’s an “if truthy” comparison each time a column key is looked up?

                                                                                      This is used for memoizing the value. It’s common to do something like this:

                                                                                      def memoize_me
                                                                                        @value ||= begin
                                                                                          print "Value being generated now "
                                                                                          "hi there"
                                                                                        end
                                                                                      end
                                                                                      
                                                                                      puts memoize_me # => "Value being generated now hi there"
                                                                                      puts memoize_me # => "hi there"
                                                                                      

                                                                                      Essentially we don’t want to generate a new hash on each call. The hash is built lazily, so if the code never calls respond_to then it’s never allocated.

                                                                                      Can’t this cache hash be created at object initialization time?

                                                                                      Totally could be. It’s a matter of taste and tradeoffs.

                                                                                      1. 1

                                                                                        Yes, the gimmick makes much more sense in the context of a talk!

                                                                                        This is used for memoizing the value. It’s common to do something like this

                                                                                        Right! My question was: doesn’t ||= effectively translate to x = x || y, and won’t || check whether the first argument is truthy or falsy (effectively checking if the hash is nil)? If so, I was wondering if that check on every call of the function would make any noticeable impact.
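                                                                                        (For reference, a quick sketch of my own, not code from the article: a ||= b behaves like a || (a = b), so the assignment only runs when a is nil or false, but the truthiness check itself happens on every call.)

```ruby
# a ||= b behaves like a || (a = b): assign only when a is nil or false.
cache = nil
cache ||= { greeting: "hi" }   # cache is nil, so the hash is allocated
first_id = cache.object_id

cache ||= { greeting: "bye" }  # cache is truthy: no new hash, no reassignment
raise "unexpected reassignment" unless cache.object_id == first_id

puts cache[:greeting]  # => hi
```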

                                                                                        Thanks for your response :)

                                                                                        1. 2

                                                                                          If so, I was wondering if that check on every call of the function would make any noticeable impact.

                                                                                          Yep it will have an impact:

                                                                                          require 'benchmark/ips'
                                                                                          
                                                                                          class Thing
                                                                                            def initialize
                                                                                              @no_memoize_hash_lookup = {}
                                                                                            end
                                                                                          
                                                                                            def memoize_hash_lookup(name_symbol)
                                                                                              @memoize_hash_lookup ||= {}
                                                                                              @memoize_hash_lookup[name_symbol]
                                                                                            end
                                                                                          
                                                                                            def no_memoize_hash_lookup(name_symbol)
                                                                                              @no_memoize_hash_lookup[name_symbol]
                                                                                            end
                                                                                          end
                                                                                          
                                                                                          thing = Thing.new
                                                                                          
                                                                                          Benchmark.ips do |x|
                                                                                            x.report("memoize   ") { thing.memoize_hash_lookup(:lol) }
                                                                                            x.report("no memoize") { thing.no_memoize_hash_lookup(:lol) }
                                                                                            x.compare!
                                                                                          end
                                                                                          
                                                                                          # Warming up --------------------------------------
                                                                                          #           memoize      884.894k i/100ms
                                                                                          #           no memoize     1.344M i/100ms
                                                                                          # Calculating -------------------------------------
                                                                                          #           memoize         9.376M (± 3.0%) i/s -     46.899M in   5.006905s
                                                                                          #           no memoize     12.801M (±12.5%) i/s -     63.156M in   5.054110s
                                                                                          
                                                                                          # Comparison:
                                                                                          #           no memoize: 12801103.5 i/s
                                                                                          #           memoize   :  9375917.6 i/s - 1.37x  (± 0.00) slower
                                                                                          

                                                                                          I don’t know if this microbenchmark would translate to measurable full request/response performance, but the next step would be to make a commit and try it out on CodeTriage with derailed_benchmarks. As you can see, even the “slow” version is pretty fast: it would have to be called roughly 9,375 times in a request to take up 1ms of request time.

                                                                                          Also, there’s an even faster way to do this performance optimization in Ruby 3.0 once it comes out: you can get a frozen string from a symbol directly using Symbol#name. https://blog.saeloun.com/2020/09/09/ruby-adds-name-method-to-symbol.html
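                                                                                          To illustrate the difference (my own sketch, assuming Ruby 3.0+): Symbol#name returns the symbol’s frozen, interned string, while Symbol#to_s allocates a fresh, mutable String on every call.

```ruby
sym = :status

# Symbol#name (Ruby 3.0+) returns the frozen string interned for the symbol,
# so repeated calls don't allocate.
puts sym.name          # => status
puts sym.name.frozen?  # => true

# Symbol#to_s builds a new, unfrozen String each time it is called.
puts sym.to_s.frozen?  # => false
```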

                                                                                    1. 18

                                                                                      I think it’s worth pointing out that Wren is made by the creator of Crafting Interpreters. Perhaps that’s a bit of an argument from authority, but the author clearly knows a lot about creating languages.

                                                                                      1. 4

                                                                                        I’ve followed the second part (the bytecode VM implementation) of this book in C++, and I can say it’s been one of my favorite project-style book walkthroughs so far! A good combination of learning low-level things and parsing at the same time, in a very pragmatic way. I think the information is also very relevant to current-day systems, since they are often running such a VM somewhere or other.

                                                                                        edit (and maybe tangential): Godbolt link for what I’ve done so far: https://godbolt.org/z/5GbhnK I followed it up to control structures, then did the NaN-tagging bit and added ‘indirect threaded’ dispatch. It actually comes out competitive on arithmetic-loop benchmarks with other real (JIT-off) VMs (times included for benchmarks vs. Lua{,JIT}, v8, …). I’ve been interested in building out a VM that doesn’t do GC (or does it rarely/differently) and has easy interop with C-struct-style memory. E.g. if it’s just for a per-frame rule in an ECS-y game, it does some logic and calls system functions to save out data, so management of the layout and lifetimes of longer-term data lives in the ECS in the native language, or is described some other way (scene editor tool / …). SPIR-V is an interesting format in this space too (it’s cool that you literally define types up front with e.g. OpTypeVoid and OpTypeStruct and then can do e.g. OpAccessChain…).

                                                                                        1. 3

                                                                                          Nystrom is also one of the main guys behind Dart (and a good twitter follow to boot).

                                                                                          1. 24

                                                                                            I wouldn’t describe myself as one of the “main guys”. I was not one of the original language designers and only joined the language team as a full member much later. I’m just higher profile because I wrote more stuff down and use Reddit and Twitter more. :)

                                                                                            1. 1

                                                                                              Is there a chance of the printed book being under a Christmas tree in 2020? ;-)

                                                                                              1. 2

                                                                                                I would love that to be the case, but it seems unlikely. I’m making steady progress on the last editing pass and the typesetting, but it’s a big project and 2020 is not exactly playing nice with anyone’s goals.

                                                                                          2. 2

                                                                                            I loved the second part, and it enabled me to write a small VM + “compiler” in Rust for a proof language I’m working on, in a few thousand lines (although it uses fixed-size instructions, à la Lua, rather than pure stack instructions). I found the book very interesting and well written, kudos u/munificent!

                                                                                          1. 13

                                                                                            I’m trying to email more creators whose content I adore. I consume a lot of blogs and video content but don’t reach out that often.

                                                                                            For me, an email from someone who reads my blog means so much more than an integer going up by 1 in an analytics panel. An encouraging message gives me the stamina to write five more posts!

                                                                                            1. 5

                                                                                              I don’t have analytics at all on my personal site, wiki, or blog. It’s a pointless waste of time to look at those numbers going up/down, imo. What matters are the interactions you get from the content you produce, or sales made. But never viewership analytics, I’ve found.

                                                                                              It’s also addictive to attach any kind of metrics to anything, so whatever metrics you do add should be actionable. But with viewership metrics, the kind of signals you get are rarely ever useful.

                                                                                              Actually, ignore all the above; I read through your post on building privacy-focused analytics and it’s great. I forgot about the “Where users are referred from” part, which is indeed super useful. I decided to reply to you because analytics is something I’ve always struggled with in the past, since I found I wasted more time ‘checking’ them than actually deriving value from them.

                                                                                              1. 2

                                                                                                with viewership metrics, the kind of signals you get are rarely ever useful

                                                                                                I agree with this. Getting people to click an article does not mean that the clickers found it to be helpful content.

                                                                                                Time spent on a page can help see which pages people linger on but again it doesn’t confirm that the time spent resulted in value gained for the reader.

                                                                                                What matters are the interactions you get from the content you produce or sales made

                                                                                                Yes!

                                                                                                Where users are referred from

                                                                                                As you say, I have found this helpful because I can reply to/contribute to communities who enjoyed a post.

                                                                                                I found it to waste me more time ‘checking’ them than actually deriving value from them

                                                                                                I’ve had to work on this too!

                                                                                              2. 1

                                                                                                I totally agree. I’ve long since turned off analytics on my site, and as far as I know, nobody reads the content that I post. This impression is occasionally broken by somebody emailing me, and it’s great!

                                                                                                I just wish there was more feedback on what people write. Sometimes I wonder if my content has obvious flaws that I’m blind to. Nobody has ever e-mailed me a correction, but I know better than to assume that means there is nothing wrong.

                                                                                              1. 2

                                                                                                  How is this kind of post permitted but @hblanks’ post about experience with job interviews is not?

                                                                                                1. 4

                                                                                                  Bumping up against this aspect of the community can be frustrating. I have certainly found that whenever I post anything about WSL :)

                                                                                                    However, in this case I can see where people are coming from: looking at the question alone might make you think “How does this belong on lobsters?”, but if you look at the answers there’s always some incredible technical content in there, as well as opportunities for people to collaborate and learn from each other that aren’t possible in any other type of post.

                                                                                                  1. 1

                                                                                                    So the only value here is the potential for technical content in the comments, but there’s a belief that there would be no value in the comments on a post about a job interview experience. Seems like some weird gatekeeping.

                                                                                                    1. 5

                                                                                                      I can’t decide whether you’re just bitter about your friend’s post getting negative feedback, whether you hate the “What are you doing?” posts, or whether you’re just generally voicing displeasure about the community.

                                                                                                      Either way, please recognize that there is no one “gatekeeper” on Lobsters. It’s a community, with all kinds of disparate desires and beliefs about what is relevant, important, etc.

                                                                                                        Speaking from experience here, I would suggest that you or your friend consider taking the negative feedback on that particular article in stride and focus on contributing constructively in other ways. Easier said than done sometimes, I know.

                                                                                                      1. 4

                                                                                                        Zoom in close enough on any line and you will find the edges to always be fuzzy. We’re only human, after all.

                                                                                                    2. 4

                                                                                                      This kind of post is a rare special case that helps build the community. It’s effectively a way to contain off-topic discussion, without making lobste.rs completely sterile. Part of why I like this website is that I start to recognize the people that visit.

                                                                                                      Also, consider that the post by the user you mention was (from what I understand) an entire thread about their personal experience. This thread is a discussion thread, where people’s individual contributions are comments. So it’s smaller in scale in that sense.

                                                                                                    1. 2

                                                                                                            So we have 11 bits of precision, which is log10(2**11) digits of precision, which is roughly 3? What kind of computations need only 3 digits of precision? I’m genuinely curious, since it must have a use (otherwise it wouldn’t have been added).
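                                                                                                            A quick sketch in Python to check that arithmetic (it’s log base 10 that converts bits to decimal digits):

```python
import math

bits = 11  # significand bits of an IEEE half float (10 stored + 1 implicit)

# decimal digits of precision carried by 11 bits: log10(2**11)
digits = bits * math.log10(2)

# relative precision: one part in 2**11
rel = 2 ** -bits

print(f"{digits:.2f} decimal digits, ~{rel:.4%} relative precision")
```

                                                                                                            So 11 bits is about 3.3 decimal digits, or roughly one part in two thousand.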

                                                                                                      1. 7

                                                                                                              Deep learning models have been shown not to lose significant accuracy when using bfloat16/half precision. Furthermore, for the same memory you can build models that are twice as large, sometimes giving a performance increase.

                                                                                                        1. 4

                                                                                                                IIRC half float has seen a bunch of use in graphics for HDR rendering, where the extra range is very useful. 11 bits is fine for that, since 10-bit-per-channel monitors are rare.

                                                                                                          1. 1

                                                                                                            Why not just use a 16-bit uint and then scale it down to the device depth at the end? (In other words, fixed-point with a range 0-1.) That gives you around 200x the color resolution of a display, which seems like more than enough, without the overhead of dealing with FP.

                                                                                                            1. 4

                                                                                                              The goal when doing HDR rendering is to be able to represent sunlight (~100k lux) and candlelight (~10 lux) and maybe even moonlight (~0.1 lux or less) in a single image.

                                                                                                                    Half float gives you 5 bits of exponent, so the smallest and largest representable finite non-zero numbers are roughly a factor of a billion apart (counting only normal numbers; subnormals stretch this further). 16-bit linear gives you only a factor of 65535 between the smallest and largest representable non-zero values.

                                                                                                              HDR doesn’t demand more precision, it demands more range.

                                                                                                              After rendering the scene to a HDR buffer, post processing steps (bloom, exposure control) will be used later to transform the high dynamic range image into something convincing-feeling that fits on your monitor’s narrow dynamic range.
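                                                                                                                    The range comparison can be sketched directly from the binary16 format (5 exponent bits, 10 stored significand bits); this ignores subnormals, which widen the float range even further:

```python
# IEEE 754 binary16 limits, derived from the format itself
max_half = (2 - 2**-10) * 2**15   # largest finite half float: 65504.0
min_normal = 2**-14               # smallest positive normal half float

# dynamic range of half floats vs 16-bit linear (unsigned integer)
half_range = max_half / min_normal   # about a billion
linear_range = 65535                 # largest / smallest non-zero value

print(f"half float: {half_range:.2e}, 16-bit linear: {linear_range:.2e}")
```

                                                                                                                    That ratio of roughly a billion to one comfortably spans sunlight to moonlight, which 16-bit linear cannot.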

                                                                                                          2. 3

                                                                                                                    My encounters with f16 types have mainly been in neural net stuff, where using twice as many f16s often gets better performance than the same number of bytes of f32s. Something something more degrees of freedom at lower resolution mumble mumble. 11 bits gets you roughly 0.05% relative precision, so the per-value precision isn’t that bad; errors just add up quickly.
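                                                                                                                    The “errors add up quickly” point can be illustrated with a crude model of 11-bit rounding (a sketch; the `round_to_bits` helper is hypothetical and ignores subnormals and overflow, so it only approximates real f16 behaviour):

```python
import math

def round_to_bits(x, bits=11):
    """Round x to the given number of significand bits (crude f16-like model)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (e - (bits - 1))
    return round(x / scale) * scale

# Sum 10000 copies of 0.1, rounding the running total after each addition,
# the way a half-precision accumulator would.
total = 0.0
for _ in range(10000):
    total = round_to_bits(total + 0.1)

print(total)  # stalls far below the exact 1000.0
```

                                                                                                                    In this model the sum stalls at 256: once the total reaches 256, adjacent representable values are 0.25 apart, so adding 0.1 rounds straight back to the same number. The usual fix is to accumulate in a wider type (f32) even when the inputs are f16.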

                                                                                                          1. 3

                                                                                                            While designers recommend not using system or free fonts, I’ve seen a lot of web developers push for using system fonts. The time it takes to load an external (to the user’s computer) font leads to either a flash of unstyled text (the system font is shown until the real font finishes loading) or a flash of invisible text (the text doesn’t show up until the font finishes loading). Both of these are jarring, and neither can be eliminated completely.

                                                                                                            1. 1

                                                                                                              It’s also usually a one-time thing, FWIW. Most sites do it badly, though.

                                                                                                              1. 1

                                                                                                                True. It’s funny, that’s the NoScript experience, by default, to see system fonts everywhere. I’ll be left with sans-serif when their CSS assumes the webfont (named first) loaded ok. It’s especially funny on a font’s website, as they try to demo & sell their font in .. sans-serif, like trying to sell me a better TV while I’m watching my old TV.

                                                                                                                Lobsters has a good font-family, I’d say.

                                                                                                              1. 0

                                                                                                                Naming a language “Video” sure makes for difficult Googling…