1. 2

    In my opinion, you use “code editor” to equivocate between two different tools. One is for programmers, and it facilitates programming. We usually call these IDEs. The other tool is for product designers, and facilitates product development without programming. This is closer to the prototyping or WYSIWYG tools of today.

    I think this equivocation is a disadvantage when it comes to answering some of your initial questions:

    What metric can we even use to measure the perfect code editor? How will we know if and when we have it? Are we close to reaching that point?

    You can’t hold IntelliJ and Figma to the same standard, nor VSCode and Squarespace.

    This is why the problem statement appears contradictory: two different solutions to two different problems are compared as if they were of the same kind.

    1. 3

      I haven’t personally experienced it yet, but I think that this style of focus is ultimately needed for a successful project. In my opinion, it’s better to serve a few people’s needs excellently than to serve many people’s needs mediocrely. Also, trying to design up front for so many hypothetical use-cases is a good way to not finish the project.

      1. 3

        The description of a hypermedia API is how I imagine language servers should work. Are there any experienced LSP devs who’d like to weigh in on that?

        1. 4

          Wow, it’s cool that something like https://developers.google.com/tech-writing exists. I have a sense of what I consider to be “good technical writing”, and I’m excited by the possibility that other people have given words to that sense. Does anyone know of other resources about technical writing?

            1. 2

              Thank you :)

          1. 2

            What I really want is something like Python (or like Haskell, depending on the circumstances) where running shell commands is a “first-class” language construct, with ergonomic piping of stdout/stdin/files. Oil seems much improved compared to Bash, but it (intentionally, for good reason) carries over many of the same bad design choices with regard to control flow and cryptic naming schemes.

            The best solution I’ve found so far is to call subprocess.run from a Python script and use string interpolation to pass arguments to the commands I want to run. But this is far from ideal.
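A safer variant of that pattern (a sketch, not the poster’s actual script): passing an argument list to subprocess.run sidesteps the quoting and injection problems that come with string interpolation, since each element reaches the program as one verbatim argv entry.

```python
import subprocess

def run(program, *args):
    """Run `program` with explicit argv entries; each argument is passed
    through verbatim, so no shell quoting or interpolation is needed."""
    result = subprocess.run(
        [program, *args],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError on a nonzero exit code
    )
    return result.stdout
```

With this, `run("echo", "hello world")` passes the space-containing string as a single argument, where an interpolated shell string would have split it in two.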

            1. 8

              Why not Perl? It’s the obvious choice for a language where shell commands are a “first-class” language construct.

              1. 3

                Totes agreed!

                I find shell logic and data structures impenetrable and error-prone, so I write all my “shell scripts” in Perl. It’s easy to pull shell output into Perl data structures using backticks, e.g. to print the names of files in $TMPDIR iff they are directories:

                chomp(my @out = `ls $ENV{TMPDIR}`); for (@out) { print "$_\n" if -d qq[$ENV{TMPDIR}/$_]; }
                

                For non-Perl users, the backticks above mean “run this shell command”, chomp removes the newlines, qq[] means “wrap this in double quotes and handle the annoying escaping for me”, and $ENV{X} is how you access env var X, etc.
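For comparison, here is a rough Python equivalent of the snippet above (my own sketch; os.listdir stands in for the backticks and chomp):

```python
import os

def list_dirs(path):
    """Return the names of entries under `path` that are directories."""
    return [name for name in sorted(os.listdir(path))
            if os.path.isdir(os.path.join(path, name))]

# Print directory names under $TMPDIR (falling back to /tmp)
for name in list_dirs(os.environ.get("TMPDIR", "/tmp")):
    print(name)
```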

                The book Minimal Perl for UNIX and Linux People has a lot of information about how Perl can be used to do everything on unices that awk, grep, sed, etc. can do.

                Also, with Perl you get a real debugger (see perldoc perldebtut or the book Pro Perl Debugging).

                1. 3

                  I think it easily gets overlooked these days, since it is no longer hip and therefore gets less front-page news.

                  1. 5

                    Apropos of “no longer hip”, tcl is a nice scripting language with a clean syntax.

                    1. 1

                      I used to mess with tcl scripts all the time in the early 2000s when I ran eggdrop irc bots. Good old days…

                      1. 4

                        Sqlite started as a tcl extension and still has a good tcl interface: https://sqlite.org/tclsqlite.html

                        1. -2

                          TIL!

                2. 3

                  Depending on your use cases, have you ever tried Fish, or more on the programming side, Raku/Perl? Raku is awesome for shell-style scripting.

                  1. 2

                    Hm what do you mean by control flow? I’m working on more docs so it probably isn’t entirely clear, but control flow looks like this

                    if test --dir $x {
                      echo "$x is a dir"
                    }
                    

                    And you can use expressions with ()

                    if (x > 0) {
                      echo "$x is positive"
                    }
                    

                    For loops are like bash with curly braces

                    for x in a b c { echo $x }  # usually written on 3 lines, being compact here
                    for x in *.py { echo $x }
                    for x in @(seq 3) { echo $x }  # explicit splitting rather than $(seq 3)
                    

                    I’m also interested in feedback on naming. Oil has long flags like test --dir instead of test -d. I’m trying to get rid of all the one-letter obscurity (or at least ALLOW you to avoid it if desired).


                    It’s accurate to describe Oil as “Python where running shell commands is first class” (and pipelines and more).

                    Although I would take care to call it perhaps the “David Beazley” subset of Python :) He is a prominent Python author who has been lamenting the complexity of Python.

                    He writes some great code with just numbers, strings, lists, dicts, and functions. Oil is like Python in that respect; it’s not like Python in the sense that it has a meta-object protocol (__iter__ and __enter__ and __zero__ and all that).

                    I’m interested in language feedback and if that matches your perception of Oil :) Does the code look cryptic?

                    1. 3

                      I’ve only had a cursory look at Oil, so forgive me if I have the wrong idea :). I really admire the effort, and I can really relate to what you’ve written about your motivations for creating Oil!

                      Oil is also aimed at people who know say Python or JavaScript, but purposely avoid shell

                      It seems like Oil will be great for people who spend a lot of time maintaining shell scripts and want to work with something a little more sane. However, the Oil docs give the impression that Oil will be just as confusing as Bash for someone like me who wants to spend as little time as possible writing / reading / maintaining shell scripts and who only finds occasion to do so once every few months. For someone like me, I wish there were some syntax-level construct in Python or Haskell for running shell commands that lets me take advantage of my existing knowledge of the language, rather than having to learn something entirely new.

                      I took another look at the Oil docs, and the control flow actually seems ok, even if minimal. It would be cool to see some more tools to map / flatmap / fold commands, especially when I have a bunch of parameters that are used to generate a sequence of commands.

                      I think the biggest improvement a new shell-scripting language can offer is an alternative syntax for specifying command-line flags. Now that I think of it, this is actually the main reason I wrap all my CLI calls in Python scripts. For instance, here are some commands I had to run at work last week:

                      docker run -itd --name container_name --rm -e CUDA_VISIBLE_DEVICES=0 -e API_KEY="..." -v /home/ben:/ben/ -v /data/v2/:/data image_name
                      
                      docker exec -d container_name sh -c 'mkdir -p /ben/train/train_2021-05-16_144614 && python /ben/project_name/train.py \
                                      --batchSize 16 --loadSize 160 \
                                      --alpha 1.0 \
                                      --annotations /data/project_name/sourceA/v3/really_long_name.txt \
                                      --annotations /data/project_name/sourceB/v1/really_long_name.txt \
                                      --annotations /data/project_name/sourceC/v4/really_long_name.txt \
                                      --data-path /data/project_name/images \
                                      --output-path /ben/train/train_2021-05-16_144614/checkpoints \
                                      --gpu_ids 2 \
                                      2>&1 > /ben/train/train_2021-05-16_144614/log_2021-05-16_144614_outer.txt'
                      

                      I actually had to run the exec command four different times with different arguments. I wrote a Python script to generate the Python command, wrap it in the docker command, and pass in all the necessary parameters with string interpolation. But to me this seems like a pretty silly way to glue together different programs, when we already have familiar syntax for function calls! I would love to write instead:

                      var env_variables = [
                        CUDA_VISIBLE_DEVICES=0,
                        API_KEY="..."
                      ];
                      
                      var volumes = [
                        /home/ben/:/ben/,
                        /data/commondata/v2, /data
                      ];
                      
                      docker.run(
                        flags="interactive, tty, remove",
                        env=env_variables,
                        name="container_name",
                        image="image_name",
                        volume=volumes
                      );
                      
                      ...
                      
                      image_annotations=[ /data/project_name/sourceA/v3/really_long_name.txt, ... ]
                      
                      python_command=python(train.py,
                        batchSize=16, loadSize=160, alpha=1.0,
                        annotations=image_annotations,
                        data-path=/data/project_name/images,
                        output_path=...
                        gpu_ids=2,
                      )
                      
                      docker.exec(
                        detached=True,
                        container=container_name,
                        command=python_command
                      )
                      

                      This should work, automatically, for ANY command line interface, not just docker. It should work without writing any custom wrapper code. Without actually running the command, I want some kind of TypeScript-style static type-checking to know if the flag/argument I provided does not match what the CLI expects.
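The dynamic half of that wish is easy to sketch in plain Python (hypothetical helper names; the static type-checking against each CLI’s real flag set is the hard part, and this does not attempt it):

```python
def to_argv(program, *positional, **flags):
    """Hypothetical sketch: render keyword arguments as long-style CLI
    flags so any command line can be built with function-call syntax.
    Underscores become dashes; True emits a bare boolean flag; lists
    become repeated flags (like the repeated --annotations above)."""
    argv = [program]
    for key, value in flags.items():
        flag = "--" + key.replace("_", "-")
        if value is True:
            argv.append(flag)
        elif isinstance(value, (list, tuple)):
            for item in value:
                argv.extend([flag, str(item)])
        else:
            argv.extend([flag, str(value)])
    argv.extend(str(p) for p in positional)
    return argv
```

For example, `to_argv("docker", "image_name", name="c", rm=True)` yields `["docker", "--name", "c", "--rm", "image_name"]`, which could then be handed to a subprocess runner. Checking flags without running the command would additionally require a machine-readable spec of each tool’s interface.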

                      1. 1

                        Thanks this is useful feedback!

                        Yes, I understand that Oil feels like it’s for the “shell expert” right now, because I frame many things in terms of an upgrade path from bash, and that requires knowing bash (lots of people don’t!).

                        But it’s absolutely the goal for a Python/JS programmer to be able to pick up Oil with minimal shell knowledge. You have to understand argv, env, stdin/stdout/stderr, and exit code, and that’s it hopefully. Maybe file descriptors for the advanced.

                        I have an upcoming “Tour of Oil” doc that should help with this. It will document Oil without the legacy stuff.

                        It would be cool to see some more tools to map / flatmap / fold commands, especially when I have a bunch of parameters that are used to generate a sequence of commands.

                        Yes, great feedback, there is a pattern for this but it’s probably not apparent to newcomers. In shell map / flatmap are generally done by filtering streams of lines, and I think that pattern will help.

                        https://github.com/oilshell/oil/issues/943

                        There is also splicing of arrays, which are natural for command line args

                        var common = %( --detached=1 --interactive )
                        var args1 = %( --annotations /data/1 )
                        var args2 = %( --annotations /data/2 )
                        
                        # Run 2 variants of the command
                        docker run @common @args1
                        docker run @common @args2
                        
                        # Run 4 variants
                        for i in @(seq 4) {
                          docker run @common --index=$i
                        }
                        

                        Does that make sense as applied to your problem? You can also apply to env vars with “env”:

                        var env_args = %( PYTHONPATH=. FOO=bar )
                        env @env_args python -c 'print("hi")'
                        

                        There is no static checking, but Oil’s philosophy is to give errors earlier in general. It will be possible to write your own argv validators if you like.

                        A useful pattern will be to define “subsets” of Docker as “procs”. As in, you can just accept the flags you use and validate them:

                        proc mydocker {
                           argparse @ARGV %OPT {   # not implemented yet
                              -v --verbose "Verbose flag"
                           }
                           docker @myarray
                        }
                        mydocker --oops  # syntax error
                        

                        Thanks for the feedback, let me know if that makes sense, and if you have other questions. And feel free to follow the issue on Github!

                    2. 1

                      Golang’s exec module is much better for piping in the classic shell way. You might consider that.

                      1. 1

                        The best solution I’ve found so far is to call subprocess.run from a Python script and use string interpolation to pass arguments to the commands I want to run. But this is far from ideal.

                        I do a similar thing. What sort of improvements can you envision?

                      1. 18

                        I’m increasingly coming around to the conclusion that there’s no such thing as a functional programming language; only programs that are written in a more or less functional style. A language can offer features that encourage or discourage the writing of functional programs, and when people say “language X is a functional language” what they mean is that it encourages the writing of functional programs.

                        That said, any language with statements and no repl is difficult for me to describe as functional.

                        1. 7

                          I’ve joked before, but it’s somewhat true, that there’s really a hierarchy of “functional” which depends entirely on which other languages you sneer at for being insufficiently “functional” and which you sneer at for having uselessly gone too far in pursuit of purity.

                          Like, the base level is you join the Church, you renounce von Neumann and all his works. There’s a level where you sneer at any poor pleb whose language doesn’t guarantee tail-call optimization. There’s a level where you sneer at any poor pleb whose language is based on untyped lambda calculus. There’s a level where you sneer at any poor pleb whose language doesn’t support fully monoiconic dyads bound over the closed field of a homomorphic Bourbaki trinoid. And at each level you also sneer at the people “above” you for going overboard and forgetting that programming needs to be practical, too.

                          1. 4

                            This is how I teach paradigms at the start of my Rust course! Programming paradigms are about a mental model for computation (“do I think of this in terms of functions / objects / logical relations / blocks / etc.”), and language support for a paradigm is about how well you can translate that mental model into code in that language. People sometimes don’t like this definition because it’s subjective, but it avoids the problems with defining paradigms extensionally or intensionally.

                            If you define them extensionally, you’ll find that no one can agree on what the extent is. “Hey you left X language out of the functional list!” “Yes it isn’t functional” “But it is!” and so on.

                            If you define them intensionally, you’ll find that no one can agree on what features constitute the necessary and sufficient conditions for inclusion in the set. “Functional programming requires automatic currying!” “No it doesn’t, but it does require partial function application!” “You’re both wrong, but it does require a powerful type system!” “What does ‘powerful’ mean?” and so on.

                            So instead you say “well when I think of my program in terms of functions, I find that I can write the code that matches my mental model really easily in X language, so I say it’s functional for me!”

                            Honestly, part of why I like this is that I think it helps us get away from an endless and unsolvable definitional fight and into the more interesting questions of how features intersect to increase or decrease comfort with common mental models.

                            1. 2

                              Honestly, part of why I like this is that I think it helps us get away from an endless and unsolvable definitional fight and into the more interesting questions of how features intersect to increase or decrease comfort with common mental models.

                              I love how the other comments in this thread back this up. People are arguing over “no, a functional language must have X” / “no, it means Y” and they’re never going to agree with each other. Just acknowledge that fact and move on!

                            2. 3

                              This is absolutely true, IMO. And the same can be said with OOP.

                              A “functional language” would really be any language that strongly encourages a functional style or truly forbids an OOP style. I think Haskell and Elixir are pretty close to forbidding OOP programs.

                              Likewise, an OOP language is one that strongly encourages an object oriented style. JavaScript is OO because even functions are objects and you can add behaviors and properties to any object.

                              Etc, etc.

                              But I’m a bit confused by your comment about statements. Rust is pretty expression oriented. if is an expression, match is an expression, loops are expressions (a loop can even yield a value via break), all functions return the final expression in their body as the return value, etc.

                              1. 2

                                I think Haskell and Elixir are pretty close to forbidding OOP programs.

                                In what way do you see that? I guess it depends what you mean by “OOP” of course but Haskell has several powerful ways to do OOP-style message passing and encapsulation.

                                1. 1

                                  I’m sorry for the confusion. Everyone means something different when they say “OOP” (and “FP”). When I said OOP, I meant a style that revolves around “objects”. In my mind an object is something that has hidden, mutable state. A black box, if you will. You may call a method on an object (or send it a message), but you are not necessarily guaranteed the same response every time (think sending an HTTP request, an RNG; even a mutable Map/Dictionary is an object per my definition).

                                  I’ve never used Haskell for serious work, so I could’ve been totally off base there. And, actually, I guess Elixir does have message passing between processes. I was only thinking of the code inside a process… So, I’m probably wrong on both counts!

                                  1. 1

                                    Just as an example, here’s one way to do mutable student message passing style in Haskell:

                                    https://paste.sr.ht/~singpolyma/c618d894e7493d7197ef745035a8691d53e2a193

                                    (This is an example and a real system would have to handle a few cases this does not.) In this case it’s a monitor (threaded and threadsafe) and you can’t do inheritance (but you can do composition).

                                2. 1

                                  A “functional language” would really be any language that strongly encourages a functional style or truly forbids an OOP style.

                                  Gonna have to disagree here; whether something encourages object oriented programs or functional programs should be thought of as two orthogonal concerns that simply happen to be correlated in most widely-used languages. That said, “OOP” is such a poorly-defined term that I’m not even sure it’s worth spending any effort untangling this; IMO the term should be completely abandoned and more specific terms should be used in its place, like “message-passing”, “inheritance”, “encapsulation”, etc. (For instance, Elixir has great support for message passing and encapsulation, two cornerstones of what is often called “OOP”. No inheritance, but that’s great because inheritance is a mistake.)

                                  But I’m a bit confused by your comment about statements.

                                  I looked into it and … it IS confusing! Rust says that they “have statements” but what they really have is expressions that return Unit. Calling that a statement is pretty misleading IMO, because every other language I know that “has statements” means something completely different by it.

                                  1. 2

                                    That said, “OOP” is such a poorly-defined term that I’m not even sure it’s worth spending any effort untangling this; IMO the term should be completely abandoned and more specific terms should be used in its place, like “message-passing”, “inheritance”, “encapsulation”, etc. (For instance, Elixir has great support for message passing and encapsulation, two cornerstones of what is often called “OOP”. No inheritance, but that’s great because inheritance is a mistake.)

                                    Yeah, it’s definitely poorly defined. And my examples of Haskell and Elixir were actually bad examples. When I think of OOP, I’m thinking about black boxes of (potentially) mutable state and “message passing” (which may just be calling methods). You can’t, in theory, expect to get the same “response” if you send multiple messages to an object.

                                    As you said, Elixir is a good example of both FP and OOP. Kind of OOP in the large, FP in the small.

                                    Apologies for the confusion.

                                    I looked into it and … it IS confusing! Rust says that they “have statements” but what they really have is expressions that return Unit. Calling that a statement is pretty misleading IMO, because every other language I know that “has statements” means something completely different by it.

                                    Yeah, that’s strange that Rust docs would say they have statements… Maybe assignment is a statement? I don’t know. Most stuff in Rust is expressions, though- even the for loop that always returns Unit/void/whatever. It’s a neat language. Definitely not (pure)-function-oriented, IMO, but really fun, ergonomic, and safe for a systems language.

                                    I think the Rust docs also used to say that it’s not OOP. I think that’s wrong, too. It doesn’t have struct/class inheritance, but I think that you can get really far with an OOP style in Rust- struct fields are private by default; mutation is controlled via the borrow checker; traits are like type classes and if you write “object-safe” traits, you can pass trait objects around.

                                3. 3

                                  For the repl: Give https://github.com/google/evcxr/blob/master/evcxr_repl a chance. I just became aware of that project recently (the fact that they have a standalone repl) and have not yet tried to push it. It certainly appears less full-featured than the repls of dynamic languages or ghci, but it should be good enough to make an inferior lisp mode out of it.

                                  1. 11

                                    Thanks, but if the language doesn’t have a repl built in, it’s a good sign that its creators don’t value any of the same things I value in a language, so I don’t think rust would be a good fit for me.

                                    1. 3

                                      Never change, Technomancy ❤️

                                  2. 1

                                    I’m increasingly coming around to the conclusion there’s no such thing as a functional programming language

                                    The way I see it, a functional programming language is one that maintains referential transparency.

                                    Lambda calculus, Haskell, Elm, Purescript, Futhark, Agda, Coq, and Idris are some examples.

                                    Then, languages which don’t enforce referential transparency fall on the scale of “more” or “less” functional, based on how easy it is to write pure code, and how frequently it is done in practice.

                                  1. 1

                                    Nice! I notice that the “moving” noise disappears a bit when I move my eyes across the screen.

                                    1. 2

                                      We carefully monitor startup performance […] this test can never get any slower.

                                      Is it literally as simple as assert(lastTime >= thisTime)? If the time randomly varies a little, how do you avoid spurious test failures? If you add some fudge factor like assert(lastTime + extraTime >= thisTime), do you end up allowing many small regressions?

                                      1. 3

                                        It’s not that simple; some of the way it works is documented at https://chromium.googlesource.com/chromium/src/+/master/docs/speed/addressing_performance_regressions.md , but the data support for this is gated to Google employees.

                                        1. 2

                                          I’d guess that to avoid this they would run the benchmarks many times and define an acceptable average+stddev, or some other statistical measure.
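A minimal sketch of that idea (thresholds invented for illustration; real CI systems like Chromium’s use more elaborate statistics):

```python
import statistics

def regressed(history, new_time, num_stddevs=3.0):
    """Compare a new timing against historical samples: flag a regression
    only when it exceeds the historical mean by `num_stddevs` standard
    deviations, absorbing normal run-to-run noise."""
    mean = statistics.mean(history)
    stddev = statistics.stdev(history)
    return new_time > mean + num_stddevs * stddev
```

Whether many small regressions slip through depends on what `history` is: a sliding window of recent runs can ratchet slowly upward, while a pinned baseline cannot.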

                                          1. 1

                                            We have this fun problem that one of the speed tests is so tight that if the CI is completely overloaded it fails.

                                            But it’s rare enough that it’s a good sign without any real false positives.

                                            1. 1

                                              I mean, that’s what we do at work. We make sure every release is faster than the previous, with very few exceptions. Then again, the whole point of our team is performance…

                                            1. 1

                                              Until somebody invents a time machine it will never be a solved problem.

                                              1. 16

                                                The paper obviously isn’t using “solved” to mean “will predict correctly 100% of the time”. The paper points out that even given the constraint that we don’t have a time machine, there are things which could be done to improve accuracy, and lays out some possible new approaches to research.

                                                You presumably know this though, which leaves me wondering why you left the comment you did.

                                                1. 2

                                                  I guess people like being bitter for no real reason.

                                                  Paper looks interesting.

                                                  1. 1

                                                    How do you know they were being bitter?

                                                    And, if they were being bitter, how do you know they didn’t have a reason?

                                                    And, if they were being bitter for no reason, how do you know they liked it?

                                                    1. 0

                                                      You sound pretty bitter; do you like it?

                                                      1. 1

                                                        Thanks for checking in :) ‘Bitter’ definitely doesn’t fit where I was at when I wrote that comment, or how I’m feeling now. I’ve been feeling optimistic and curious while posting in this thread.

                                                        The questions I asked aren’t rhetorical; I’d actually like readers to try answering them.

                                              1. 5

                                                Wonderful article! It articulates and supports the concept extremely well.

                                                It’s also very relevant to the programming language I’m designing. It supports the full range of substructural types, with full type inference of required capabilities (using a type class / traits system). This means, for example, that code that does not move values on the stack(s) will not require the “Move” trait, and can thus be trivially compiled to use stack allocation.

                                                 And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell. All such capabilities/effects will be inferred and tracked by the type system.

                                                1. 1

                                                  And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell.

                                                  Are you referring to an API like this?

                                                  getLine : s -o (s, String)
                                                  putLine : String -> s -o s
                                                  
                                                  main : s -o (s, Int)
                                                  main = ...
                                                  
                                                  1. 1

                                                    It’s a (the first?) multi-stack concatenative language, so you don’t have to explicitly thread linear values through every function call. All functions are inherently linear functions from one state of the (implicit) multi-stack to the next state of the multi-stack.

                                                    The surface syntax will likely be simpler, but the fully explicit type for a “print line” term will be something like:

                                                    (forall s1, s2, w WriteStr . $: s1 * Str $stdout: s2 * w -> $: s1 * Str $stdout: s2 * w )
                                                    

                                                    Where the identifiers between $ and : are the stack identifiers (empty for the default stack), the variable w is qualified by the WriteStr trait, and the stack variables s1 and s2 can expand to any product type.

                                                    The stack variables make the function (multi-)stack polymorphic. If you’re interested in learning more about stack polymorphism, I highly recommend this blog post from Jon Purdy: http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html?m=1

                                                    In order to clone, drop, or move a value, it must possess the Clone, Drop, or Move trait, respectively. In this way, we can support the full range of substructural types, making the type system quite expressive! This makes it possible to express things like a file handle that must be opened before being written to and an open file handle that must be closed before it can be dropped.

                                                    1. 1

                                                      That’s a dense-looking type signature. Let me try to make sense of it:

                                                      So ‘print line’ transitions two stacks - the default stack and a stack called stdout. ‘print line’ expects a Str on top of the default stack, and a WriteStr dictionary on the stdout stack (which I assume contains functions that can actually make a system call to print to stdout). ‘print line’ does not change the default stack or the stdout stack.

                                                      Is that right?


                                                      Purity for applicative languages is something like equivalence between programs f <x> <x> and let y = <x> in f y y for all f and <x> (where <x> is some source code, like 1 + 2 or print("hello")).

                                                      What’s the equivalent in a concatenative, stack-based language? I’d guess an equivalence between programs f <x> <x> and f dup <x>. It doesn’t look like your ‘print line’ has this property. (I might be wrong about what to expect from a pure concatenative language, though.)

                                                      The reason I bring this up is because given what you’ve told me, I don’t think that this is true of your language yet:

                                                      And linear types will be used for I/O, so that all functions remain pure, but you don’t need the complexity of Monads like in Haskell.

                                                      1. 1

                                                        Is that right?

                                                        Essentially, yes. (There’s a minor detail that the WriteStr dictionary is not actually on the stack. Rather a type that implements the WriteStr trait is on the stack. In this case, it’s effectively the same thing, though.)

                                                        Purity for applicative languages is something like equivalence between programs f <x> <x> and let y = <x> in f y y for all f and <x> (where <x> is some source code, like 1 + 2 or print("hello")).

                                                        What’s the equivalent in a concatenative, stack-based language?

                                                        Let’s make the example concrete. In Haskell you would have

                                                        f (putStrLn "hello") (putStrLn "hello")
                                                        

                                                        is equivalent to

                                                        let y = putStrLn "hello" in f y y
                                                        

                                                        In this linear concatenative language, you might correspondingly have

                                                        ["hello" putStrLn] ["hello" putStrLn] f
                                                        

                                                        is equivalent to

                                                        ["hello" putStrLn] clone f
                                                        

                                                        Note that terms between brackets are “quoted”. That is, they are pushed onto the stack as an unevaluated function, which can be later evaluated using apply.

                                                        I believe this fulfills the notion of purity you’re describing, unless I’m misunderstanding something.

                                                        Edit: To be clear, cloning is only allowed here because both "hello" and putStrLn, and thus ["hello" putStrLn], implement the Clone trait. If you tried to clone a function that closed over a non-Clone type, then it would not type check.

                                                        Edit 2: And because it might not be clear where linearity comes in, the value on top of the $stdout stack that implements WriteStr is linear. So when you go to evaluate putStrLn, you must thread that value through the calls. Thanks to the multi-stack concatenative semantics, such value threading is completely implicit.

                                                        This has an advantage over Haskell’s monad system in that there’s no need for monad composition and all the associated complexity (which I have a lot of trouble wrapping my head around).

                                                        1. 2

                                                          I believe this fulfills the notion of purity you’re describing, unless I’m misunderstanding something.

                                                          That seems right.

                                                          Thanks for the chat! You’ve given me a lot to think about.

                                                  2. 1

                                                    Neat! Is there a website?

                                                    1. 1

                                                      No, not yet. It will probably be a bit before it’s ready for that, but I do hope to get there as quickly as possible.

                                                  1. 6

                                                    Maybe I woke up on the wrong side of the bed this morning, or maybe it’s because I haven’t had a coffee yet, but I don’t appreciate the indignant style of this article. Like, it’s possible to convey the same informational content without being so derisive.

                                                    That said, these seem like legitimate shortcomings and I’m glad that I’m now aware of them. Screen readers and other accessibility tooling are not normally on my mind, so I can imagine that I would’ve done the same thing as the Flutter devs here.

                                                    Regarding Flutter non-Web: I occasionally dabble with mobile programming, and I found the Flutter Android experience so much better than trying to build the equivalent functionality “the Android way”.

                                                    1. 1

                                                      I think it’s fine to call hot garbage what it is (if that’s the author’s opinion, for instance).

                                                      It’s important to remember that not everyone on this site comes from a US-style – “somebody could be offended”, “it’s not the opinion of my employer” etc. – cultural background.

                                                      1. 5

                                                        It’s important to remember that not everyone on this site comes from a US-style – “somebody could be offended”, “it’s not the opinion of my employer” etc. – cultural background.

                                                        I don’t know what this angle is, but it’s not where I’m coming from. It’s about how I prefer to receive criticism. If someone criticised my project in the way the author did Flutter, I’d be more likely to say “fuck you” and walk away. I would have to put in extra effort to extract the useful information and act on it. There are other more constructive ways to deliver the same information, where instead of feeling defensive, I’ll actually feel grateful that they told me.

                                                        I’m also wary of a pattern in tech circles where people use “shitting on tech” as a way to signal status, and this article gives me those kinds of vibes. It’s easy to talk shit about tech, but it’s not so easy to build tech that’s worth shit-talking.

                                                        1. 2

                                                          You are acting like this is some random hobbyist project by a lone developer. It’s not.

                                                          It’s a project done by Google and considering how Google’s monopoly position ensures that every decision they make will affect billions of people, they deserve every little bit of scrutiny.

                                                          Let’s not even act like “text selection is broken”, “the DOM is gone”, “you cannot inspect the page anymore” weren’t explicit decisions demanded and signed off at the appropriate management level at Google:

                                                          You are not that stupid. I’m not that stupid. The author of that blog post isn’t that stupid. Let’s stop humanizing tax-dodging, multi-billion dollar companies by pretending they have feelings, while at the same time absolving them of any moral or ethical responsibility.

                                                    1. 6

                                                      One of Go’s claims to fame and primary design goals is compilation speed.

                                                      Interesting that the author brings up Go the language but then doesn’t discuss the tooling that so many others have praised, even though it seems like it would be the kind of innovation that the author is looking for.

                                                      Erlang, with its advanced, concurrent, message-passing runtime, offers a better way to build high availability, highly reliable systems.

                                                      This point also interests me, because I’ve never thought about a programming language’s runtime as part of the environment, but more as part of the programming language itself. Except for C, it’s pretty rare to have a language divorced from its runtime, so should the runtime really be considered environment rather than just part of the language?

                                                      Does the compile-time / run-time distinction make sense to have? Can we give better control to developers about when their code is executed during the software development and deployment process?

                                                      In one word, yes. This is still one of the easiest ways to amortize the cost of some optimizations before giving it to the end user.

                                                      1. 2

                                                        Interesting that the author brings up Go the language but then doesn’t discuss the tooling that so many others have praised, even though it seems like it would be the kind of innovation that the author is looking for.

                                                        Taking a step in this direction: I think Go’s executable examples are really cool. As part of a function’s documentation, you can write a usage example and its expected output, and the example will be compiled and checked against the desired output as part of the package’s test suite.

                                                        1. 1

                                                          So I actually wasn’t aware of this paradigm before, though it does explain why it’s so much easier to find example uses of Go methods in the docs. I have to wonder, though: how does one then write examples for code with side effects? Otherwise, the example in boto for how to instantiate an EC2 server could get rather expensive.

                                                        2. 2

                                                          This point also interests me, because I’ve never thought about a programming languages run time as part of the environment, but more as part of the programming language itself.

                                                          I think it’s usually a mix of both; the runtime environment affects the language and vice versa. But they aren’t necessarily the same thing.

                                                          Take Kotlin, for example. Its design reflects the semantics of the JVM to some extent. But it’s no longer a JVM-only language; you can target the JVM, JavaScript, or compile to native code. So is the JVM runtime environment part of the Kotlin language when you can run Kotlin code on a Node runtime instead?

                                                          1. 3

                                                            So, in true internet fashion, I’m going to agree and disagree :)

                                                            You do bring up a good point about polyglot back ends for languages. The JVM and the official Java runtime are not part of the language for this exact reason. However, Kotlin does a significant amount of work to make it feel like we’re still in a Java-like environment, even when we’re in the Node VM or on a native machine. So while the Java runtime is not part of the language, there is “some” runtime imbued in the language itself.

                                                        1. 3

                                                          What’s the rationale for wanting HKTs in Rust? I see a lot of people wanting to drag Rust closer to Haskell, but I don’t understand why.

                                                          My personal opinion is that adding HKTs to Rust would further complicate Rust — which is already nearly busting its strangeness budget with the ownership system — and offer little practical value in return. The only place where we might want such flexibility is with arrays, and this is being addressed with const generics.

                                                          1. 4

                                                            I think HKTs are one of the things that shouldn’t be missing in modern languages – it’s one of the few features where I gladly pay the complexity cost (alongside turbo-charged if-expressions to get rid of separate pattern-matching constructs).

                                                            I don’t think you can blame HKTs for blowing up Rust’s complexity budget – Rust could easily have avoided earlier mistakes that pushed its complexity budget to the current point. (Hello ::<>!)

                                                            1. 4

                                                              Turbofish is just a small syntax-level quirk. I don’t think it’s even in the top 20 of complexity in Rust.

                                                              There are much worse things, like patterns trying to be dual of expressions, which made & mean either reference or dereference depending on context (and in function and closure args, that context is hard to notice). Or the very subtle meaning of implied 'static bounds. Especially when the compiler says something has to live for a 'static lifetime, which isn’t the same thing as lifetime of a &'static reference.

                                                              1. 0

                                                                Turbofish is just a small syntax-level quirk.

                                                                I like the ::<> example because it fits the “easily avoided” descriptor best.

                                                                (Rust got it right in the beginning, then changed it to current worse “design”.)

                                                                I don’t think it’s even in the top 20 of complexity in Rust.

                                                                Yep, but if “getting these trivial-to-get-right things wrong” gets a pass, the floodgates are pretty much open.

                                                                A language community either decides “yes, we have a quality standard, and we enforce it” or it doesn’t; there is little in between.

                                                                There are much worse things, like patterns trying to be dual of expressions, which made & mean either reference or dereference depending on context (and in function and closure args, that context is hard to notice). Or the very subtle meaning of implied ’static bounds. Especially when the compiler says something has to live for a ’static lifetime, which isn’t the same thing as lifetime of a &’static reference.

                                                                Are there any write-ups that document “and this is how we should have done it properly, if we were allowed to”? Otherwise the mistake will be repeated over and over.

                                                                1. 1

                                                                  You’ve got Turbofish entirely backwards. It is an intentional design that fixed a parsing ambiguity. C++ doesn’t have such a disambiguator, and has well-documented problems with parsing < in templates vs comparisons & shifts. Rust avoided that mistake, and it was a conscious design decision. It fixed a fundamental problem in the grammar at the cost of a quirk that is just an aesthetic thing.

                                                                  Putting “design” in scare quotes makes you sound like a troll.

                                                                  1. 3

                                                                    And the turbo-fish operator does not significantly affect how people interact with the language — HKTs would.

                                                                    I used to be into Haskell and I always wanted to abstract my code more and more (for what reason, I do not know). As I’ve gotten older, I’ve gotten grumpier and more conservative in what I want out of a programming language: I now put more value on simplicity and on solutions that solve an actual problem rather than a generalization of a problem.

                                                                    My worry is that Rust will become too complex if we say yes to every proposition for a new way to create abstractions. My 27 year-old self would probably be ecstatic, but my 37 year-old self is dubious that this is desirable.

                                                                    Rust is already taking quite a gamble that programmers will agree to change the way they work with memory in exchange for safety; I would hate for that gamble to become riskier by introducing every GHC extension into Rust.

                                                                    1. 1

                                                                      I’m not sure what you are trying to argue against.

                                                                      Picking a design that does not require working around parsing ambiguities, as Rust did in the beginning, is vastly superior to having 4 different syntax variations for generics and pretending that’s a good thing.

                                                              2. 1

                                                                I like that ‘strangeness budget’ idea. I think we agree on why it would be dubious to add higher-kinded types to Rust.

                                                                I see a lot of people wanting to drag Rust closer to Haskell, but I don’t understand why.

                                                                I’m not advocating to make Rust more like Haskell, but the reason I want a language “like Rust, with higher-kinded types” is that I think such a language is strictly better for building robust, maintainable software. To me, Rust is strictly better than C in the same way. I find it hard to believe that we have this all figured out after only ~70 years of innovation.

                                                              1. 1

                                                                I haven’t finished the article yet, and I suspect it may end up going over my head, but I appreciate the clear explanation of the terminology in the first half of the post. I really feel like I’ve learned something. Thanks!

                                                                1. 2

                                                                  You’re welcome, I’m glad you feel that way :)

                                                                1. 6

                                                                  My positive takeaways:

                                                                  • Be careful about making people feel bad for being different. There’s no need to tease people about their hardware, OS, or editor.
                                                                  • Be careful about making people feel bad for lacking knowledge. https://xkcd.com/1053/ comes to mind: everyone needs to learn a thing for the first time, so we should make that process enjoyable.

                                                                  My negative takeaway: I see the author describing all the things they don’t know in a ‘joking’ or ‘flippant’ tone, and end up telling myself the story that they might be “wearing their ignorance of the topics as a badge”. The idea of joking about “things one doesn’t know” makes me pretty uncomfortable.

                                                                  1. 7

                                                                    I basically want a DVCS that doesn’t operate on text files, but on a proper data model like relational algebra or algebraic datatypes.

                                                                    Same! And <programming language> syntax trees are another such data model.

                                                                    1. 1

                                                                      I do wish we had this built into text editors! I picture https://github.com/lambdaisland/deep-diff, but for every AST. I think it would help catch a whole class of bugs, too!

                                                                    1. 3

                                                                      I solved this problem a while ago and wrote some notes on it.

                                                                      “Equality & Identity”: Overview, Problems, Solution, Fixing Haskell, Implementation in Dora


                                                                      One mistake in the article though:

                                                                      You want the developer to see === and think “reference equality,” not “more equals signs is better.”

                                                                      No, you don’t want this. You want === to stand for identity, such that it can have a definition that works consistently across all types, reference and value. Both sane handling of floating-point values and the ability to freely change types from reference types to value types (and back) depend on it.

                                                                      Some minor nitpicks:

                                                                      programming languages should make it simple to create types where equality comparison is disabled …

                                                                      It’s probably easier to pick sane defaults and stop the runtime from implementing random methods “for convenience”.

                                                                      … they should use this feature in their standard library where needed, such as on floating point types

                                                                      Not really seeing the reason for this.


                                                                      Overall a nice article with an entertaining description of an interesting problem!

                                                                      1. 2

                                                                        Your “fixing Haskell” article does not fix anything. The actual fix is to make Float and Double not implement the Eq type class, period.

                                                                        One thing Standard ML does right is using different functions for comparing eqtypes (e.g. int, bool, tuples of eqtypes, etc.) and floating points (i.e. real). When you use the former, you can expect all the usual mathematical properties of equality to hold. When you use the latter, you know what you are getting into.

                                                                        1. 1

                                                                          Your “fixing Haskell” article does not fix anything.

                                                                          I think the article does a good job to explain why you’re wrong.

                                                                          The actual fix is to make Float and Double not implement the Eq type class, period.

                                                                          Why not just drop floating point values altogether, “language design is hard, let’s go shopping”-style?

                                                                          When you use the former, you can expect all the usual mathematical properties of equality to hold. When you use the latter, you know what you are getting into.

                                                                          That’s exactly what happens – you specify the properties you need:

                                                                          If you require identity and the guarantees it provides, e. g. for keys of maps, simply demand that the type implements it.

                                                                          class HashMap[K: Identity + Hash, V] { ...}
                                                                          // Double implements Identity
                                                                          let map = HashMap[Double, String]()
                                                                          map.put(Double.NaN, "found me")
                                                                          // Lookup works, even for NaN:
                                                                          assert(map.get(Double.NaN) == Some("found me"))
                                                                          
                                                                          1. 1

                                                                            I think the article does a good job to explain why you’re wrong.

                                                                            Your article literally asks for the Eq class to expose two distinct equality testing functions. What can be more broken than that? I am having a lot of trouble imagining it.

                                                                            If you require identity and the guarantees it provides, e. g. for keys of maps, simply demand that the type implements it.

                                                                            Using a type class to constrain the key type of a map is in itself broken. Suppose for the sake of the argument that you have a type that can be totally ordered in two different ways, both useful. (I could have also said “hashed in two different ways, both useful”, but that is less realistic.) It is perfectly reasonable to want to construct two distinct ordered map types, one for each total order. With a type class constraint on the map’s key type, you cannot do this, because type classes must have canonical instances. (Or you could destroy canonicity at your own peril, as Scala does.)

                                                                            The correct solution is to do exactly what ML does:

                                                                            signature ORDERED =
                                                                            sig
                                                                                type key
                                                                                val compare : key * key -> order
                                                                            end
                                                                            
                                                                            functor TreeMap (Key : ORDERED) :> MAP
                                                                                where type key = Key.key =
                                                                            struct
                                                                                (* ... *)
                                                                            end
                                                                            

                                                                            The map abstraction is not parameterized just by the key type but rather by the whole ordered type structure, of which the naked key type is but one component. This is exactly how things should be.

                                                                            Applying this lesson to your hash map example, we have

                                                                            signature HASHABLE =
                                                                            sig
                                                                                type key
                                                                                val equal : key * key -> bool           (* not necessarily ML's built-in equality! *)
                                                                                val hash : key -> int
                                                                            end
                                                                            
                                                                            functor HashMap (Key : HASHABLE) :> MAP
                                                                                where type key = Key.key =
                                                                            struct
                                                                                (* ... *)
                                                                            end
                                                                            

                                                                            EDIT: Fixed formatting.

                                                                            1. 1

Your article literally asks for the Eq class to expose two distinct equality-testing functions. What could be more broken than that? I am having a lot of trouble imagining it.

                                                                              As mentioned in the article, you could also introduce a new trait for it.

                                                                              I think the SML approach is well-known to those who thought about non-canonical instances in Haskell and Scala. (Rust seems to enforce uniqueness of instances, so that’s another interesting data point.)

The core question is whether this feature is worth the price one would pay for it – I wouldn’t say so; that’s why I’m approaching this by splitting things into better-defined typeclasses, to relieve the pressure toward needing multiple instances of the same typeclass even for trivial things.

For instance, for numbers, you no longer need to engage in crazy contortions (like in Haskell) to swap behavior between floats that compare correctly in the domain (NaN != NaN) and floats that can be put into and retrieved from data structures.
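Rust’s standard library makes this split concrete: `f64` gets only `PartialEq`/`PartialOrd` (so `NaN != NaN` holds in the domain), while `f64::total_cmp` supplies the IEEE 754 total order for use in data structures. A small sketch:

```rust
fn main() {
    let nan = f64::NAN;
    // Domain comparison: NaN is not equal to itself, per IEEE 754.
    assert!(nan != nan);

    // Total order for data structures: f64::total_cmp places NaN after all
    // other positive values, so sorting a Vec containing NaN is well-defined.
    let mut xs = vec![2.0_f64, f64::NAN, 1.0];
    xs.sort_by(|a, b| a.total_cmp(b));
    assert_eq!(xs[0], 1.0);
    assert_eq!(xs[1], 2.0);
    assert!(xs[2].is_nan());
    println!("ok");
}
```

No contortions are required to switch between the two: the domain comparison and the total order live side by side under different names.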

                                                                              1. 2

                                                                                I think the SML approach is well-known to those who thought about non-canonical instances in Haskell and Scala.

                                                                                Haskell’s designers know very well the importance of having canonical instances for type classes to work correctly, as well as the inherent conflict between canonicity of instances and modularity.

                                                                                And yet Haskell has the -XIncoherentInstances extension.

The core question is whether this feature is worth the price one would pay for it

                                                                                The payoff is huge. The type checker will stop you immediately if you conflate two structures constructed from different instances on the same type. For example, suppose you have two ordered sets and want to merge them. The obvious O(n) algorithm only works correctly if both ordered sets are ordered using the same total order on the element type. But Scala will silently allow you to use the wrong order on one of the sets, due to the way implicits work when used as type class instances.
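The hazard is easy to reproduce in any language by passing orderings around explicitly (a stand-in for Scala’s implicits). In this Rust sketch, `merge` is a hypothetical O(n) merge whose unchecked precondition is that both inputs were sorted by the same order:

```rust
// The classic O(n) merge of two sorted sequences. It is only correct if both
// inputs were sorted by the *same* order -- a precondition nothing checks.
fn merge(a: &[i32], b: &[i32]) -> Vec<i32> {
    let (mut i, mut j, mut out) = (0, 0, Vec::new());
    while i < a.len() && j < b.len() {
        if a[i] <= b[j] { out.push(a[i]); i += 1; } else { out.push(b[j]); j += 1; }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out
}

fn main() {
    let mut xs = vec![3, 1, 2];
    let mut ys = vec![6, 4, 5];
    xs.sort();                    // ascending
    ys.sort_by(|a, b| b.cmp(a)); // descending: a different "instance" of ordering
    let merged = merge(&xs, &ys);
    // The merge silently produces a non-sorted sequence instead of failing.
    assert!(!merged.windows(2).all(|w| w[0] <= w[1]));
    println!("{:?}", merged); // prints [1, 2, 3, 6, 5, 4]
}
```

With canonical instances the two inputs could not have been sorted two different ways in the first place; here the mismatch only surfaces as wrong output at runtime.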

                                                                                that’s why I’m approaching this by splitting things into better-defined typeclasses

The problem with your “better-defined” type classes is that many of them will support similar operations. In fact, similar enough that you will want to define the same algorithms on them. But, since your “better-defined” type classes are not literally the same as each other, as far as the type checker cares, you still need to reimplement the same algorithms over and over, which defeats the point of using type classes in the first place.

                                                                                Another way to work around this issue is to use newtype wrappers to mediate the conversion between instances. But that is disgusting in its own way. For example, before Applicative became a superclass of Monad, Haskell used to have a WrappedMonad newtype as a contrived way to get an Applicative instance from a Monad one.
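Rust ships exactly this workaround as `std::cmp::Reverse`: a newtype whose only job is to carry the flipped `Ord` instance, e.g. to turn the max-heap `BinaryHeap` into a min-heap. A sketch of the wrapping/unwrapping noise involved:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // BinaryHeap is a max-heap tied to the single canonical Ord instance.
    // To get a min-heap you wrap every element in the Reverse newtype, which
    // carries the flipped Ord instance -- the same trick as Haskell's
    // WrappedMonad, with the same wrapping/unwrapping overhead at every use.
    let mut min_heap = BinaryHeap::new();
    for x in [3, 1, 2] {
        min_heap.push(Reverse(x));
    }
    let Reverse(smallest) = min_heap.pop().unwrap();
    assert_eq!(smallest, 1); // the minimum comes out first
    println!("{}", smallest);
}
```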

                                                                                1. 1

                                                                                  This comment was pitched at exactly my level of type knowledge - thanks!

                                                                                  1. 1

                                                                                    And yet Haskell has the -XIncoherentInstances extension.

                                                                                    … which you don’t even need. GHC allows defining incoherent instances without it.

                                                                                    For example, suppose you have two ordered sets and want to merge them. The obvious O(n) algorithm only works correctly if both ordered sets are ordered using the same total order on the element type.

                                                                                    Scala supports this with path-dependent types, but the syntactical and mental overhead never felt worth it.

                                                                                    Anyway, I wasn’t aware that SML supported applicative functors (which I think are a requirement to make this work) – when did this change?

                                                                                    A conservative approximation could also be done at the value-level – if typeclass instances were first-class values (but I don’t think that’s worth it).

                                                                                    But, since your “better-defined” type classes are not literally the same as each other, as far as the type checker cares, you still need to reimplement the same algorithms over and over, which defeats the point to using type classes in the first place.

Agreed. I just think that’s vastly preferable to people using one (or a few) typeclasses as random dumping grounds until the typeclass has no laws left (see Eq and Ord in Haskell).

                                                                                    Another way to work around this issue is to use newtype wrappers to mediate the conversion between instances. But that is disgusting in its own way.

Ick, that’s far uglier than the “recommended” approach in Haskell, which is wrapping the data type and reimplementing the typeclass instances (e.g. a TotalDouble wrapping Double to implement Eq and Ord differently from plain Double).

                                                                                    1. 1

                                                                                      Scala supports this with path-dependent types, but the syntactical and mental overhead never felt worth it.

                                                                                      Scala’s troubles arise from the desire to make everything first-class. “Objects with type members are basically first class modules, right?” It sounds good until you realize it does not interact well with type inference. Knowing what to make second-class is an art.

                                                                                      Anyway, I wasn’t aware that SML supported applicative functors (which I think are a requirement to make this work) – when did this change?

                                                                                      Standard ML does not have applicative functors. Making applicative functors work correctly requires a notion of “pure functor body”, which no ML dialect has as far as I can tell. (OCaml’s applicative functors create broken abstract data types when their bodies are impure.)

                                                                                      In any case, using applicative functors is not the only way to make this work. Another way is to arrange your code so that you do not need to apply the same functor to the same module twice. Unfortunately, this requires a way of thinking that is not very compatible with how either object-oriented or functional programmers design their libraries.

                                                                                      I have two cardinal rules for library design.

                                                                                      Do not expose higher-order interfaces when first-order ones will do.

                                                                                      As a counterexample, consider the Haskell API for maps:

                                                                                      empty :: Map k a
                                                                                      lookup :: Ord k => k -> Map k a -> Maybe a
                                                                                      insert :: Ord k => k -> a -> Map k a -> Map k a
                                                                                      delete :: Ord k => k -> Map k a -> Map k a
                                                                                      {- more functions -}
                                                                                      

                                                                                      This is higher-order than necessary. Essentially, lookup, insert and delete are functors that are applied to an Ord structure every time they are called. Only from the canonicity of Ord instances can you argue that the effect of these multiple applications is equivalent to a single application of a functor that produces a map type whose key type is monomorphic.

                                                                                      Have you gained anything? Strictly speaking, yes. You have gained the ability to tell your users that the implementation code for inserting into a Map Int a and into a Map String a is really the same.

                                                                                      Is this something the user cares about? IMO, no. If I am working with maps with 64-bit integer keys, I do not care whether the underlying implementation is a generic red-black tree (which would work just as well with any other key type) or a radix tree (which requires lexicographically ordered bit vectors as keys).

                                                                                      The right interface, as previously stated, has an abstract but monomorphic key type:

                                                                                      type Key
                                                                                      type Map a
                                                                                      
                                                                                      empty :: Map a
                                                                                      lookup :: Key -> Map a -> Maybe a
                                                                                      insert :: Key -> a -> Map a -> Map a
                                                                                      delete :: Key -> Map a -> Map a
                                                                                      

                                                                                      Do not enforce two unrelated sets of invariants in the same module.

                                                                                      According to most standard libraries, the obvious way to implement ordered maps is something like:

                                                                                      functor TreeMap (Key : ORDERED) :> MAP
                                                                                          where type key = Key.key =
                                                                                      struct
                                                                                          (* hard-coded implementation of red-black trees or what have you *)
                                                                                      end
                                                                                      

                                                                                      Never mind that you have to reimplement red-black trees from scratch if you want to implement sets as well. This design is bad for a more fundamental reason: you cannot debug the red-black tree implementation without tearing down the map abstraction!

                                                                                      The problem here is that TreeMap enforces two unrelated sets of invariants:

                                                                                      1. Trees are correctly balanced.
                                                                                      2. The in-order traversal of a tree produces a strictly ascending sequence of keys.

                                                                                      The first invariant actually does not depend on the key type, so it can be factored out into a separate module:

                                                                                      signature SEARCH_TREE =
                                                                                      sig
                                                                                          type 'a tree
                                                                                          (* Manipulate self-balancing search trees using zippers.
                                                                                           * The user must know how to look for a given element. *)
                                                                                      end
                                                                                      
                                                                                      structure RedBlackTree :> SEARCH_TREE = (* ... *)
                                                                                      

                                                                                      Once we have a working search tree implementation, we can finally implement maps:

                                                                                      functor TreeMap
                                                                                          ( structure Key : ORDERED
                                                                                            structure Tree : SEARCH_TREE ) :> MAP where type key = Key.key =
                                                                                      struct
                                                                                          type key = Key.key
                                                                                          type 'a map = (key * 'a) Tree.tree
                                                                                          (* ... *)
                                                                                      end
                                                                                      

                                                                                      The same lesson can be applied to hash maps:

                                                                                      signature TABLE =
                                                                                      sig
                                                                                          type 'a table
                                                                                          (* Operations on a fixed size table, including collision resolution.
                                                                                           * The user must know where to start looking for a given element. *)
                                                                                      end
                                                                                      
                                                                                      structure LinearProbing :> TABLE = (* ... *)
                                                                                      structure QuadraticProbing :> TABLE = (* ... *)
                                                                                      
                                                                                      functor HashMap
                                                                                          ( structure Key : HASHABLE
                                                                                            structure Table : TABLE ) :> MUTABLE_MAP where type key = Key.key =
                                                                                      struct
                                                                                          type key = Key.key
                                                                                          type 'a map = (key * 'a ref) Table.table ref
                                                                                          (* ... *)
                                                                                      end
                                                                                      
                                                                          2. 1

I’m getting a bit confused by the way you contrast ‘equality’ and ‘identity’. To me they mean the same thing, and your article is suggesting === for equality and == for equivalence. This feels in line with your point, because there are many different ways to define equivalence for any particular type.

                                                                            1. 2

Yes – the naming is largely driven by the desire not to put two things whose differences matter behind two names that look quite similar.

                                                                            2. 1

                                                                              I really like your series; I’ll update my post to have a reference to it.

It’s not clear to me that having === mean two different things for value types and reference types is an improvement (it’s also not clear to me that it isn’t an improvement; I’m just not sure). It would be interesting to study this in a human-factors evaluation of a programming language (like the Quorum language) to see what makes the most intuitive sense to developers. Certainly what you propose is at least lawful, which is an improvement over the current state of things!

                                                                              My reasoning for not wanting equality on floats is the same as SML’s for not having it, which I cite in the article. A reasonable comparison on a float requires an epsilon value, which doesn’t fit into the == operator usage.

                                                                              FYI, the Implementation in Dora post seems to be empty.

                                                                              Thanks!

                                                                              1. 2

                                                                                I really like your series

                                                                                Thanks, I appreciate it! :-)

                                                                                It’s not clear to me that having === mean two different things for value type and reference types is an improvement

My argument is that it stands for the exact same thing for both reference and value types: “is this thing here identical to that thing over there?” Sure, we know this as “reference equality” on reference types, but it’s just as crucial on value types. Some may call it “ability to find your floats in your collections again” equality.

                                                                                Here is another article on how == vs. === relates to the various orderings the IEEE754 spec defines: Comparing & Sorting

                                                                                A reasonable comparison on a float requires an epsilon value, which doesn’t fit into the == operator usage.

I disagree on that. 80% of the comparisons on floating-point values are against “magic values” like 0.0, 1.0, NaN, etc. that were not exposed to any kind of arithmetic computation beforehand, and therefore cannot foil the comparison.
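For what it’s worth, both behaviors are observable on today’s floats. A Rust sketch, using a comparison of bit patterns as one possible reading of the proposed `===` (an assumption on my part, not the article’s definition):

```rust
fn main() {
    let nan = f64::NAN;
    // IEEE 754 ==: NaN compares unequal to itself ...
    assert!(nan != nan);
    // ... but an identity-style comparison on the bit representation
    // (one way to realize a === operator) does find it again.
    assert!(nan.to_bits() == nan.to_bits());

    // Comparisons against "magic values" that never went through arithmetic
    // are exact, so == is unproblematic for them.
    let x = 0.0_f64;
    assert!(x == 0.0);
    println!("ok");
}
```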

                                                                                FYI, the Implementation in Dora post seems to be empty.

                                                                                Yes, I left out the link, because I haven’t written it yet. :-)

Here is a unit test that might demonstrate the semantics.

                                                                                1. 2

I would argue that if you have a comparison which is only good for “magic values”, then what you really have is a bunch of useful functions: isNaN, isZero, etc. I will note that == NaN is going to fail anyway!

                                                                                  1. 1

                                                                                    I will note that == NaN is going to fail anyway!

                                                                                    Which is exactly where === comes in. :-)

                                                                            1. 9

This is just strange. Why do they fail to mention the most damning flaw of C++: unbridled, unmanageable complexity? Why don’t they mention the churn the language is undergoing right now, with major new language features in every new language standard that drastically change how idiomatic C++ is written?

Why is it claimed that you can write ‘straight-line concurrent code’ in Rust but not C? You can write a fully functional fibre system for C that is essentially transparent to code using it (it doesn’t need to know it is running in a fibre, except that you can’t block) in less than a thousand lines of code. You don’t need async/await nonsense plastered over your code, or to duplicate every bit of code into async and normal variants. That seems just as “straight-line” as Rust to me.

                                                                              1. 9

This is just strange. Why do they fail to mention the most damning flaw of C++: unbridled, unmanageable complexity? Why don’t they mention the churn the language is undergoing right now, with major new language features in every new language standard that drastically change how idiomatic C++ is written?

                                                                                “C++ at Google” is generally considered different from “C++ in the wild”. They have a rather strict coding standard, with tooling to enforce it, plus a huge toolstack and practice built around it.

You don’t need async/await nonsense plastered over your code, or to duplicate every bit of code into async and normal variants. That seems just as “straight-line” as Rust to me.

You do know that Fuchsia engineers were a huge driver behind the actual async/.await implementation? They have a need for it.

                                                                                1. 1

That’s something I didn’t consider, actually. Google C++ is a pretty small subset of C++. That also explains why they ignored C++’s co_await stuff but mentioned Rust’s: they don’t use new C++ features, and they helped drive the implementation of Rust’s.

                                                                                  1. 1
                                                                                    1. 1

                                                                                      C++: “Con: Support for asynchronous programming is weak.”

                                                                                      Rust: “Pro: Asynchronous programs can be written using straight-line code.”

                                                                                      Even though C++ obviously has much more mature support for a hugely wide variety of asynchronous programming models, as well as supporting co_await, while Rust only supports the equivalent of co_await (and not very well yet, as it was basically just introduced recently, though that should improve over time).

                                                                                      That looks like ignoring it to me.

                                                                                2. 7

                                                                                  This is just strange. Why do they fail to mention the most damning flaw of C++: unbridled, unmanageable complexity? Why don’t they mention the churn the languages is undergoing now with major new language features in every new language standard that drastically change how idiomatic c++ is written?

Do they just understand and accept this complexity? Large C++ projects are not new to Google; just look at Chrome. From their perspective it’s more likely to be considered the norm and necessary complexity.

                                                                                  It seems strange to outsiders, but culturally it is probably the most palatable option with known results. Choosing the known thing is probably much safer. I’m sure lots of the developers have probably had the same thoughts as you, but they feel that this approach has the impact they desire.

                                                                                  1. 4

                                                                                    You can write a fully functional fibre system for C that is essentially transparent for code using it (doesn’t need to know it is running in a fibre except that you can’t block) in less than a thousand lines of code.

                                                                                    I’m curious what that would look like. Do you have any resources on hand?

                                                                                  1. 2

I won’t argue that booleans are better than enums. However, the reason we keep gravitating toward booleans is that they are lightweight, even though they end up costing more later. Btw: that cost isn’t just to the author; it’s to everyone else who has to remember which enum we’re using, where it’s defined, etc. If it’s a sufficiently pervasive enum, that’s OK, but there’s a gap where the enum doesn’t represent a concept with a clearly defined home.

                                                                                    In some cases, you almost want to define the enum inline. Here’s a hypothetical syntax. It’s probably not actually good, but I want that level of simplicity.

                                                                                    function fetch(int accountId, ::IncludeDisabled, ::History, {::Shallow, ::Full, ::IncludeRelations}) {
                                                                                    }
                                                                                    
                                                                                    // includes details, doesn't include history or disabled accounts
                                                                                    fetch(0)
                                                                                    
                                                                                    // includes historical records, excludes disabled accounts, only ids/links
                                                                                    fetch(0, ::History, ::Shallow)
                                                                                    // ditto, but includes normal data
                                                                                    fetch(0, ::Full)
                                                                                    // ditto, but includes some kind of related data
                                                                                    fetch(0, ::IncludeRelations)
                                                                                    
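For comparison, here is roughly the same call rendered with ordinary Rust enums; the names (`fetch`, `History`, `Depth`) are invented for illustration, and note that real enums lose the optional-with-defaults flavor of the hypothetical syntax:

```rust
// A sketch of the same idea with ordinary Rust enums. All names here are
// made up for illustration; they are not from the original example.
#[derive(Debug, PartialEq)]
enum History { Include, Exclude }

#[derive(Debug, PartialEq)]
enum Depth { Shallow, Full, IncludeRelations }

fn fetch(account_id: u32, history: History, depth: Depth) -> String {
    format!("account {} ({:?}, {:?})", account_id, history, depth)
}

fn main() {
    // Each call site documents itself, unlike fetch(0, true, false).
    let s = fetch(0, History::Include, Depth::Shallow);
    assert_eq!(s, "account 0 (Include, Shallow)");
    println!("{}", s);
}
```

The cost is exactly the one described above: every enum needs a definition and a home, which is what makes booleans feel so much more lightweight at the call site.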
                                                                                    1. 2

                                                                                      ‘Inline enums’ are a great idea :) Their generalisation - anonymous sum types - is often called a ‘variant’ if you’re interested in reading up about it.

                                                                                    1. 1

After some discussion on IRC, I think I should add that this isn’t about the string. (Various “join string with some delimiter” functions exist.) It’s more about calling FunctionA an even number of times and FunctionB an odd number of times… So maybe I build my string by calling AddNextLetter() and AddDelimiter(). Whatever… :)

                                                                                      1. 5

                                                                                        Here’s my program:

                                                                                        one();
                                                                                        two();
                                                                                        one();
                                                                                        two();
                                                                                        one();
                                                                                        two();
                                                                                        one();
                                                                                        

                                                                                        To me there are three ‘natural’ ways to factor out the repetition.

                                                                                        ‘It is a call to one() followed by three calls to two(); one();’:

                                                                                        one();
                                                                                        thrice {
                                                                                          two();
                                                                                          one();
                                                                                        }
                                                                                        

                                                                                        ‘It is three calls to one(); two() followed by a call to one()’:

                                                                                        thrice {
                                                                                          one();
                                                                                          two();
                                                                                        }
                                                                                        one();
                                                                                        

                                                                                        ‘It is seven function calls, and the exact function depends on where we are in the sequence’:

                                                                                          for i in 0..7 {
                                                                                            match i % 2 == 0 {
                                                                                              true => one(),
                                                                                              false => two(),
                                                                                            }
                                                                                          }
                                                                                        
                                                                                        1. 1

                                                                                          Ah, this is conceptually the same as @alva’s answer. And yours works in Java, too.

                                                                                          So, for either of these, I need to arrive at “7” programmatically. In my case, here is the expression that will achieve that: thing.length * 2 - 1. :)
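
                                                                                            A minimal Rust sketch of that third factoring, assuming the AddNextLetter()/AddDelimiter() framing from above: the loop bound comes from the input length (the `thing.length * 2 - 1` expression), even steps take the next letter, odd steps the delimiter. `join_alternating` is a made-up name:

                                                                                            ```rust
                                                                                            // Made-up name; the even/odd split plays the role of one()/two().
                                                                                            fn join_alternating(thing: &[&str], delim: &str) -> String {
                                                                                                let mut out = String::new();
                                                                                                let mut letters = thing.iter();
                                                                                                // saturating_sub avoids underflow when `thing` is empty
                                                                                                for i in 0..(thing.len() * 2).saturating_sub(1) {
                                                                                                    if i % 2 == 0 {
                                                                                                        out.push_str(letters.next().unwrap()); // "one()": next letter
                                                                                                    } else {
                                                                                                        out.push_str(delim); // "two()": delimiter
                                                                                                    }
                                                                                                }
                                                                                                out
                                                                                            }
                                                                                            ```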