1. 63
    1. 45

      Here’s my take on it:

      test "example" {
      fn foo() i32 {
          return 1234;
      ./test.zig:2:8: error: expression value is ignored
      1. 5

        Is this what she meant? I assumed they weren’t validating the output of the function before doing something with it. Not that they weren’t using it.

        Specifically, loading this new config put a NULL/nullptr/nil/None type thing into the value, and then something else tried to use it. When that happened, the program said “whoops, can’t do that”, and died.

        They obviously are using a dynamically typed language or Java (as much as Rachel belabors the point that she’s not blaming the language) that allowed the return value to have a null field when it really shouldn’t have.

        1. 3

          They obviously are using a dynamically typed language or Java

          Foo * f = g();
          f->bar(); /* no null check */

          happens in plenty of statically typed languages that aren’t Java and could be considered as “not validating the return value” in this sense.

          Even in say Haskell you can do

            f (fromJust (g h))

          if you choose.

          1. 1

            You are, of course, technically correct. But my statement was context-driven by the assumption that this is backend web code. That’s most likely not C, C++, or Haskell. It’s most likely PHP, JavaScript, Java, Python, or Go.

            I guess this could happen in Go, though. So I’ll change my statement to “They’re almost definitely using a dynamically typed language or Java or Go.” ;)

            And I struggle to believe that this would happen with a language like Haskell anyway. I know it can, but I’d put money down that they weren’t using Haskell or Rust or Ocaml or Kotlin or Swift.

            1. 7

              You’re missing the whole point. Language is not the problem.

              The problem is the culture.

              Their culture was “somebody else gave me incorrect input so it’s their fault if the business went down”.

              Which is complete nonsense. And you’re bikeshedding on the language when it was explicitly said to ignore the language.

              The bigger discussion, possibly even a philosophical discussion, is what do we do when we get incorrect input? Do we refuse to even start? Can we reject such input? Shall we ignore the input? And if we ignore it, will the software behave correctly anyway for the rest of the correct input (configuration or whatever)?

              1. 1

                Yeah, my comment about language choice was somewhat tangential. Originally, I wasn’t even sure if we were all reading the same thing from the article. The user I replied to made a point about not using a return result. My reading of the article led me to believe that they just chose not to check the returned value. Based on her description of the issue being around null, I then made an off-the-cuff remark that I believe the only way that’s likely is because they are using a language that makes that kind of null-check “noisy”.

                Not at all disagreeing with the bigger picture. Yes, it’s a culture issue. Yes, we need to discuss validating input and whether input validation is sufficient.

                Language is a very small point, but I’m not sure it’s totally irrelevant. It’s a small problem that such checks are noisy enough that it makes developers not want to do it, no? I wish that she went into a little more detail about exactly what the devs aren’t checking.

        2. 3

          Don’t validate, parse. Validation errors are parsing errors and can be encoded as an Err result (in languages like Rust)
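
          A minimal sketch of that idea in Rust (the `Config` type and the toy key=value format here are invented for illustration): parsing returns a `Result`, so a `Config` value can only exist if every field was present and valid, and downstream code never re-checks.

          ```rust
          // Parse straight into a typed struct; validation errors are
          // just parsing errors, encoded as the Err variant.
          #[derive(Debug, PartialEq)]
          struct Config {
              num_threads: usize,
          }

          #[derive(Debug, PartialEq)]
          enum ConfigError {
              Missing(&'static str),
              Invalid(&'static str),
          }

          fn parse_config(raw: &str) -> Result<Config, ConfigError> {
              // Toy "key=value" format keeps the sketch dependency-free.
              let value = raw
                  .lines()
                  .find_map(|line| line.strip_prefix("num_threads="))
                  .ok_or(ConfigError::Missing("num_threads"))?;
              let num_threads = value
                  .trim()
                  .parse()
                  .map_err(|_| ConfigError::Invalid("num_threads"))?;
              Ok(Config { num_threads })
          }
          ```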

    2. 33


      My Hobby Horse.

      Let me ride it.

      I have fixed over the decades a vast number of bugs by the “One Simple Trick”.

      Grepping for every place where my predecessor had cast a return code to void (or switching on the gcc warnings that tell me where return codes are being ignored).

      And then turning every darn one into a YELL and crash now.

      Ooo. Looky you guys have been hiding creepy crawlies all over the place, let’s just fix them!

      See, now the software is rock solid instead of flaky, sometimes-doesn’t-work-for-no-reason.

      Now let’s add some subtlety.

      In this case the values are coming from an untrusted external system, aka The User. The config should have passed a stringent validation check before replacing the old one.

      Sounds like they didn’t even have a validation check.

      You should be able to fuzz the hell out of the validation checker, and every config that passes should load successfully.

      Ah, but you’ve wired the “load config” to the “do stuff” haven’t you? Bad Idea. Config is Plain Old Data without behaviour, you should be able to round trip it without involving the rest of the system.
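
      That round-trip property can be checked in isolation. A toy sketch (the `ConfigData` type and key=value format are invented here), assuming config really is plain data:

      ```rust
      // Config as Plain Old Data: load(dump(c)) == c can be tested
      // without wiring in any of the "do stuff" behaviour.
      #[derive(Debug, PartialEq, Clone)]
      struct ConfigData {
          num_threads: usize,
      }

      fn dump(c: &ConfigData) -> String {
          format!("num_threads={}", c.num_threads)
      }

      fn load(s: &str) -> Option<ConfigData> {
          let v = s.strip_prefix("num_threads=")?;
          Some(ConfigData { num_threads: v.parse().ok()? })
      }
      ```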

      Ok, if you get past all that, yes, you have a bug. The sooner you find it and fix it, the less shit downstream you have. In this case the result was dramatic and obvious… all died.

      A far far far worse scenario is it limped on with an undefined random value that resulted in corrupted data / transactions over months requiring manual check and correction of 1000’s of them.

      Better to die on start up, and if it dies, rollback to the previous config.

      Next is if system A dying kills system B, you’re lying if you think they’re independent standalone systems. Systems die, people trip over the power cord, diggers dig up cables, operators have carrots for fingers and potatoes for brains… What is your recovery strategy to recover the whole system?

      Handling error returns? Careful there, dragons lie that way, complex code, resource leaks, more and more bugs….. and in the end? Do you know how to handle it? Or are you just passing it up and up and up hoping someone knows what to do…. and the only thing they do is drop it on the floor and hide the bug.

      Rather be honest, if you haven’t a tested and working plan to handle it? Crash Now and fix it!

      In an in-production system, a crash now and restart usually results in less loss of service than degrading into a flaky, untested, undefined state that no one knows about.

      1. 16

        I agree. Crash now, crash loud and tell me I’m an idiot. Don’t limp along doing your best.

        I’m looking at you, Javascript. I want to know that field is undefined, thank you very much.

      2. 9

        The Amazon Builders Library has a solid article on avoiding fallback. I highly recommend.

    3. 15

      What I miss in this article is that ‘it makes the code messy’ is actually a very good argument. The reply shouldn’t have been ‘you should have done it anyway’, but ‘why didn’t you invest in a way to check for errors without making the code messy?’

      And if the answer was ‘there was no budget for that project’, then you know the real source of the problem.

      1. 5


        Even more so: The author assumes that “had they just bit the bullet on the messy code and always checked, then everything would be fine.” But the cost has not been weighed – the additional maintenance cost of that messier code, the cost of other bugs resulting from its complexity.

        The thesis that remains to be proved is that the cost of this incident was greater than those other costs. This is not clear. The decision is presented as pure incompetence and folly, and while it might have been, it might not have been.

        Of course, there was probably a solution that wasn’t too messy and handled the error cases. Even in Go, there are creative solutions to this problem.

      2. 5

        I’m guessing the language is Go, so you can’t really make error checking non-messy, it’s by design.

        1. 8

          Not enough budget to rewrite the project in a language that actually supports features you require, like error handling.

        2. 3

          I don’t find error checking in Go messy at all. I find it readable and explicit and clear.

    4. 8
      1. [[nodiscard]] is one of the greatest things to ever happen in C++.
      2. Assume any pointer (or equivalent) you get from something could be null unless proven otherwise.
      1. 4

        Rust has #[must_use] which is its version of the same thing. Here’s an example for those who are curious.
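
        A minimal sketch of how the attribute behaves (the `checked_halve` function is invented for illustration):

        ```rust
        // #[must_use] on a function: ignoring its return value makes the
        // compiler emit an `unused_must_use` warning at the call site.
        #[must_use]
        fn checked_halve(n: i32) -> Option<i32> {
            if n % 2 == 0 { Some(n / 2) } else { None }
        }

        fn caller() {
            checked_halve(4);         // warning: unused return value
            let _ = checked_halve(4); // explicit discard: no warning
        }
        ```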

        1. 5

          Swift has a similar thing, but it’s opt-out instead of opt-in.

          1. 1

            Nim also has must-use by default with an explicit discard to ignore. You can add a {.discardable.} pragma at the definition site to opt-out of this safety system if it is particularly foreseeable that using the return value is “optional”.

            1. 1

              I should have said as well that in Rust you can mark types as #[must_use]. The Result type is marked must_use in this way.

              1. 4

                I’m crowning Swift (and Nim and alike) winners here. I’ve done a sweep of Rust’s standard library functions once to find and fix ones that were missing #[must_use] and realized… almost all of them could be #[must_use]. There are exceptions, but they’re exceptions.

                1. 2

                  I agree. I made a post about it in the Rust subreddit and the response was very negative at the thought of having all functions be must_use by default. I was pretty disappointed because they all acted like the false-positives would be overwhelmingly noisy. But I can only assume they haven’t done any Swift (or Nim), because it’s almost never been an issue for me. There are a small handful of functions you’re going to write that return a value that’s so ignorable that it’s normal to not even bind it to something. And if that’s the case (such as the HashMap entry methods), you just opt-out and mark it ignorable…

                  1. 1

                    Yeah, on the spectrum of hive mind <--> war zone the /r/rust subreddit is a bit too close to “hive mind” for my tastes. It’s one reason why I quit reddit.

                2. 1

                  Eh, I don’t think it’s “winners” and “losers.” There is a trade-off to having more things be marked must use, that people may be more likely to turn off warnings related to it because they’re too noisy. I think Rust strikes a reasonable balance for the most part.

                  That said, to each their own!

                  1. 2

                    “people will turn off warnings” is not a big danger in Rust. Warning opt-outs are selective, explicit, and — most importantly — scoped. For discarding must_use there’s the let _ = pattern.

                    Swift has @discardableResult annotation, so it doesn’t have to annoy people. It’s just that the default is flipped to the more common case.

                    1. 1

                      They’re not inherently scoped. Yes the let _ = pattern does silence the “must use” warning, but you can do so globally with #![allow(unused_must_use)]. Here’s a Playground link showing this.

    5. 7

      The description of the bug here is a bit too vague for my taste. It’s not really clear that this is a simple “ignore checking for return value”, it sounds more like they weren’t doing in-depth validation of some complex value.

      E.g., this could have been

      getConfig(params) {
        if (condition) {
          return { mode: 1, b: bar }
        } else {
          return { mode: 2, c: baz }
        }
      }

      and then code later on crashed because it was expecting cfg.b to be defined. Now it’s not so clear that the requirement should be “validate the config to have the expected fields everywhere you fetch it”; the value returned by getConfig is always non-null, and you should be able to rely on it satisfying whatever contract is agreed on. I’m sure there would have been mistakes made here, too, but it might just be a little bit more subtle than merely not checking for a NULL function return value.

      Anyway, I don’t doubt that there was a real cultural issue here, I’d just think the problem has to be described a little more concretely to actually pinpoint that.

    6. 7

      My colleagues thought I was wasting my time writing validation for configuration: known sane defaults, alerting when required values are missing, and ensuring strict type checks.

      1. 4

        I’m curious — how do you demonstrate to them that you weren’t? Like, do they get to have moments where they try to Run The Thing and then your validator fails and they say “oh, I might’ve just wasted an hour if not for that”? This stuff is important but it can be hard to justify why it is when it’s working well.

        1. 2

          I have worked on far too many client/end user configured projects to trust any kind of input; normally you end up writing a config wizard which validates the input before storing the config.

          My demonstration was our production being brought down for a few hours by a deployment. At a quick glance at the error reports that began flooding in, something written that week was at fault, and we did quickly find what turned out to be an unrelated bug; rolling back to the previous release didn’t fix it, which highlighted to me that it was a configuration issue. Our deployment server allowed the configuration to be set but would only copy it to the destination environments on the next deployment. A mistake while pasting something into the config brought down production and sent us on a short goose chase.

          The validation I wrote was essentially a more verbose version of nginx -t. It would cause the deployment process to quit early on, before production was actually updated giving a warning that human error had happened, again.

          Yes, it did happen again, but that time the error was caught before it ate into our .999 uptime SLA with our clients.

    7. 6

      I still get tripped up on trusting sources when by now, I should know better. The last time was a few days ago when I learned (the hard way) that client certificates don’t have to have a subject field (completely optional! Who knew?). The previous was trusting the Oligarchic Cell Phone Companies to send valid phone numbers (thank God they’re on trusted networks, right? Right? Where did everybody go?). The time before that was when a sentinel “don’t delete this value from the database” was deleted from the database (which caused a cascade failure of our service—what fun).

      Yeah, check everything at the I/O boundary.

    8. 4

      Feels like half this conversation isn’t talking about what the problem actually is. It’s probably the equivalent of this:

      # config.json
      { "number_of_threads": 1 }

      import json

      def get_num_threads():
          config = json.load(open("/my/config.json"))
          # key mismatch: .get('num_threads') returns None, so int(None) raises
          return int(config.get('num_threads'))

      This is totally an API thing. Scala Map[A,B]#get takes an A and returns an Option[B]. So does Python, but you’ll only discover it at runtime if you aren’t familiar with the API. Java returns a B which in the world of Java means B | null. Python’s get returns B | None but B is less constrained. So yeah, when you use a tool you’ve got to know all the pieces of the tool. Tool which needs less memorization of its idiosyncrasies is better than other tool. Lets you operate at higher level.
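
      For comparison, a sketch of the same lookup in Rust, where `HashMap::get` returns `Option<&V>` like Scala’s `Option[B]`: the missing-key case is in the type, so the compiler forces the caller to acknowledge it before using the value (the key names mirror the hypothetical mismatch above):

      ```rust
      use std::collections::HashMap;

      // The mismatch from the config example: the map holds
      // "number_of_threads", but the code asks for "num_threads".
      fn get_num_threads(config: &HashMap<String, String>) -> Option<usize> {
          // `get` returns Option<&String>; `?` surfaces the missing key
          // instead of an int(None)-style runtime crash.
          config.get("num_threads")?.parse().ok()
      }
      ```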

    9. 3

      While it might be verbose, Go’s tooling/idioms were designed to stop this thought process.

      returnval, err := doThing()
      if err != nil {
          return err
      }
      err has to be used to avoid a compiler error. And the error handling/return value checking is idiomatic to the point that it would look wrong without it. Furthermore, failing fast on error leads to minimized indentation (so it looks prettier by the Go aesthetic).

      1. 1

        err has to be used to avoid a compiler error.

        Wouldn’t this code be accepted?

        returnval, _ := doThing()
    10. 3

      Author carefully tries not to point fingers at the language, but “you should check all error conditions | but I don’t want to!” is a very old problem, and different languages have tried to solve it in various ways. I think we’ve made meaningful progress since ALGOL 68 in this area:

      • Making handling less messy with less burdensome syntax for error propagation (like ? in Rust)
      • Propagating missing values instead of blowing up (like ?. in Kotlin/TypeScript)
      • Making checks mandatory with optional types. If the program blows up, it’s because you wrote a force-unwrapping call that makes it so.
      • Removing null entirely. Things can only be missing if they’ve been explicitly designed to be optional. That makes “but it was supposed to be there!” less of an excuse.

      I haven’t seen a solution that fixes the problem entirely, but in my experience these features can reduce it from a common problem that can happen by accident, to exceptional cases which need either pretty complex data flow and/or code that is cutting corners in ways that are easy to notice in code reviews.
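
      A sketch of the first bullet in Rust (the function here is invented): with `?`, every fallible step is still checked, but propagation costs one character instead of an if/else block.

      ```rust
      use std::num::ParseIntError;

      // Each `?` checks the Result and early-returns the Err variant,
      // so "handle or propagate" is enforced without the visual noise.
      fn sum_of_two(a: &str, b: &str) -> Result<i32, ParseIntError> {
          let x: i32 = a.trim().parse()?;
          let y: i32 = b.trim().parse()?;
          Ok(x + y)
      }
      ```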

    11. 2

      That sort of attitude from developers is super depressing. I think there are multiple possible solutions, but the culture that leads to ignoring the problem is one that needs to be fixed. Really there’s two responses to finding an issue like this:

      1. Paper it over so that crash doesn’t happen (hey, sometimes this is the best you can do)
      2. Go one layer deeper and try to eliminate a whole class of similar bugs

      The problem with #1 is you get to deal with the same bug over and over.

      There are a few ways to do #2. You can use a language without the “billion dollar mistake”. For other languages you can at least validate a config at load time centrally before returning it for use, which, from the author’s description, seems like the right thing to do here.

    12. 2

      There are two good approaches to that issue.

      The first one is to check and handle all possible unexpected results, and have tooling to ensure you do it.

      The second one is “let it crash”, famous in Erlang circles. Basically, when an error occurs, log it so you can debug it and panic the process and let a process manager restart it in a known state (“process” does not necessarily mean “OS process” here).

      In general, you would use both: very critical code (think DB transactions, etc) constitutes an “error kernel” which you write with the first approach, and all business code around that uses the second one.

    13. 2

      Idiomatic Elixir example:

      {:ok, file} = File.open("hello", [:write])

      Enforced checking by returning a tuple of {:ok, the-return-value} on success and {:error, something-else} otherwise.

      This can get ridiculous, though, so some functions are named with a ‘!’ suffix and just return the bare return value - but throw if there’s a problem.
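
      Rust spells the same escape hatch differently: `unwrap`/`expect` turn the checked form into a crash-on-error form, analogous to Elixir’s `!` functions (the parsing function here is invented for illustration):

      ```rust
      // Checked form: the caller must deal with the Err.
      fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
          s.trim().parse()
      }

      // "Bang" form: bare value on success, panic with a message otherwise.
      fn parse_port_bang(s: &str) -> u16 {
          parse_port(s).expect("port must be a number in 0..=65535")
      }
      ```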

    14. 2

      It’s about the line of defense. The article drives home the point of defensive programming, but fails to talk about exactly where defensive programming is appropriate.

      You have to draw a line in the sand: On one side, you have untrusted input, where defensive programming is appropriate. The line itself is input validation, which turns untrusted input into errors and trusted input. Beyond this input validation, it is arguably offensive programming, aka. let it fail, that is appropriate – any failure here indicates breach of contract.

      The programmer said: “It’s not supposed to be like that, so we don’t need the checks.”

      Correct – given that we are inside the line of defense! I don’t support the author’s automatic dismissal of this argument. The right question is where, along the whole information chain from the user to where the crash happened, was input validation supposed to happen.

    15. 2

      This is definitely one of those places where web dev has pushed the state of the art forward. I think in particular stuff like Django’s template library having the notion of “safe strings” (and other things always getting escaped).

      Like seriously, check your inputs! And your input should be a different shape from the result output (to avoid accidental usage).

      Rust helps with the ergonomics of this with Result + ?, and depending on your context you can make it nice.

      The way[1] to “force” return value checks is to instead provide a result on success, and an error on failure, so there’s no way to get the “thing you want” without checking code.

      [1]: not always applicable but…. applicable in so many places

      1. 8

        I would suggest that a number of communities (and possibly some languages) are perfect examples of this “it looks messy” symptom gone mad.

        Just look at all the arguments back and forth about putting semicolons in javascript. People proudly proclaim how knowledgeable they are about ASI rules, or how good their tooling is to do it for them, so that they can avoid having to see/type a semicolon in the source file, and I dunno, avoid making their intentions clear to anyone else reading it.

        IMO python isn’t much better, relying on invisible characters, so that people don’t have to bear the horrors of seeing curly braces, or [begin/def/func(tion)] / end keywords around logical blocks.

        If the people doing these things were writing a novel, they’d skip commas, semicolons, and full stops (periods for you Americans), etc in favour of either paragraph-as-a-sentence or some ridiculous “oh a double space means you should read it with a pause, as if a comma were there” type ridiculousness.

        Any discussion/argument/decision about syntax for a programming language that suggests/implies less characters = automatically better, is null and void. It’s a ridiculous assertion to make, and it’s depressing to see so many people who just go along with it when someone says this, rather than.. you know, thinking about it for a moment and then calling the person making the claim on their bullshit.

        1. 8

          IMO python isn’t much better, relying on invisible characters

          idk, indentation and newlines are pretty visible to me

          1. 4

            Seeing that it is indented isn’t enough, for any meaningful understanding of the code - you need to know that it is indented at a level that Python won’t error, and also know how many levels it’s indented, while also taking potentially mixed tabs vs spaces into account.

            1. 3

              Seeing that it is indented isn’t enough, for any meaningful understanding of the code

              I guess all python programmers just don’t understand what they’re doing then

              1. 5

                In life, some tasks we face are harder than others, for a variety of factors.

                If method A to achieve the task is harder than method B, that doesn’t mean that people using method A don’t understand what they’re doing.

                Or put another way: knowing what you’re doing doesn’t mean there isn’t an easier way to achieve the same thing.

                1. 4

                  a fair point, I see what you mean now, I will resolve to be less snarky in future

                2. 2

                  a curiosity still remains, is then C-style, in your opinion, the only legitimate syntax that can lend itself to deep introspective programming work?

                  1. 1

                    It’s not about legitimate or illegitimate. Nor am I saying that you can’t get a deep understanding of a codebase that e.g. (ab)uses ASI rules as in JS; adopts significant whitespace as in Python; relies on implicit curly braces in any number of languages; etc.

                    I’m saying that I believe those decisions make it objectively harder to quickly identify blocks of code and understand what will actually happen, and that the overarching drive behind those decisions is always “it looks nicer”.

                    I’m also not suggesting that this obsession with form over function is limited to programming languages. I’ve seen people in tropical Thailand use softwood pine doors externally on houses, suffer the consequence when the door swells so much in the humidity that it literally cannot be shut overnight (remember, external door), and then… buy more of them.

                    My point is more that the people making these decisions about code that “looks pretty” are ostensibly professionals. Clear, understandable code is a fine goal to have. Adopting a code-style guide to get consistency is even a fine goal to have. But increasing cognitive load to understand the code, and introducing unexpected behaviour (otherwise known as bugs) because of misunderstanding the intent/result of said code, in the name of making it “pretty”, would be a fireable offence in my company (as in, if you worked for me).

                    1. 3

                      It is absolutely, completely, utterly, 100%-beyond-any-possible-doubt, guaranteed, in a language which uses explicit block delimiters and which allows arbitrary whitespace at start-of-line, that any “professional” team will enforce code formatting guidelines which inevitably include use of indentation to make it easier to see where blocks begin/end.

                      Python simply says that if you’re going to do that you don’t need to also include explicit delimiters. And, as the old joke goes, Python happily supports nearly every block delimiter of every other language. Want to write in Python, but with C-style braces? OK:

                      if some_condition: # {
                          do_stuff()
                      # }

                      Prefer Ruby? Here you go:

                      if some_condition:
                          do_stuff()
                      # end

                      Of course, Python still expects you to get the indentation right, but you were enforcing that anyway.

                      (non-joking: I’ve been writing Python professionally for around 15 years now and I’ve never encountered someone who genuinely had trouble or introduced a bug due to Python’s use of indent/dedent as a block delimiter, but from the way it’s talked about in internet threads you’d think it was happening a million times a minute)

                      1. 1

                        Yes, they will likely have a style guide to dictate standard whitespace formatting.

                        But that’s just it: it’s about what works for them. Want to have all-tab indents, and “chop down” fluent method calls to another level of indentation? Go for it. Want to use tabs for indents and spaces for aligning multi-line property lists/method calls/etc? Go for it. Want to be a savage and use spaces for all indents, with 8 spaces per ‘level’? Go for it.

                    2. 2

                      “I believe” and “objectively” don’t fit in the same sentence; either you can bring data, or you’ve got a gut feeling, but you don’t get it both ways.

                      For what it’s worth, my gut feeling is that you are incorrect. Which has as much basis in fact as your assertion.

                      1. 1

                        I don’t think my meaning was clear.

                        It is my opinion that code is objectively (as in, not subjectively per-person doing the task) more complex to process in the scenarios described above.

                      2. 1

                        “I believe” and “objectively” don’t fit in the same sentence

                        “Objectively” in this context means “observable” or “measurable”. You can absolutely claim to believe something is observable or measurable without laying claim to having quantified observations or collected measurements ready to hand over.

                        Usually the primary value of such a claim lies in persuading someone with resources/opportunity to use those resources/that opportunity, so that such an observation or measurement can be made and reported.

                        1. 1

                          That’s when you’d deploy the word “observably” instead, which is more accurate and potentially supportable. Saying it’s objectively true is laying claim to a non-observation-dependent fact.

                          Which it isn’t.

              2. 4

                It boils down to cognitive load. I have a finite amount of mental attention that I can devote to anything. The more of my attention is focused on things that are not core to the problem that I’m solving, the less I have available for the real work. When the language makes me model ownership in my head so that I can get memory management right, rather than exposing it in the type system, that adds cognitive load. When the language doesn’t provide generic data structures so I need to escape the type system and track the real type in my head, that adds cognitive load. When the language provides an indenting model that doesn’t hit the fast paths in my visual cortex, that adds cognitive load.

                I evolved from animals that were hunted by creatures with symmetrical faces. My visual cortex is astonishingly good at spotting symmetry because most of my potential ancestors that were not good at this were eaten by predators with symmetrical faces before they could reproduce. There’s a reason that quotes, brackets, braces, and so on all come in symmetrical pairs: the human visual cortex can spot the symmetric pair very early on in the processing pipeline. Spotting the end of an indented block is harder than spotting the close-brace that goes with an open brace.

                Sure, I can do manual memory management in C, I can read Pascal code with begin and end blocks or Python code with no block delimiters, I can use things like UTHash or the 4BSD data structures and track what that void* actually represents in this context, but whenever I do I have less available attention available for the task that I actually want to achieve. The more I have to think about the tool and not about the task, the less useful the tool is.

                1. 4

                  see, this is where I disagree, parens that are opened and closed on the same line can make the argument for symmetry, but curly brackets that are positioned haphazardly don’t do it for me

                  I would entertain this idea if the people that proposed it also followed the Allman indentation style, but they don’t

                  as a psychologist, I’ll have to see more proof about the supposed symmetric effect than just anecdote

                  fact of the matter is, you were trained to recognize this style of formatting and now you’re comfortable with it. deviations from it are not somehow scientifically incorrect, just something you have not trained yourself to see

                  1. 2

                    I agree with @formerly_a_trickster that @david_chisnall is post hoc rationalizing what “feels easy”.

                    I think the issue is that (at least since the if/while structured programming trends started in the 1960s) code is 2D (abstractions of computer memory/parsing are 1D until 2D is re-established in abstract syntax trees). Using indentation as syntax embraces this 2 dimensionality rather than it being “only” a “redundant structural cue”. 2D vision was uncontentiously at least as important to our ancestors as symmetry, but probably much more so.

                    The real issue is how much one prefers/needs/wants/is trained to like/filter/apply “redundant structural cues” - both indentation and bracket characters and punctuation or some subset. Some people find redundancy helpful. Others find it distracting. That’s about it. Old time Lisp hackers will always defend all the parenthesis bracketing by “I read by indentation anyway”.

                    (EDIT: https://en.wikipedia.org/wiki/Indentation_style has many examples of indentation style. I feel like most people violently opposed to Python/Nim/etc.-like whitespace for block structure are secretly much more interested in the vertical white space - a non-syntactic thing in both domains - or else one would hear more often the rebuttal “just format C-like bracketing with Lisp style indentation”.)

                    1. 2

                      “Color” is another divisive distracting-to-some but delightful-to-others “redundant cue” in the modern era - both in code and in terminal command-line utils as per https://lobste.rs/s/2mxwdm/rewritten_rust_modern_alternatives (and italic/bold/underline/upper/lowercase and so on). Many people (something like 8% of white men!) are red-green color-blind/impaired.

                      Population diversity of literal insensitivity to a stimulus makes for much stronger biological arguments for UIs than just-so evolution stories (and programming languages are just UIs to express computational activity). Yet, for other people color is not a distraction but really makes syntax/important things pop off the screen, as exhibited in that other thread.

                      Predator-wise, we are all more sensitive to motion than anything else, but animated dancing Paper Clips like on Microsoft products probably take attention-getting too far for programming. I think arguing anything here is “objective” or “biological” is likely a mistake.

                      The question (including the original Rachel bug) is how to “game human attention” in just the right way to reduce trouble - the right amount of detail in the right places. That question is extremely vulnerable to “I know how it works for me” biases and projecting those biases onto others.

                  2. 2

                    There were studies in the ‘80s on comprehension rates between begin / end and { }, but the ones I saw all used variants of the Allman style - I would expect you to lose the symmetry recognition if they’re not vertically aligned.

                    1. 1

                      interesting, thanks for pointing that out, I’ll look out for them

                      1. 1

                        There is Floyd1983. It doesn’t study indentation, but rather “symmetric” operators vs. ALL-CAPS words that match the opening construct (ENDIF, ENDWHILE, etc.). This comports with the commonly seen annotations #else // START COND and #endif // START COND in C preprocessor logic – notably often not indented even though they could be! – maybe because there are two overlaid logical structures going on.

                        Then there is also Bauer2019, which is about “amount of indentation” - 2, 4, 8 terminal columns, etc. They find no statistically significant variation in any of their 6 research questions. So, again, it’s likely a mistake to argue anything here is objective or biologically rooted.

                2. 1

                  I evolved from animals

                  I didn’t. This must be the reason why I’m fine with indentation-based syntax ;-)

              3. 3

                Can confirm: I am python programmer and I have no idea what I’m doing.

            2. 1

              at a level that Python won’t error, and also know how many levels it’s indented, while also taking potentially mixed tabs vs spaces into account.

              Do you? Python will yell pretty loudly if something like that is wrong. You can just get back to reading code and assume if the computer yells it will be a quick fix. After all, why babysit the compiler?
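              To illustrate that claim, here’s a minimal sketch: Python’s parser rejects an inconsistently indented block outright, so the mistake surfaces immediately instead of silently changing the program’s meaning.

              ```python
              # Compiling a mis-indented snippet: Python raises IndentationError
              # rather than guessing which block the stray line belongs to.
              source = """
              def f():
                  x = 1
                    y = 2  # indented one level too deep
              """

              try:
                  compile(source, "<example>", "exec")
              except IndentationError as e:
                  print("rejected:", e.msg)
              ```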

        2. 6

          Any discussion/argument/decision about syntax for a programming language that suggests/implies fewer characters = automatically better is null and void.

          APL intensifies

          1. 2

            is null and void.

            = 0∧ø

            there, automatically better!

        3. 1

          I mean the Python thing resolves the semicolon thing by making whitespace important. Also resolves “weird indentation” for many classes of weird layouts…..

          Going between Rust and Python, having to deal with the mix of parens and braces in Rust actually makes some code harder to deal with, especially in messier conditions/matches. There is value in “fewer characters means more functions fit on one screen”, though you can be principled about it (shortening everything to one char like K does is something else entirely).

          If aesthetics didn’t matter we wouldn’t have invented word processors or… the paragraph. I definitely feel you on the hidden semicolon stuff, though. There are good ways to do things, and then there’s really doing things in a less principled manner.

    16. 1

      Good post and discussion.

      This doesn’t handle memory outside the system. This doesn’t handle IO, network, database, etc. You cannot avoid all runtime errors, so there are no guarantees. You cannot invent a language or a machine/CPU/FPGA that doesn’t have some kind of runtime error. Yes, all of us, with our different machines and languages, need to do error handling, testing, and monitoring, and much more than that. Types and compilers are helpful, but many people think there is some kind of magic and then are surprised when it fails. This also doesn’t mean “don’t use types and compilers”. You need a mix of tools, but you also need to understand what will never be possible and why there is no silver bullet.

      • If I add typescript I don’t need tests. No.
      • My compiler will catch all my mistakes. No.
      • The new version of my language/framework will fix this. No.
      • The next trendy language will have no errors of any kind. No.

      What works (probably) but we (whoever that is, definitely includes me) don’t do enough of is:

      • Feedback on pain points (money/time)
      • Add tooling
      • Take time to teach to open minds (not a time issue)
      • Change culture (but how and with what authority)

      Maybe some approaches for the original story:

      Add a test to simulate the country that is having problems. It fails. Make the test pass. Decide if there is a refactoring opportunity. Refactor (or not). Commit the tests and the fix. The repo is stronger. There’s a monitoring angle here too but to me monitoring is a different kind of testing - it’s the same test just continuous.
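      As a hedged sketch of that workflow (every name here - load_country_config, price_with_tax, the failing country code "xx" - is a hypothetical stand-in, not the actual system from the story):

      ```python
      # Hypothetical config loader: the problem country has a None field,
      # mirroring the null-in-the-config failure from the original story.
      def load_country_config(country):
          configs = {
              "us": {"tax_rate": 0.07},
              "xx": {"tax_rate": None},  # the country that broke production
          }
          return configs.get(country)

      def price_with_tax(country, price):
          cfg = load_country_config(country)
          # The fix: validate the loaded config before using it,
          # instead of crashing later with a TypeError on None.
          if cfg is None or cfg.get("tax_rate") is None:
              raise ValueError(f"invalid config for country {country!r}")
          return price * (1 + cfg["tax_rate"])

      # The test that simulates the problem country. It crashes without
      # the validation above, and passes once the fix is in.
      def test_problem_country_is_rejected():
          try:
              price_with_tax("xx", 100.0)
          except ValueError:
              return  # expected: bad config is rejected explicitly
          raise AssertionError("expected ValueError for bad config")

      test_problem_country_is_rejected()
      ```

      The point is the ordering: the failing test is committed alongside the fix, so the repo ends up stronger than before the incident.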

      If the problem is all I/O, then you can’t control the memory, so all your types and compilers have no power. You can let it crash, but it still doesn’t work. If you want to avoid or at least know about breakage, there is contract testing, which is pretty neat in this era of mashups and cloud. See pact.io - it’s weirder but easier than it seems. You need to believe in Pact a bit, though; it’s slightly different from what you may have seen. It’s a different layer of tests. Ultimately, this is probably culture, but I like introducing tools as enforcement/codification of cultural values.

    17. 1

      Obviously, test early, test often. Everything that can fail should have a pre- and post-condition check. Yes, that means:

      const result = somecall(x);
      if ( ! resultValid(result)) {
          throw new Error(...);
      }

      And somecall should validate all its parameters the same way. Ideally you’d trust somecall to validate its result, but in reality it’s some library you don’t own and can’t trust.
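      In Python terms, the same pre-/post-condition pattern might look like this (somecall and its checks are hypothetical illustrations, not a real library):

      ```python
      def somecall(x):
          # Precondition: validate the parameter on entry.
          if not isinstance(x, int) or x < 0:
              raise ValueError(f"somecall: bad argument {x!r}")
          result = x * 2  # stand-in for the real work
          # Postcondition: validate the result before returning it.
          if result < 0:
              raise RuntimeError("somecall: invalid result")
          return result
      ```

      The caller still checks the result, because in practice somecall is someone else’s code and you can’t assume it did its half.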

      Often in production code, you go along thinking it can’t fail, and then it does, and then you instrument the entire section with more tests.

      I haven’t heard anyone say “it’s messy” before, but “we’re time-constrained”, or “we didn’t think it could fail” are very common. And there’s a subset of soi-disant programmers who will not write tests, no matter what you do, and you either code around them, treat them like faulty hardware, or fire them.