1. 21

  2. 5

    There was exactly one moment when I was willing to consider the performance of that code, and it was the moment my shell command stuttered.

    If you can dismiss the issue at your whim, maybe the performance of the code wasn’t that relevant in the first place? I have trouble understanding this. If my browser decides to do some garbage collection right when I’m compiling, that’s likely to have vastly more impact on compilation speed than one unit test that’s slightly slower. I can make a change that adds 50ms to a 100ms function. Or I can make a change that turns a 0.1ms function into a 1ms function. Undetectable using this method, but a much worse regression.

    Of course, Go has a pretty good built-in system (`go test -bench`) for collecting objective, reproducible data about performance regressions. But you have to decide upfront which code needs to be benchmarked, and how.
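    For reference, a benchmark normally lives in a `_test.go` file as a `BenchmarkXxx` function and runs via `go test -bench=.`; the same machinery is also callable directly through `testing.Benchmark`. A minimal, self-contained sketch (the `Sum` function here is a hypothetical stand-in for the code under measurement):

    ```go
    package main

    import (
    	"fmt"
    	"testing"
    )

    // Sum is the (hypothetical) code whose performance we want to track.
    func Sum(xs []int) int {
    	total := 0
    	for _, x := range xs {
    		total += x
    	}
    	return total
    }

    func main() {
    	// testing.Benchmark runs the closure with increasing b.N until the
    	// timing stabilises, without needing the `go test` driver.
    	result := testing.Benchmark(func(b *testing.B) {
    		xs := make([]int, 1000)
    		b.ResetTimer()
    		for i := 0; i < b.N; i++ {
    			Sum(xs)
    		}
    	})
    	fmt.Printf("%d iterations, %d ns/op\n", result.N, result.NsPerOp())
    }
    ```

    Unlike eyeballing a stuttering shell command, this yields a per-operation number that can be compared across commits, e.g. with `benchstat`.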

    I know that for most software, desired performance isn’t specified, so code has a natural tendency to get slower over time. But there has to be a better way to stop this than relying on compiler interactivity.

    1. 8

      This author either doesn’t know about Niklaus Wirth or forgot to mention him. Wirth’s main metric for assessing language complexity was how fast the compiler compiled in general, and how fast it compiled itself. He ditched any feature that slowed that down. The code his compilers generated could never be quite as fast as C/C++’s, but Oberon-2’s fast development pace with safe code was a major inspiration for Go. If anything, Wirth’s legacy went mainstream when someone finally did something non-academic with it.

      The author makes another widespread mistake: assuming that fast-moving teams need languages with fast compilers. That’s a half-truth at best. The alternative I push is a combination of a REPL or fast compiler with an ultra-optimizing compiler. Most of the iteration happens with the fast one; anything that seems roughly finished gets compiled in the background or on a build server with more optimization, followed by lots of testing, and is then integrated into the new, quick-to-compile builds. Plenty of caching and parallelization in the build system, too.

      1. 6

        I use ghcid which only type-checks by default and doesn’t fully compile. Then I run tests, which compiles without optimisation. Then I ship with full optimisation!

        1. 2

          Wise man. I might cite you as an example in the near future.

          1. 2

            I do something similar with Rust: first I cargo check as I’m developing, then I cargo test when the typechecker no longer has anything to complain about, and finally I cargo build --release.