1. 6
  1.  

  2. 7

    Testify committer here. I haven’t done much work on Testify in a couple of years other than occasionally boosting PRs, but a few years ago I was involved in a bit of cleanup on the project, and the codegen was pretty annoying. I don’t recall the exact issues, but basically it would only run correctly in a very specific environment, and different environments would produce different results. In some cases there had been PRs merged without re-running the codegen, which obviously caused weird issues. That isn’t to say that I disagree with it, just that the complexity involved is a real consideration.

    1. 3

      In some cases there had been PRs merged without re-running the codegen, which obviously caused weird issues.

      Golden rule of codegen: it should be impossible to check in stale code. Two ways to achieve that:

      • don’t check in generated code; rely on the build system to guarantee it is run as part of the build
      • check in generated code, and add a test to verify freshness
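
      The second option can be sketched as an ordinary Go check. Everything here is a hypothetical stand-in: generate() for a real (deterministic) generator, and checkedIn for the file committed to the repository:

      ```go
      package main

      import "fmt"

      // generate stands in for a real code generator (hypothetical).
      // It must be deterministic for a freshness check to be meaningful.
      func generate() string {
      	return "// Code generated; DO NOT EDIT.\npackage mocks\n"
      }

      // checkedIn stands in for the generated file committed to the repo.
      const checkedIn = "// Code generated; DO NOT EDIT.\npackage mocks\n"

      func main() {
      	// The freshness check: regenerate and compare byte-for-byte.
      	if generate() == checkedIn {
      		fmt.Println("fresh")
      	} else {
      		fmt.Println("stale: re-run the generator and commit the result")
      	}
      }
      ```

      In a real repository this would live in a test that shells out to the generator and diffs against the working tree, so a stale commit fails CI.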
      1. 1

        Two ways to achieve that:

        A third way:

        • check in generated code, and make your build system automatically update it in the source tree as part of the build

        With this approach, the only way to end up with stale generated code is to commit a change without building/testing it. But nobody does that, right?

        1. 2

          With this approach, the only way to end up with stale generated code is to commit a change without building/testing it. But nobody does that, right?

          Well, I totally do commit & push without local testing, relying on CI to run the tests asynchronously. So I think if we did literally just what you are suggesting, I’d sneak in stale code. So the build system should also fail the build when the generated code is stale and it’s running on CI.

          1. 1

            Don’t you at least build locally to make sure there are no syntax errors, etc.? If yes, then that would be enough.

            1. 3

              Most of the time :). In general, I feel that “server-side” checks work much better for these kinds of things. That is, I generally try to push as many properties as possible to CI, to enforce that they actually hold for the master branch.

      2. 1

        My limited experience with code generation makes me think that it’s good practice to consign it to a dedicated package that only (or mostly) contains generated code. Mixing hand-written and generated code will always make it easy (or tempting) to forget to rerun code generation.

      3. 6

        Okay, but this is because testify has a bad API. My “be” testing library doesn’t need a duplicate assert package, because I just take the *testing.T and wrap it in be.Relaxed(t) when I want the test to keep chugging along.
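
        A sketch of that shape (not the real be API, whose details I’m only assuming): one generic assertion written against a tiny interface, so a strict *testing.T and a relaxed wrapper share a single implementation instead of two parallel packages. The fakeT type here is purely for demonstration:

        ```go
        package main

        import "fmt"

        // T is the small surface a generic assertion needs; both *testing.T
        // and a hypothetical "relaxed" wrapper can satisfy it.
        type T interface {
        	Helper()
        	Fatalf(format string, args ...any)
        }

        // Equal is one generic assertion instead of a duplicated assert package.
        func Equal[V comparable](t T, want, got V) {
        	t.Helper()
        	if want != got {
        		t.Fatalf("want %v; got %v", want, got)
        	}
        }

        // fakeT records failures so we can demo Equal outside a real `go test` run;
        // a relaxed wrapper would similarly record and continue instead of aborting.
        type fakeT struct{ failed bool }

        func (f *fakeT) Helper() {}
        func (f *fakeT) Fatalf(format string, args ...any) {
        	f.failed = true
        	fmt.Printf("FAIL: "+format+"\n", args...)
        }

        func main() {
        	t := &fakeT{}
        	Equal(t, 4, 2+2) // passes silently
        	Equal(t, 5, 2+2) // reports a failure
        	fmt.Println("failed:", t.failed)
        }
        ```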

        1. 2

          That looks simple, doesn’t drag in any dependencies, and uses generics for the Equal(). Nice!

          1. 1

            TY

        2. 2

          I write Go at work, and we’re moving to codegen-ing quite a bit of our everyday stuff. Now, I wouldn’t call myself a Go fan; in fact, I’m fairly frustrated by it on the regular. But there is something to be said for a very simple semantic core and leaning on codegen to handle the expressivity. I just wish Go had macros built in, so you didn’t need to pick your codegen tool and it would all be AST-based instead of string-based.

          I’m of the opinion recently that we’re at a local maximum with PL syntax expressivity. I do not see what could be done to really change things all that much. I’ve used all of the state of the art type systems - Scala, F*, OCaml, Haskell, Idris, you name it. None of them affect the raw amount of code that you have to write all that much. Probably the only thing that affects it is not having types at all, and I don’t even think that is all that expressive across the whole system. There is clear essential complexity with the level of logic that we’re writing.

          So I’m very open to codegen and macros recently. Sure, they can consist of total black magic and be hard to debug and fully understand. But I don’t think we have an alternative. There’s an upper limit on how much code a human can produce in a given timespan, and more importantly there’s a limit on the surface area of how much a human can understand well enough to successfully modify code correctly. There are some clear information-theoretic limits at play, and no, I don’t think that we’re one beautiful PL feature away from getting past those limits in a meaningful way, and yes, I’m proposing that the answer is macros and/or codegen to get around it.

          1. 3

            A thing to keep in mind is that none of the languages you’ve mentioned are trying to make it so that you write less code, but to ensure certain classes of errors are caught sooner and so that code is more likely to be “obviously correct”. There’s a limit to how much code someone can produce, but you can increase how much of their time isn’t wasted on silly things that the compiler can/should catch.

            Codegen does help, as do macros, but only when they’re hygienic. Generics help here too, as they’re a form of typesafe codegen.

            1. 1

              That’s true, I didn’t mean to focus on type systems exactly, but meant that these are the languages with the most advanced features overall, which should translate to programmer productivity in some way. I fully agree that “productivity” has two aspects: raw surface area, but also manageability of that surface area. Types and advanced PL features help with the manageability, but the surface area magnitude is what I’m most concerned about now.

            2. 2

              I’ve used all of the state of the art type systems - Scala, F*, OCaml, Haskell, Idris, you name it. None of them affect the raw amount of code that you have to write all that much.

              I don’t completely disagree with your larger point, but I think you’re overstating the case with respect to the amount of code.

              For example, even with the same language, it is not uncommon to see a 2-3x difference in the amount of code needed depending on who writes it. And I’m not just talking about golf tricks – I mean the difference between two fully formed solutions whose goal was readability and correctness.

              On top of that, while OCaml and Haskell, say, might be close in expressiveness, there is a substantial average reduction in code between Go and Haskell, say. At least 2x, and it can be greater. Whether this translates to less time spent overall is a separate question, but there is no question that some languages are substantially more concise than others.

              1. 2

                I have no quantitative info about this; it’s definitely just my current intuition, and I’d actually really like to get some more quantitative data here. So I can’t disagree with you, because I have nothing to base it on other than feelings.

                My feelings, though, are that of course you’re right, but I don’t think that means what I’m saying was overstated. You can get marginal improvements by “just writing the code better” and “choosing a more expressive language,” but I still don’t think it’s good enough. We need a much higher level of abstraction.

            3. 2

              Pretty much all the problems of codegen (especially those intrinsic to repeated-codegen, as opposed to one-off template generators) are negated by a language just having a good (or even just adequate) macro system. One of many ways Go’s Luddite approach to language design is incredibly frustrating.

              1. 1

                I’ve always been susceptible to the argument that most programming languages have perfectly good operators to express equality, and that writing foo == bar is shorter and more legible than assert.Equal(foo, bar).

                For me, the real value of testify is in the ObjectsAreEqual and diff functions, which are probably tricky, or at least tedious to implement.

                But using code generation for a unit test library does make you wonder: a lot of (though not all) unit tests are tedious to write and straightforward, just comparing input to expected output. Can’t we use code generation to get us out of this chore?
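
                Go’s idiomatic answer to that chore is arguably table-driven tests rather than codegen: the input/expected-output pairs become data, and the comparison is written exactly once. A toy sketch, where add is a made-up function under test:

                ```go
                package main

                import "fmt"

                // add is a made-up function under test.
                func add(a, b int) int { return a + b }

                func main() {
                	// The tedious part, input vs. expected output, becomes a
                	// data table; the assertion logic is written once.
                	cases := []struct{ a, b, want int }{
                		{1, 2, 3},
                		{0, 0, 0},
                		{-1, 1, 0},
                	}
                	for _, c := range cases {
                		if got := add(c.a, c.b); got != c.want {
                			fmt.Printf("add(%d, %d) = %d, want %d\n", c.a, c.b, got, c.want)
                		}
                	}
                	fmt.Println("all cases checked")
                }
                ```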

                1. 4

                  For me, the real value of testify is in the ObjectsAreEqual and diff functions, which are probably tricky, or at least tedious to implement.

                  I use go-cmp for that. It’s also recommended in the official Go wiki: https://github.com/golang/go/wiki/TestComments.
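
                  For comparison, the stdlib baseline for this kind of deep comparison is reflect.DeepEqual; what go-cmp’s cmp.Diff adds on top is a report of where two values diverge, which is the hard “diff” part mentioned above. A minimal stdlib-only sketch with a toy struct:

                  ```go
                  package main

                  import (
                  	"fmt"
                  	"reflect"
                  )

                  // user is a toy struct with a slice field, where == would not even
                  // compile, so a deep comparison is required.
                  type user struct {
                  	Name string
                  	Tags []string
                  }

                  func main() {
                  	want := user{Name: "ann", Tags: []string{"admin"}}
                  	got := user{Name: "ann", Tags: []string{"admin"}}
                  	// DeepEqual only answers "equal or not"; go-cmp would also
                  	// show the exact field that differs.
                  	fmt.Println(reflect.DeepEqual(want, got))
                  }
                  ```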