1. 16
    1. 2

      I’ve read through this material multiple times, but I haven’t put it into practice. James has published a few videos that dive into the details.

      1. 2

        How much of this applies to languages that aren’t class-based? Most of the advice seems singularly focused on manipulating classes and class hierarchies into specific shapes or workflows.

        1. 6

          The important bits (don’t call deps directly, wrap them; allow for a “null” implementation of the wrapper; even so, minimize the extent of the code which needs the dependency; etc.) are 100% language-agnostic (a quick sketch of the idea follows below). Though, it does feel like the same could have been conveyed more efficiently (though not necessarily more convincingly) by avoiding some jargon.
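
          Here’s a minimal sketch of that wrapper-plus-null-implementation idea in plain JavaScript (the Clock example and its names are mine, not from the article):

          const assert = require("node:assert");

          // Wrap the infrastructure dependency; production code depends on
          // Clock, never on Date directly.
          class Clock {
            static create() {
              return new Clock(() => Date.now());   // real implementation
            }

            static createNull(fixedTime = 0) {
              return new Clock(() => fixedTime);    // “null” implementation for tests
            }

            constructor(nowFn) {
              this._nowFn = nowFn;
            }

            now() {
              return this._nowFn();
            }
          }

          // In tests: no mocking framework needed.
          const clock = Clock.createNull(42);
          assert.equal(clock.now(), 42);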

        2. 1

          I’ve always loved this; it seems to match how I think about testing. There’s one part that I’ve always been stuck on though, and I even emailed the author about it: the part about collaborator-based isolation. The idea is that, to avoid breaking a ton of tests when you change the implementation of something that a bunch of other services depend on, you call that something directly when building the expectation, e.g.:

          const assert = require("node:assert");  // `it` comes from the test runner (e.g. Mocha)

          // Example test (Address and createReport come from the production code under test)
          it("includes the address in the header when reporting on one address", () => {
            // Instantiate the unit under test and its dependency
            const address = Address.createTestInstance();                 // Parameterless Instantiation
            const { report } = createReport({ addresses: [ address ] });  // Signature Shielding
          
            // Define the expected result using the dependency
            const expected = "Inventory Report for " + address.renderAsOneLine();
          
            // Run the production code and make the assertion
            assert.equal(report.renderHeader(), expected);
          });
          

          The address.renderAsOneLine() call hides the implementation details of how addresses are rendered, so if that changes, this test will continue to pass. The thing is, now the test is coupled to the API of the collaborator, which I feel is just as much coupling as stubs and mocks give in general. It means that the structure of the dependency graph is referenced in the tests, so that structure is still hard to change.

          I haven’t found an answer to this, by the way, which is why I mostly experiment with generative and model-based testing (a rough sketch of what I mean is below). But I wonder if anyone has any insight there, because I would definitely use these patterns at work if so.
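
          By generative testing I mean something like this property-based sketch using the fast-check library (the Address constructor here is hypothetical; renderAsOneLine() is the collaborator from the example above):

          const fc = require("fast-check");

          it("never renders an address across multiple lines", () => {
            fc.assert(
              fc.property(fc.string(), fc.string(), (street, city) => {
                const address = new Address(street, city);  // hypothetical constructor
                // Assert an invariant rather than an exact string, so the test
                // survives changes to the rendering details.
                assert.ok(!address.renderAsOneLine().includes("\n"));
              })
            );
          });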

          1. 2

            Hi AMW, author here. I don’t remember seeing your email, so if I didn’t respond, feel free to send it to me again.

            I share your concerns about Collaborator-Based Isolation. It’s something I use very selectively. Usually, I just put in the actual expected result, because I want to know if the behavior of dependencies changes. But there are some cases where the behavior of dependencies doesn’t matter, and it’s likely to change, so I use Collaborator-Based Isolation to protect me from those changes.
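
            For example, here’s the earlier test with the expected result hardcoded instead (the rendered string is invented; it would be whatever the test instance actually renders as):

            it("includes the address in the header when reporting on one address", () => {
              const address = Address.createTestInstance();
              const { report } = createReport({ addresses: [ address ] });

              // Hardcoded expectation: this deliberately fails when the
              // rendering behavior of the Address dependency changes.
              assert.equal(report.renderHeader(), "Inventory Report for 123 Test St, Testville");
            });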

            But you’re right: using Collaborator-Based Isolation does lock in my relationships, although not as badly as mocks do, so it’s not something to do everywhere.

            I’m new to Lobsters, so I’m not sure if I’ll get notified if you respond to this. I’ll try to remember to check back. But emails are also welcome.

            1. 1

              Ah yeah, I don’t get email notifications for comments either.

              You did respond - this was over a year ago for sure. That’s basically the conclusion I left off with: of course calling collaborators directly couples you to them. This isn’t necessarily a bad thing, just something to keep in mind. I do see it as coupling equivalent to mocks though - you’re coupled to the interface and presence of that collaborator. If either changes, the test has to be updated.

              1. 1

                Yes, agreed, that’s why I use it sparingly. I’ve de-emphasized it in the article since you posted, too, and I plan to take it out of the initial example.

                I think the tradeoff is a judgment call: will I be changing behavior more often, or changing relationships more often? For me, I’m almost always changing relationships more often, so I tend to just hardcode the expected response. I actually like seeing tests fail when behavior changes, even if it means I have to do some busywork, because it reminds me of the scope of my changes and gives me confidence that the tests are doing their job.

                1. 1

                  Agreed - this goes back to sociable vs. solitary tests in general. Like you and Martin Fowler, I’ve never seen it as a huge deal when multiple tests fail based on a change. That only gives more signal about what’s wrong, and then when it gets fixed I have even more confidence.