1. 60
    1. 12

      I very strongly agree with this. To the point, in fact, that I consider the use of a mocking framework to be a code smell most of the time, and I pay much closer attention to tests that use them, pushing back against any mocking that assumes too much about the internal implementation of the class under test.

      A test that uses a Fake is much easier to verify than a test using a mocking framework. You can generally assume that the Fake works properly and only evaluate the test logic itself. With mocking frameworks you are evaluating both the mock “script” and the test logic, which raises the review burden.
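
      To make that concrete, here’s a minimal fake sketched against the article’s blob storage example (I’m guessing at the exact interface shape): it’s ordinary code you can read and meta-test once, then trust in every test that uses it.

      using System.Collections.Generic;
      using System.IO;
      using System.Threading.Tasks;

      // Assumed shape of the article's example interface.
      public interface IBlobStorage
      {
          Task UploadFileAsync(string path, Stream contents);
          Task<Stream> DownloadFileAsync(string path);
      }

      // The Fake: a small, real implementation backed by a dictionary.
      public class InMemoryBlobStorage : IBlobStorage
      {
          private readonly Dictionary<string, byte[]> _blobs = new();

          public async Task UploadFileAsync(string path, Stream contents)
          {
              using var buffer = new MemoryStream();
              await contents.CopyToAsync(buffer);
              _blobs[path] = buffer.ToArray();
          }

          public Task<Stream> DownloadFileAsync(string path) =>
              Task.FromResult<Stream>(new MemoryStream(_blobs[path]));
      }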

      1. 9

        I tend to agree. In fact, I have even been on teams that shipped fakes as part of our library so that our consumers could test their code without having to mock ours.

        Another thought: if you’re only mocking one method on a class, then maybe the code you’re testing should just accept a function instead of an instance of the class (at least in languages with first-class functions, and especially languages that allow tear-offs). Then you don’t need a mock; you can use an anonymous function, eliminating one layer of abstraction/obfuscation in the test.
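
        A rough sketch of what I mean in C# (ReportGenerator and the clock delegate are names I just made up):

        using System;

        public class ReportGenerator
        {
            // Accept the one thing we need, a function, instead of an
            // IClock interface a test would otherwise have to mock.
            private readonly Func<DateTime> _now;

            public ReportGenerator(Func<DateTime> now) => _now = now;

            public string Title() => $"Report for {_now():yyyy-MM-dd}";
        }

        // In a test: no mock, just an anonymous function.
        // var generator = new ReportGenerator(() => new DateTime(2024, 1, 1));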

        1. 6

          You, sir, are my favorite kind of library author. When a library provides fakes, it makes the library so much easier to use properly.

      2. 1

        How do you feel about stubs, then?

        1. 3

          Stubs are a judgement call, I think. In many cases, if you need a stub that does nothing, that says more about the class you are testing than it does about the usefulness of stubs. You might need it to get your test code to work, but it’s probably an indication that the class you are testing needs a refactor.

          I cut as many corners as the next guy in order to get something out the door but that doesn’t mean I don’t recognize I’m cutting corners.

          1. 2

            I cut as many corners as the next guy in order to get something out the door but that doesn’t mean I don’t recognize I’m cutting corners.

            Wisdom, right here.

    2. 8

      Been thinking about this for a while. I’m a relative newcomer to testing, but:

      I look at most mocks and think “Exactly what are we testing here?” - most of the time people seem to mock out various functionality but then not actually TEST anything!

      So you’ve written 300 lines of code to create MockPipelineConnectionFrobnicator, but what does that prove other than you know how to write mocks?

      1. 2

        This is something I had to drill into my employees: if you create a mock, you had better write an assertion against that mock.
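
        Something like this (a sketch with Moq and xUnit; DocumentUploader is a stand-in subject, and the IBlobStorage shape is assumed from the article):

        using System.IO;
        using System.Threading.Tasks;
        using Moq;
        using Xunit;

        public class DocumentUploader
        {
            private readonly IBlobStorage _storage;
            public DocumentUploader(IBlobStorage storage) => _storage = storage;
            public Task SaveAsync(string name, Stream s) =>
                _storage.UploadFileAsync($"docs/{name}", s);
        }

        public class DocumentUploaderTests
        {
            [Fact]
            public async Task Save_UploadsToStorage()
            {
                var storage = new Mock<IBlobStorage>();
                var uploader = new DocumentUploader(storage.Object);

                await uploader.SaveAsync("test.txt", new MemoryStream());

                // The mock exists, so assert against it; without this Verify
                // the test passes no matter what the uploader does.
                storage.Verify(
                    s => s.UploadFileAsync(It.IsAny<string>(), It.IsAny<Stream>()),
                    Times.Once());
            }
        }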

    3. 6

      In functional programming, we don’t have these mocking libraries, but there’s a bunch of confusion when I say things like “we don’t use mocks” - as the article points out, the term “mocking” now includes things like “fakes” to most people I talk to.

      There’s precedent for the term “fake”, but I don’t even like that: there’s nothing fake about the implementations, they’re just different to production.

      1. 1

        For example, crux db has an in-memory database, much like sqlite3 does. You can use that database to run tests, even if you would never use it that way in production.
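
        In C#, for example, a throwaway in-memory SQLite database is one connection string away (a sketch using Microsoft.Data.Sqlite):

        using Microsoft.Data.Sqlite;

        // The database lives only as long as the connection, which is
        // usually exactly the lifetime a test wants.
        using var connection = new SqliteConnection("Data Source=:memory:");
        connection.Open();

        using var create = connection.CreateCommand();
        create.CommandText = "CREATE TABLE docs (path TEXT PRIMARY KEY, body BLOB)";
        create.ExecuteNonQuery();
        // ...run the code under test against this connection...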

    4. 4

      It appears to me that fakes are substantially harder to write than mocks, so you need a fairly complex project or piece of functionality to justify the cost of implementing (and meta-testing) a fake. Of course fakes are preferable to mocks, but when is creating one worth it? We add dependencies to reduce the complexity of our code, so a fake has to pay for its own complexity too.

      One mitigating factor is that well-designed fakes are inherently reusable, which helps a lot with maintainability.

      Still, while writing a fake is easy for the IBlobStorage example interface, real-world dependencies can be much more complex, e.g. interactive interfaces, database engines, cryptography helpers, system calls, etc.

    5. 3

      Verified fakes:

      A very neat idea that I heard from Adam Dangoor, a former colleague: you have to write tests for the fake itself. The fake should mimic what the real thing it stands in for does. Therefore, take a subset of the tests you write against the fake and run them against the real service too, sometimes, to verify that the fake and real implementations match. You typically run these less often and in less detail because the real system may be super expensive to interact with (e.g. a SaaS backend that charges entire dollars per API request).
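
      One way to structure this (a sketch with xUnit; the interface and class names are assumptions): write the contract tests once against the interface, then derive one test class per implementation and run the real one on a slower cadence.

      using System.IO;
      using System.Text;
      using System.Threading.Tasks;
      using Xunit;

      // Contract tests written once, against the interface.
      public abstract class BlobStorageContractTests
      {
          protected abstract IBlobStorage CreateStorage();

          [Fact]
          public async Task UploadedFileCanBeDownloaded()
          {
              var storage = CreateStorage();
              await storage.UploadFileAsync("a.txt",
                  new MemoryStream(Encoding.UTF8.GetBytes("hello")));
              using var reader =
                  new StreamReader(await storage.DownloadFileAsync("a.txt"));
              Assert.Equal("hello", await reader.ReadToEndAsync());
          }
      }

      // Fast: runs on every commit, verifies the fake.
      public class FakeBlobStorageTests : BlobStorageContractTests
      {
          protected override IBlobStorage CreateStorage() =>
              new InMemoryBlobStorage();
      }

      // Expensive: run occasionally against the real service to verify
      // that the fake still matches reality.
      // public class RealBlobStorageTests : BlobStorageContractTests { ... }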

      Fakes as local implementations:

      IME it’s very nice to have a fake that persists data to the local filesystem (without bothering to do any of the usual slow fsync() calls for data integrity because this is just test stuff) and is complete enough for end to end tests to work. That way you can use it during development too. Data being dumped into local flat files makes it helpfully easy to inspect just by looking around with find and cat.
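
      A sketch of that kind of fake (assuming the same blob storage interface shape as the article):

      using System.IO;
      using System.Threading.Tasks;

      // Blobs become flat files under a local directory, so you can inspect
      // test state with find and cat. No fsync, no durability; it's test stuff.
      public class LocalDirectoryBlobStorage : IBlobStorage
      {
          private readonly string _root;

          public LocalDirectoryBlobStorage(string root) => _root = root;

          public async Task UploadFileAsync(string path, Stream contents)
          {
              var target = Path.Combine(_root, path);
              Directory.CreateDirectory(Path.GetDirectoryName(target)!);
              using var file = File.Create(target);
              await contents.CopyToAsync(file);
          }

          public Task<Stream> DownloadFileAsync(string path) =>
              Task.FromResult<Stream>(File.OpenRead(Path.Combine(_root, path)));
      }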

      Ossification can be good:

      FWIW (this is tangential; I don’t disagree with the main point at all), one example of ossification, testing details over interfaces, is given here that I don’t think is necessarily a bad idea:

      await blobStorage.UploadFileAsync("docs/test.txt", documentStream);
          //          still implementation-aware --^
      

      The reason I think ossifying this may be very reasonable is that if you have this system deployed and actively used in production, then you also have thousands of documents in production saved under that docs/ prefix. If someone changes it to documents/, a test that doesn’t pin this implementation detail will not fail, but all your documents in production might just go missing because the software is no longer looking for them in the right place.

      In general I think it’s a good plan to have samples of what the existing data in production looks like copy-pasted into tests in order to make sure the software stays compatible with the data that people have spent lots of time and money inputting to it.
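
      For instance (DocumentStore is a hypothetical helper; the key is the sort of literal you’d copy-paste from production data):

      using Xunit;

      // Hypothetical helper that builds storage keys for documents.
      public static class DocumentStore
      {
          public static string KeyFor(string name) => $"docs/{name}";
      }

      public class StorageLayoutTests
      {
          // Pins the on-disk layout to what production data already uses.
          // If someone renames the docs/ prefix, this fails loudly instead
          // of documents quietly going missing in production.
          [Fact]
          public void DocumentKeys_KeepTheProductionPrefix()
          {
              var keyFromProduction = "docs/test.txt"; // copy-pasted sample
              Assert.Equal(keyFromProduction, DocumentStore.KeyFor("test.txt"));
          }
      }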

      1. 2

        This idea answers a question I was wondering about, which is: “great, now we have to test our fake to make sure it operates correctly; how do we do that?”

        I also wonder if it would be practical to formally verify the behavior of fakes, the idea being that a fake has invariants similar to (or the same as) the real thing but is easier to prove things about.

        1. 1

          formally verify the behavior of fakes

          Writing down the behaviour you expect the fake to have sounds reasonable. I dunno how much that helps it be correct though? The correct behaviour for a fake is to match the real system. If I misunderstand what the real system does, I’ll be putting my misunderstandings into both the formal spec for the fake and its implementation.

          I’m assuming that proving the real system matches my model is intractable, because it’s huge or maybe it’s a SaaS product out of my control. I’m also assuming that it’s possible to run empirical tests against the real system.

      2. 1

        Agreed. An approach I take is to use table-driven tests and run the tests over both the fake and real implementations; a matrix of [I, T] where I is the implementation and T is the test. Works really well.
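
        Sketched in C# with xUnit (assuming an in-memory fake like the ones discussed upthread), the matrix looks something like:

        using System.Collections.Generic;
        using System.IO;
        using System.Text;
        using System.Threading.Tasks;
        using Xunit;

        public class BlobStorageMatrixTests
        {
            // Each row below is an I; each [Theory] is a T.
            public static IEnumerable<object[]> Implementations()
            {
                yield return new object[] { new InMemoryBlobStorage() };  // fake
                // yield return new object[] { new RealBlobStorage(...) }; // real
            }

            [Theory]
            [MemberData(nameof(Implementations))]
            public async Task UploadThenDownload_RoundTrips(IBlobStorage storage)
            {
                await storage.UploadFileAsync("a.txt",
                    new MemoryStream(Encoding.UTF8.GetBytes("hi")));
                using var reader =
                    new StreamReader(await storage.DownloadFileAsync("a.txt"));
                Assert.Equal("hi", await reader.ReadToEndAsync());
            }
        }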

    6. 2

      This is really interesting. If you had asked me, I would’ve rated test double techniques like

      stubs > mocks > fakes

      But now I’m not sure. Honestly, I kind of hate them all, and it makes me want to pick up my ball and go home (to just writing end-to-end tests against a real database instance).

      Having to reimplement your class’s logic in a fake just feels like an exercise in frustration. Did you account for all corner cases in both implementations? Does your fake have bugs, or does your test code that uses it have a bug? Do you have to reimplement all of MySQL’s uniqueness constraints that you put on your tables in your fake?

      I do agree with the criticism of mocks. Testing how many times a method is called is totally backwards and almost useless.

      1. 1

        Yes, I’m also a bit puzzled here.

        Let’s use the example of the object storage. In the author’s view, is the difference that with a fake you can upload files A and B and download files A and B, whereas with a mock you’d upload any file to /dev/null and any download would return a static file C? I’m not really sure there’s a huge win here. Maybe I’ve written mocks that were more of a fake, and I’ve used fakes that were only glorified mocks, but thinking about how, for example, a fake MySQL would look… No, I think I’ll stick to using sqlite with the same DBAL, including the problems it has.

        This sounds nice in theory, but if you’re interfacing with a lot of external systems it sounds a bit like spending more time on writing fakes than writing tests or code that is to be tested…

        1. 1

          When interacting with external systems, I reach for tools like VCR or contract tests.

      2. 1

        stubs > mocks > fakes

        Also > simulators > emulators. Sometimes (not often, probably) an interface is so well designed / small that just writing a simulating implementation of it is a fine alternative to polluting your application code with details about it. But I’m perhaps treading over the border into integration/system testing territory here.

    7. 2

      Here is my talk about fakes/mocks, dual tests, associated issues and our solution: https://youtu.be/CzpvjkUukAs

      And here is more: https://blog.7mind.io/constructive-test-taxonomy.html

    8. 2

      Test code should be written like production code. I see teams pursue qualities like strong type safety for the application, discard those qualities when writing unit tests, then have a bad time when renaming something breaks tests at runtime instead of compile time. Why do we even have that lever?

      Conversely, production code should be written to be tested. If you have to reach for a framework to plant a mock dependency where the test subject will implicitly find it, maybe some dependency inversion is in order so you can just pass it in with your other arguments.
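
      A sketch of the difference (ServiceLocator and both uploaders are invented names; the blob storage interface is assumed from the article):

      using System.IO;
      using System.Threading.Tasks;

      // Before: the test subject reaches out to find its dependency, so a
      // test needs framework magic to plant a mock where it will be found.
      // (ServiceLocator stands in for any implicit lookup mechanism.)
      public class BadUploader
      {
          public Task SaveAsync(string name, Stream s) =>
              ServiceLocator.Get<IBlobStorage>().UploadFileAsync($"docs/{name}", s);
      }

      // After: the dependency is just another argument.
      public class GoodUploader
      {
          private readonly IBlobStorage _storage;

          public GoodUploader(IBlobStorage storage) => _storage = storage;

          public Task SaveAsync(string name, Stream s) =>
              _storage.UploadFileAsync($"docs/{name}", s);
      }

      // A test just passes a fake in: new GoodUploader(new InMemoryBlobStorage())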

    9. 2

      This is one of the better testing articles I’ve read. Great post!

      One of my favorite things about Go is its interfaces and how much they encourage testing like this. I strongly dislike mocking, but couldn’t ever put my thoughts into words like this blog post did. Testing the number of times a function was called, for example, always felt like a really trivial exercise that didn’t tell me anything about the underlying code or how it was behaving.

    10. 1

      This has been my position for quite a long time as well. Watching the gymnastics people do with mocks, and even mock frameworks, instead of techniques like the ones described here made me feel like I was deeply failing to understand something.

      Perhaps some day I’ll be old and/or experienced enough to trust my instincts. I hope they’ll keep being (mostly) correct.

    11. 1

      Alternative view: fakes, mocks, and stubs can be replaced with custom “providers” for production code. Instead of writing a fake class just to switch from an in-memory implementation to a persistent filesystem implementation, we can create a notion of “providers” that can be switched out during testing.

      For example, assume you have a class that depends on some external API. Write two classes with the same shape:

      • One connects to your actual API.
      • The other is a fake that looks like the API.

      Maybe a clearer way to put it: fakes are good, and you should put effort into faking the absolute minimum. Fake the external API, not the entire class that depends on the external API.
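
      A sketch of such a provider pair (all names invented for illustration):

      using System.Net.Http;
      using System.Threading.Tasks;

      // The provider seam sits at the external API boundary.
      public interface IWeatherProvider
      {
          Task<double> CurrentTempAsync(string city);
      }

      // Production provider: connects to the actual API.
      public class HttpWeatherProvider : IWeatherProvider
      {
          private readonly HttpClient _http;

          public HttpWeatherProvider(HttpClient http) => _http = http;

          public async Task<double> CurrentTempAsync(string city) =>
              double.Parse(await _http.GetStringAsync($"/temp?city={city}"));
      }

      // Test provider: same shape, canned behaviour. Everything that depends
      // on IWeatherProvider runs unmodified in tests.
      public class FakeWeatherProvider : IWeatherProvider
      {
          public Task<double> CurrentTempAsync(string city) =>
              Task.FromResult(21.5);
      }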

    12. 1

      We’ve put a lot of effort into our test environment and have landed in a pretty good place. In essence, each test gets its own clean database instance and we test via the public API. The effort has gone into making this performant enough to be useful while still having a fully functional DB (with stored procs, reference data, pre-loaded test data etc), plus a bunch of helper methods that perform common sets of API calls.

      I have been a unit testing proponent for a long time, but I’ve come around to the idea that targeted integration tests (especially if they’re exercising the API) have a far higher ROI. There is a maintenance overhead with integration tests because they overlap far more than unit tests. However, you save a ton of time creating mocks/fakes - we have a couple of global fakes of external services but individual tests almost never need to do any mocking or faking.

      There are some areas of the system that have a zero tolerance for errors, and we unit test those (as well as integration test). But for the vast majority of the system, our clients are happy to wear the occasional bug if it increases the speed of delivery.

    13. 1

      Can’t argue with this, although I never bothered to give the distinction a voice with different terms. Problem is you’ll be (rightly) tempted to have automated tests that compare your fakes to the real thing, and then it’s fakes all the way down until Cthulhu.