1. 6

If you made a list of “If I knew then (when I got started) what I knew now” lessons, what would you include?

  1.  

  2. 7

    I have a youtube list called “mind blowing dev”, and at the top of that list (in my mind) is a 2013 talk entitled Magic Tricks of Testing by Sandi Metz. The examples are in Ruby, but it hardly has anything to do with Ruby. I’ve taken this idea from team to team, from project to project, and had nothing but success with it. It’s no silver bullet, but oh my. I even internalized its lessons at some point but still did not completely grok it. I think I’m on my second internalizing of what she was teaching in this talk.

    The spaceship. Man. This changed my life. I was just thinking about this today, minutes before reading this thread.

    https://youtu.be/URSWYvyc42M?t=327

    Other bits:

    1. Controller tests just test HTTP
    2. The testing pyramid by Martin Fowler
    3. Testing CLIs is super easy if you separate your core app from a class that handles args and invokes your core app (see the sketch after this list)
    4. Selenium is slow because it’s the wrong kind of thing. It will never be fast because it’s the wrong kind of thing.
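
    To make the CLI point (item 3) concrete, here is a minimal sketch, assuming Python and argparse, with made-up names; tests can then exercise the core directly instead of spawning a process:

    ```python
    import argparse

    # Core logic: plain functions, no knowledge of argv or exit codes.
    def word_count(text: str) -> int:
        """Count whitespace-separated words."""
        return len(text.split())

    # Thin shell: parses args and invokes the core, nothing more.
    def main(argv=None) -> int:
        parser = argparse.ArgumentParser(prog="wc-lite")
        parser.add_argument("path")
        args = parser.parse_args(argv)
        with open(args.path) as f:
            print(word_count(f.read()))
        return 0

    if __name__ == "__main__":
        raise SystemExit(main())
    ```

    Unit tests call word_count directly, and one thin test can call main(["sample.txt"]) without a subprocess or any Selenium-style harness.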

    I had typed a massive response to this but I’m going to make it a blog post later.

    1. 1

      Somewhat tangential: a lot of Sandi’s talks are great. Would you mind sharing this youtube list with us?

      1. 2

        Sure thing. https://www.youtube.com/playlist?list=PLlP6fPfZXKAZyfb0X2ZmJIBWkTLzV3mdn

        I wish I could annotate each one. I think the only one I want to point out is the Google tech talk on Git where Linus asks Googlers about their Subversion habits. I found this telling because everyone in that room was presumably very smart / talented, and even they didn’t branch/merge in svn. :)

        So, not all these videos are equal in mind-blowing-ness. And a lot of them are well-known. But hopefully it’s interesting.

    2. 4

      That tests are code and should be treated with the same care as code, rather than “just tests”.

      1. 2

        That most software testing is overrated and can give engineers a false sense of security about their project’s trajectory.

        1. 1

          I disagree. When you work with a team of a certain size, tests are very important to keep future programmers from repeating old bugs. Of course, as an individual or as a team you might manage to remember the bugs yourselves, but when your team often welcomes new people, keeping track of every known bug by hand becomes hard.

          It does provide a false sense of security in that it doesn’t prevent new bugs, but the sense of security that old bugs have a much smaller chance of coming back is definitely real and is the reason why testing is vital in certain projects.

        2. 2

          Two things:

          1. The appropriate code style for tests differs deeply from the style for code that’s shipped, even if the indentation rules are the same. The reason: if a test breaks (as opposed to a working test showing that the code is broken), only the development team is bothered, while if the shipped/deployed code breaks, customers are bothered. Therefore, many (but not all) kinds of laxness that would be out of the question in deployed code are acceptable in tests. It’s better to write more tests and get higher coverage than to spend that time following the strict coding rules that are appropriate for shipped code.

          2. How to shape the code and interfaces such that the number of untested paths is small. Can’t really say much about it, but it is a skill that one can practise and learn.

          1. 1

            IME, #2 is one of those things you get for free if you’re following the Single Responsibility Principle.

            1. 1

              It took me only two seconds to think of something I’ve written where I didn’t get that for free from the SRP.

              In that case, I improved testability by not implementing some very simple optimisations. These optimisations would have increased the number of code paths to test (and doubled the code’s performance, maybe much more).
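
              A hypothetical illustration of that kind of trade-off (not the code described above): a shortcut that is purely an optimisation still adds a second path through the function, and that path needs its own tests.

              ```python
              def merge_sorted(a: list[int], b: list[int]) -> list[int]:
                  """Merge two already-sorted lists into one sorted list."""
                  # A tempting optimisation: when all of `a` comes before `b`,
                  # skip the merge loop entirely. It can be much faster, but it
                  # adds a second code path (and its edge cases) to cover:
                  #
                  #     if not a or not b or a[-1] <= b[0]:
                  #         return a + b
                  #
                  # Leaving it out keeps a single path through the function.
                  out, i, j = [], 0, 0
                  while i < len(a) and j < len(b):
                      if a[i] <= b[j]:
                          out.append(a[i])
                          i += 1
                      else:
                          out.append(b[j])
                          j += 1
                  return out + a[i:] + b[j:]
              ```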

              1. 1

                I of course should’ve said that it’s something I find I mostly get for free from following the SRP.

                1. 1

                  Did I find an exception straight away by luck, or because I’m oh so brilliant? I’d love to be either lucky or brilliant. And rich too, and favoured by the ladies.

                  The strongest factor in my experience is a certain sinking feeling: “oh ████████ this will be hard to test”, followed by either a design/implementation change or some procrastination, or both. But how can one learn to feel that sinking feeling?

          2. 2

            I think what I have long ignored is learning the tools. Understand them; then you’ll understand why you’re doing all of this in the first place. Looking at an example and trying to repeat it on your own code is not a good strategy for writing tests quickly.

            Nowadays I watch videos (plural) on any new tech that I need to pick up.

            1. 1
              • People usually don’t write tests spontaneously, especially when they’re still learning the craft. You want as many of the following as possible, and probably a lot more things besides. Anything that makes the tests run and break, or run and point out a lack of coverage, before anybody gets a chance to even ask “should we spend time on testing or does this look OK”.

                • Make ‘write a toy function, write tests for it, and run the tests’ part of welcoming a new member to the project.
                • A tutorial/HOWTO/explanation of how to write tests.
                • Again as part of the welcoming process: set up local pre-commit hooks that run the test suite, or at least the fast part of it.
                • Test coverage reporting (1), (2), (3), so you can point at uncovered lines to show people why their tests are/aren’t good enough.
                • Automated testing of the central repository.
                • …and surely many things more.
              • For most data analysis (‘data science’) code, you’ll write a lot of one-off data wrangling functions – it’s the outcome rather than the code that you will reuse. Here, putting assertions (of pre- and postconditions) directly in the functions is generally better than a separate test suite (a sketch follows after this list). It’s faster to write, more visible, doesn’t require fixtures, and you don’t have to spend time dreaming up every way the input data could be malformed.

              • Test suites make it easier to confidently change code. Test fixtures (data and objects that exemplify common or important inputs or resources) make it a lot faster and nicer to write tests. Time spent on creating them, and making them easy to reuse, repays itself.
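
              A minimal sketch of the assertions-in-the-function style from the data-analysis point above (pandas assumed; the function and column names are made up):

              ```python
              import pandas as pd

              def add_daily_totals(orders: pd.DataFrame) -> pd.DataFrame:
                  """One-off wrangling step: sum order amounts per day."""
                  # Preconditions live right in the function: they run every time
                  # the pipeline runs, on the real data, and need no fixtures.
                  assert {"date", "amount"} <= set(orders.columns), "missing columns"
                  assert orders["amount"].notna().all(), "amount has missing values"

                  totals = orders.groupby("date", as_index=False)["amount"].sum()

                  # Postconditions: the invariants downstream code relies on.
                  assert totals["date"].is_unique
                  assert len(totals) <= len(orders)
                  return totals
              ```

              Because the checks run on the real data every time the pipeline runs, malformed input fails loudly without anyone having to invent it in a test.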
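
              And a small sketch of the fixture idea from the last point (pytest assumed; it reuses the hypothetical add_daily_totals from the sketch above):

              ```python
              import pandas as pd
              import pytest

              @pytest.fixture
              def small_orders() -> pd.DataFrame:
                  """A tiny, representative input that many tests can reuse."""
                  return pd.DataFrame({
                      "date": ["2024-01-01", "2024-01-01", "2024-01-02"],
                      "amount": [10.0, 5.0, 7.5],
                  })

              def test_totals_are_per_day(small_orders):
                  totals = add_daily_totals(small_orders)  # hypothetical function from above
                  assert list(totals["amount"]) == [15.0, 7.5]
              ```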