1. 43
    1. 22

      Similarly: See your test fail first before you make changes to make it pass (or you risk having a false positive pass, where your change didn’t actually do anything meaningful).
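
      For example, a minimal self-contained sketch of that workflow in Python/pytest (the function, the bug, and all the names here are made up for illustration):

      ```python
      # Made-up example: the "production" code has a bug -- it blows up on "".
      def parse_price(text: str):
          return float(text)  # bug: raises ValueError on empty input

      def test_parse_price_empty_returns_none():
          # Run this test FIRST and watch it go red (here it errors with ValueError).
          assert parse_price("") is None

      # Workflow:
      #   1. pytest  -> test fails (red), proving it actually exercises the bug.
      #   2. Fix parse_price to return None for empty input.
      #   3. pytest  -> test passes (green).
      # If the test already passed at step 1, it wasn't really testing your change.
      ```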

      1. 6

        This is great advice! I’m often so psyched to get going that I just jump in before making sure that my work is resting on a solid foundation. It’s definitely worth the time to make sure that your baseline is solid before rolling up your sleeves.

        1. 4

          This is indeed great advice. It extends to service configuration also!

          In my first job (~20 years ago now) the new sysadmin had to make a change to the Sendmail config file. Little did he know that the previous sysadmin had edited sendmail.cf into a broken state after last starting the service, so the config on disk no longer matched what was actually running. The new sysadmin didn’t restart the service to verify that baseline before making his own change, so the restart unexpectedly broke email for the (small) company. There was, alas, no source control.

          1. 3

            Side note: interactive programming (e.g. Lisp via SLIME) leads very naturally to this sort of approach. You’re actually swimming upstream to try it the other way around.

            1. 3

              The reverse is also true: if you can reproduce a complex bug, and you’re testing to see what makes it better or worse, periodically return to the original stimuli and make sure you haven’t lost the repro case. This is especially true when you can’t completely control the environment.

              1. 2

                Or make your first change be “adding a printf()” rather than attempting to make the real change you wanted.

                1. 3

                  While having trouble with my local build/deploy cycle, I added an “alert(1201)” call to the main page of an app, and then repeatedly updated the number to the current time as I made changes.
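
                  A minimal Python sketch of the same sanity check (the marker value and names are made up):

                  ```python
                  import time

                  # Sanity-check the edit/build/deploy loop before making the real change:
                  # bump this number on every edit and confirm the new value actually shows
                  # up in the running app. If it doesn't, your changes aren't being picked
                  # up, and that's the first problem to fix.
                  BUILD_MARKER = 1201

                  def main():
                      print(f"build marker {BUILD_MARKER} at {time.strftime('%H:%M:%S')}")

                  if __name__ == "__main__":
                      main()
                  ```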

                2. 1

                  I.e. what test-driven development calls the green-red-green pattern: working version to broken version to working version.

                  1. 1

                    tldr: get a control group.

                    1. 1

                      There is no rule so certain as to always apply.

                      The cost of finding out that the problem was preexisting in your code base and build system can be measured. Let’s assume this type of problem is on your radar as a possibility, so you make your change, get errors, read code, and only then try to compile your original code as a baseline. Figure that costs you about 0.75 hours. Multiply by 6 (Deming always says to multiply by 6), so it costs about 4.5 hours each time this happens.

                      Detection, if you start each problem with a clean build and smoke test, probably costs about 15 minutes.

                      How often does it happen? When you last ran across an issue like this, did you spend the hours necessary to clean up the build, or did you note that the problems seemed benign and move on? If you get chatty builds one time in three, detection costs (1/4 hour per check) / (1 problem per 3 checks) = 3/4 hour per problem you find this way. If you usually have clean builds, so a chatty build is a one-in-forty thing, it’s (1/4) / (1/40) = 10 hours per problem.
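
                      A back-of-the-envelope version of that arithmetic in Python, treating all of the numbers above as rough assumptions rather than measurements:

                      ```python
                      # Rough expected-cost comparison; every number here is an assumption.
                      fix_after_the_fact = 0.75 * 6   # 4.5 hours when the preexisting breakage
                                                      # ambushes you mid-change (the "times 6" rule)
                      check_cost = 0.25               # hours for a clean baseline build + smoke test

                      def upfront_cost_per_problem(problem_rate):
                          """Expected hours of up-front checking per real problem found."""
                          return check_cost / problem_rate

                      print(upfront_cost_per_problem(1 / 3))    # chatty builds:  0.75 h < 4.5 h -> check first
                      print(upfront_cost_per_problem(1 / 40))   # clean builds:  10.0 h > 4.5 h -> skip the ritual
                      print(fix_after_the_fact)                 # cost of finding out the hard way: 4.5 h
                      ```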

                      There is no magic single right way. If you have technical debt that shows up when building, absolutely start with a baseline build, because 3/4 hour is less than 4.5 hours. If you have clean builds almost always, just treat this as a debugging technique for after something goes wrong, because 10 hours is more than 4.5 hours.

                      Do the right thing.