1. 15

    Breadth-first, not depth. Defer relentlessly. Check in with your primary goal regularly. Time-boxing.

    The trick with making meaningful progress and not spinning out on these tangents is pausing to recognize them as tangents. Only execute on a sub-task if it is necessary to complete your immediate goal. If a sub-task can be deferred, do that; you can evaluate whether it is still useful later. Capturing tangents to get them out of your head should alleviate some of the pull they have on you – they won’t be forgotten, but they don’t need to be done now.

    And always be asking the question “Is this helping me solve my immediate problem?” Why did you want the interactive debugger? Probably for more context. For debugging specifically, always be asking: is there a dumber/simpler way to find concrete information? Just sitting and thinking through the specific context you actually need might have let you continue with print debugging and short-circuited the tangent.

    The other tactic that can help: when you start a sub-goal, estimate how much time it is worth to you, and set a timer. Had you valued the interactive debugger at 20 minutes, the timer would have gone off right as you were about to re-install your interpreter – a good moment to re-evaluate. Having a concrete time box prevents you from losing an entire afternoon to a chain of those.

    As a reminder, maybe put a sticky note in front of you with your current goal. And keep checking in that you are still really working towards it.

    As for tooling, OmniFocus and dynalist.io both have quick-capture features for things you can defer until later. And dynalist.io and Workflowy literally let you nest these tangents, which can be a visual signal when you’ve gone too far. But I think the crux of your question is more about focus and process and less about the tools.

    1. 1


      I’ve learned this recently, and changing from depth-first has been tough but very helpful. How did you land on this concept? And how do you practice it?

      1. 1

        I’ve spent a lot of time in my own rabbit holes. Learning to slow down and take a step back is mindfulness and practice. Something I still strive for, and sometimes fail at.

        Practically speaking, ideas like MVPs and Tracer Bullets have pushed me in this direction – see The Pragmatic Programmer. Getting something in front of people as a way to understand what is important, and quickly changing things that aren’t working without spending a ton of time on them, has worked well for me. Doing things this way, you go wide, shallow, and simple for each feature. Inevitably, when you’re writing a feature, a hundred ideas and edge cases and thoughts come up, and you just have to throw them at the end of the list – if the feature survives or is important, you can go deeper on it. That has meant a lot of practice adding to the end of my todo list rather than the middle.

        As for “Breadth-first”, the original question’s use of recursion just made me think of depth-first graph traversal, so breadth-first being the opposite sprang to mind. Though I was basically describing it as a FIFO queue – not so different, I suppose.

        1. 1

          I love the idea of this: how a lot of people are thinking of and realizing the same thing at the same time, as a product of the culture they live in.

          (I ordered Prag Prog v2 two weeks ago)

          Thanks for your answer!

    1. 3

      http://dynalist.io - Similar to Workflowy but more feature-full. I use it for my worklog, errorlog, and lifelog. Extended notes are in Markdown, so in addition to rapid logging it is alright for long-form writing as well.

      1. 4

        I’m part of a two-person internal on-prem apps team, without a supporting ops team. Most of our apps have single-machine deployments and don’t need to scale beyond that. Given this context, Docker seems like a lot of infrastructure overhead. If I had to scale, were in the cloud, or had a supporting ops team, I’d reconsider. But at the moment Ansible for VM builds, GitLab CI for tests, and git checkouts with nginx/Passenger are doing a bang-up job with minimal complexity.
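        To give a feel for how small that pipeline can stay, here is a minimal GitLab CI sketch (the image, cache paths, and commands are illustrative assumptions, not a real config):

```yaml
# Hypothetical .gitlab-ci.yml for a small Rails-style app.
test:
  image: ruby:2.6              # whichever Ruby the app targets
  cache:
    paths:
      - vendor/bundle          # keep gems between pipeline runs
  script:
    - bundle install --path vendor/bundle --jobs 4
    - bundle exec rake test
```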

        One other benefit I’ve heard touted is that your dev environment is more similar to your prod environment. While I appreciate the goals of that sort of consistency, Rails apps come with a number of development-mode features which don’t play as nicely in the Docker context (e.g. anything inotify-based, like code reloading or livereload). With rbenv & bundler I’ve found that the environments are similar enough.

        1. 2

          I’m excited for hosting some friends for the second round of D&D with my wife as DM!

          1. 9

            Whew, that new format is repetitive:

            targets = [ "//:satori" ]

            [[dependency]]
            package = "github.com/buckaroo-pm/google-googletest"
            version = "branch=master"
            private = true

            [[dependency]]
            package = "github.com/buckaroo-pm/libuv"
            version = "branch=v1.x"

            [[dependency]]
            package = "github.com/buckaroo-pm/madler-zlib"
            version = "branch=master"

            [[dependency]]
            package = "github.com/buckaroo-pm/nodejs-http-parser"
            version = "branch=master"

            [[dependency]]
            package = "github.com/loopperfect/neither"
            version = "branch=master"

            [[dependency]]
            package = "github.com/loopperfect/r3"
            version = "branch=master"

            How about a simple .ini?

            name = satori
            libuv/libuv         = 1.11.0
            google/gtest        = 1.8.0
            nodejs/http-parser  = 2.7.1
            madler/zlib         = 1.2.11
            loopperfect/neither = 0.4.0
            loopperfect/r3r     = 2.0.0
            buckaroo-pm/google-googletest = 1.8.0
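            For what it’s worth, a flat file like that is trivially machine-readable too. A quick sketch using Python’s stdlib configparser (the dummy section header is my addition, since configparser insists on having one; the manifest content is just the hypothetical example above):

```python
import configparser

# Hypothetical flat manifest in the ini style sketched above.
MANIFEST = """\
name = satori
libuv/libuv  = 1.11.0
google/gtest = 1.8.0
"""

# configparser requires at least one section header, so prepend a dummy one.
parser = configparser.ConfigParser()
parser.read_string("[deps]\n" + MANIFEST)

# Everything except the project name is a dependency -> version pair.
deps = {key: value for key, value in parser["deps"].items() if key != "name"}
print(deps)  # {'libuv/libuv': '1.11.0', 'google/gtest': '1.8.0'}
```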
            1. 6

              TOML can be written densely too, e.g. (taken from Amethyst’s Cargo.toml):

              nalgebra = { version = "0.17", features = ["serde-serialize", "mint"] }
              approx = "0.3"
              amethyst_error = { path = "../amethyst_error", version = "0.1.0" }
              fnv = "1"
              hibitset = { version = "0.5.2", features = ["parallel"] }
              log = "0.4.6"
              rayon = "1.0.2"
              serde = { version = "1", features = ["derive"] }
              shred = { version = "0.7" }
              specs = { version = "0.14", features = ["common"] }
              specs-hierarchy = { version = "0.3" }
              shrev = "1.0"
              1. 6

                TOML certainly is repetitive. YAML, since it hasn’t come up yet, includes standardized comments, hierarchy, arrays, and hashes.

                # Config example
                name: satori
                libuv/libuv: 1.11.0
                google/gtest: 1.8.0
                nodejs/http-parser: 2.7.1
                madler/zlib: 1.2.11
                loopperfect/neither: 0.4.0
                loopperfect/r3: 2.0.0

                More standards! xkcd 927. I’m all for people using whatever structured format they like. The trouble is in the edges and in the attacks. CSV parsers are often implemented incorrectly and explode on complex quoting situations (the CSV parser in Ruby is broken). And XML & JSON parsers are popular vectors for attacks. TOML isn’t new, of course, but it does seem to be lesser used. I wish it luck in its ongoing trial by fire.
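                To make the quoting point concrete, this is the sort of record that naive split-on-comma parsers mangle – escaped quotes, embedded commas, and a newline inside a field – and that a conforming parser (Python’s stdlib csv module here, as an example) has to get right:

```python
import csv
import io

# One CSV record with three fields: the second contains escaped quotes and
# commas, the third contains an embedded newline. Splitting on "," fails here.
raw = 'a,"b ""quoted"", with, commas","line\nbreak"\r\n'

rows = list(csv.reader(io.StringIO(raw)))
print(rows)  # [['a', 'b "quoted", with, commas', 'line\nbreak']]
```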

                1. 1

                  YAML already has wide support, so it’s quite odd it hadn’t been mentioned yet.

                2. 4

                  More attributes are to come. For example, groups:

                  [[dependency]]
                  package = "github.com/buckaroo-pm/google-googletest"
                  version = "branch=master"
                  private = true
                  groups = [ "dev" ]
                  1. 1

                    Makes sense – I don’t see an obvious way to encode that in the ini without repeating the names of deps in different sections.
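                    For illustration, the least-bad encoding I can see would look something like this (purely a sketch, and the repetition shows up as soon as a dep is in more than one group):

```ini
name = satori
libuv/libuv = 1.11.0

[dev]
google/gtest = 1.8.0

[test]
; same dep restated because it belongs to two groups
google/gtest = 1.8.0
```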