Threads for scottnonnenberg

  1. 4

    At first I thought this was going to be interesting, since it talks about a framework for discussing complexity. But then it devolved into the standard Agile post, which just makes vague, hand-wavy statements. It also seems to conflate Agile with Scrum. There are other ways to order one’s time than sprints.

    At the very least, I agree that if one adopts only one thing, it should be retrospectives. However, there is a huge range in what a retrospective can look like. And retrospectives are only useful if you can commit to executing on their output, so even just a retrospective is not valuable by itself.

    1. 1

      I think you’ll appreciate the second post:

      This series of posts came out of a contract I did where teams were going through the motions of ‘Agile’ without understanding that what they were actually doing was ‘Scrum,’ or why each of its components was important. I wanted them to drop any practice they weren’t getting anything out of.

      1. 1

        There are so many agile blog series on the internet I don’t really see a need to read another one based on this initial post.

        1. 1

          I’d love some links to those better posts, if you’re willing to dig a couple up!

    1. 2

      I like Aral, and am disappointed to see this. Do note that it is from back in 2012. Since then he has launched his Indie Manifesto (which I find to be quite inspirational). When he says design, I understand it to mean the entire behavior of the application and how it interacts with the world, including things like privacy and how data is used behind the scenes (who is it sold to? how is it combined with what other data?).

      1. 1

        In the worst case, if you have a multi-stage async operation in progress, it can even result in corrupt or inconsistent data. What else do you expect if you immediately take down the process?

        Thinking that your system design can punt on asynchronous interruptions is always wrong. The computer itself can always be interrupted at random points (power loss, network partitions, etc), so you have to deal with crash safety anyway.

        1. 1

          Yeah - the thing that shocked me about the Node.js community is the “just let it crash” mentality with no accompanying “remember to design your system to be resilient to this kind of thing.” It makes some sense when you consider the types of people getting pulled into Node.js programming - people who were solely frontend before aren’t necessarily used to thinking about fault tolerance and data consistency. Hence the article. :0)

        1. 2

          The first couple seem closely related (to me).

          Either rushing the review or putting it off is typically what happens when the author is thinking about the code but not the people (particularly common examples are large changesets, or the use of tools/features not found elsewhere in the codebase).

          I think point four is great, but could go further - there’s vast scope for automation to support a reviewer. Some examples of things a review-support bot could do:

          • For CSS changes, automatically attach before/after screenshots in common browsers.
          • Provide a link to view all calls to a modified function.
          • Identify new code paths that aren’t covered by tests (they are worth knowing about even if you leave them as is).
          • Inform the reviewer how frequently this code is modified (ideally, frequency of bugfix vs feature changes).
          • Identify which new tests would fail on the old code & which would pass.

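
          The third bullet, for instance, mostly reduces to intersecting the diff with coverage data. A rough sketch, with made-up simplified input shapes (a real bot would first parse the diff and the coverage report into these):

```javascript
// Flag changed lines that no test executed.
// changedLines: file -> array of line numbers touched by the diff.
// coverage: file -> Set of line numbers hit during the test run.
// Both shapes are hypothetical simplifications for this sketch.
function uncoveredChanges(changedLines, coverage) {
  const report = {};
  for (const [file, lines] of Object.entries(changedLines)) {
    const covered = coverage[file] || new Set();
    const missed = lines.filter((line) => !covered.has(line));
    if (missed.length > 0) report[file] = missed;
  }
  return report;
}
```

          The bot would then post the result as a review comment - the hard part in practice is mapping coverage data onto post-diff line numbers, not the intersection itself.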
          1. 1

            I would really like some of those tools myself. Today I generate the screenshots manually for visual changes. :0)

   is a tool I’ve used which provides your third bullet. It analyzes the pull request diff and the code coverage report for that branch, and generates a focused, pull-request-specific coverage report. I like it a lot. Here’s a very simple example: