    I got to watch this last night and really enjoyed it; thanks for posting it. I like that it’s not about the technical features of the tools they settled on, but about the practice they have for experimenting with new technology and ramping up adoption based on the results. It’s great that he showed many places where the experiments didn’t result in a tool change, so it’s clear this isn’t an exercise in justifying a pre-existing choice.

    I only wish he’d talked about how they evaluate experiments: whether they set goals and pick metrics up front, or whether it’s a more organic “lessons learned” process after shipping.

    I wonder about their practice of not doing rewrites or otherwise flushing out old deps/tool choices. He mentioned they’re currently experimenting with Nix; perhaps that’ll prompt a big cleanup as they have to touch all those dependencies again. It sounds like the lava layer anti-pattern, but it could make sense if the old parts aren’t changing at all and have negligible maintenance costs (not clear from this talk).


      Yes, I enjoyed that aspect as well, and I was wondering why I didn’t apply that approach of small experiments more rigorously when I was leading a team. I ended up experimenting on the side, deciding on the merits, then rolling out new choices gradually. But I like the idea of integrating the choosing more tightly into the regular development process.

      Personally, I’d go with a “lessons learned” approach. Generally speaking, it would be hard to come up with metrics. For example, how would you compare defect rates if you’re only replacing a single widget? Or how would you measure the leakiness of new abstractions vs old?

      As for old code and tools, I think he mentioned that they evaluate them on a case-by-case basis. If the code is working, they just leave it alone. But if they have to make significant changes to it, or it’s otherwise causing problems, they replace it. Seems eminently sensible to me, and that’s how I did and would do it as well.

      It might be exactly like lava layers, but like so many things, it’s a matter of tradeoffs and costs. For example, if all of their developers are still comfortable with CoffeeScript, and it’s a single step in the build pipeline to turn it into JS, then it might be costing them less to keep it than to replace it.
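      To make the “single step” concrete: keeping CoffeeScript around can be as small as one compile command in the build. A minimal sketch, assuming the standard `coffee` CLI from the `coffeescript` npm package and hypothetical `src/`/`lib/` directories:

      ```shell
      # Add the compiler as a dev dependency (package name assumed: coffeescript)
      npm install --save-dev coffeescript

      # One build step: compile every .coffee file under src/ into .js under lib/
      npx coffee --compile --output lib/ src/
      ```

      As long as that one step keeps working, the old code imposes little ongoing cost; the tradeoff only shifts when the surrounding toolchain (bundlers, linters, type checkers) stops accommodating it.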

      Have you seen Basecamp’s approach to rewrites? It’s different, and I found it very interesting: when they reach some kind of tipping point, they write a new version of the product from scratch, but they don’t reproduce all the old features. For those customers who relied on particular features or don’t want to change, they leave the old version running and keep supporting it.