

    I found that the same principle that lets a 13-person crew navigate the world’s largest container ship to a port halfway around the world without breaking down also applies to startups working towards aggressive growth goals: Simple systems have less downtime.

    Is the Triple-E actually an example of that? I don’t know anything about ships, but I did a bit of searching, and the actual expected crew is about 20 people, which is apparently reasonable for a ship this size? And a lot of the press releases about it hail its environmentally-friendly design, which adds a lot of extra complexity.


      In February 2011 Maersk announced orders for a new “Triple E” family

      | No | Delivery    | Status     |
      |----|-------------|------------|
      | 1  | 02 Jul 2013 | in service |

      OK, finished after 2 years. Now it is 6.5 years later and I bet those ships weren’t retrofitted to transport livestock instead of containers. Sounds reasonable? Yeah. Software often doesn’t work like that.

      Please excuse me for being snarky, but software very often starts out as a “simple system” until people want to change it; and usually it’s not the developers who want the changes.

      That’s not counting the many weeks of downtime they experienced throughout the past few years, nor the many weeks of downtime they would experience in the future if we did not overhaul the underlying systems

      So the two systems they migrated to would not have experienced any downtime in the past and will not experience any downtime in the future? Cool, I might just sign up there…

      Of course the main takeaway is a good one, but the examples all made me go “huh?”.


        OK, finished after 2 years. Now it is 6.5 years later and I bet those ships weren’t retrofitted to transport livestock instead of containers. Sounds reasonable? Yeah. Software often doesn’t work like that.

        Yes, but it doesn’t work like this by choice. I’ve seen multiple organisations where indeed you wouldn’t use the shipping database for livestock (eh… to stay with the image), but would develop a new tool for that. It’s highly efficient.

        Please excuse me for being snarky, but software very often starts out as a “simple system” until people want to change it; and usually it’s not the developers who want the changes.

        You say that like this is a natural law. It isn’t. The pleasure - and pain - of software is that it is easy to request such changes, because change is so accessible.


          That’s nice if you live in a perfect world. More often than not, I’ve experienced that stuff has to be done yesterday, hacked together, not in a new system, and with no chance to plan how to tackle it properly. Handing in your notice every time no one gives a damn about anything resembling proper engineering practices doesn’t work, really.

          But I guess you’re missing the point, though. Of course software is a lot easier to change than physical objects; that’s why it is being changed -at all-. Which is usually a good thing. But as it grows, there isn’t always a way to keep it simple. You can overengineer and overcomplicate something from the start, and that’s bad; this is where I agree with the author. But accidental complexity happens very often when stuff is changed again and again, and that’s where I see the analogy veering off. Software isn’t built to spec and to last years without being touched.


            That’s nice if you live in a perfect world. More often than not, I’ve experienced that stuff has to be done yesterday, hacked together, not in a new system, and with no chance to plan how to tackle it properly. Handing in your notice every time no one gives a damn about anything resembling proper engineering practices doesn’t work, really.

            But this is precisely the point that the blog post is getting at: these practices often reach a point of massive disaster. Effective organisations are effective exactly because they don’t put hack on top of hack.

            That has nothing to do with a “perfect world”. It has to do with the fact that a lot of tech managers are badly educated.

            But I guess you’re missing the point, though. Of course software is a lot easier to change than physical objects; that’s why it is being changed -at all-. Which is usually a good thing. But as it grows, there isn’t always a way to keep it simple. You can overengineer and overcomplicate something from the start, and that’s bad; this is where I agree with the author. But accidental complexity happens very often when stuff is changed again and again, and that’s where I see the analogy veering off. Software isn’t built to spec and to last years without being touched.

            No, I’m not missing the point. The point is that physical projects have more guardrails that keep you from falling off (a ship simply won’t move effectively below a certain amount of power). Software is a lot more free, so we need to choose those boundaries ourselves.

            If you can manage that, all power to you: you get all the power of software. If you mismanage that process, you end up with a terrible mess, one of the messes the original author describes. Complexity management is a core part of software management.

            I hold the belief that a lot more software would work better if it were built to some form of spec.


              So I think we’re agreeing on software development practices, but my main gripe was with the examples, which I think are not good. :) And I’m maybe just bitter that - as a regular developer - I usually am not in the position to veto bad practices.


                Hey, yes, then we are probably of a similar opinion. It’s okay to be bitter at the state of the industry. And to be clear, all I spoke about is management’s privilege and responsibility!

                Thanks for this exchange!


        I definitely understand what the author is trying to say, and agree with it. I don’t think his examples really apply though.

        When he mentions switching CRMs and reducing the number of processes at the same time, for example, I’d bet he could have reduced that number of processes with no CRM changes. And their new system will keep accruing new processes and be in the same state in a few years. Left unchecked, all systems accrue processes that will eventually require shrinking.

        Similarly, he mentions folks leaving the company with nobody to own the systems in their place. This is not an issue of complexity; it’s an issue of organization. There should always be more than one person who knows how a system works, so this shouldn’t happen.

        Finally, I would argue that yes, more complexity can bring less downtime, but it has to be the right complexity. The first image he links (the pigeon in the bottle) isn’t redundancy at all. To do redundancy right, there should have been two pieces of paper: one in the bottle, and the other with the pigeon, in flight. That would have been increased complexity done right (or at least done better).