1. 21

  2. 5

    Hmm. I see what he is saying… but speed is secondary to testability and maintainability.

    Style C ends up with one huge, long function where you can’t see the inputs, outputs, and scopes of the variables.

    By a happy accident I have evolved a slightly strange style…

    If I meet a longish function, I try to reduce the scope of all variables as much as possible.

    eg. void a( b_t b, c_t c ) { d_t d; e_t e; /* ……… lots of shit ……… */ }

    I refactor to…

    void a( b_t b, c_t c )
    {
       .............. // some stuff
       d_t d;
       ........ // stuff that uses d
       e_t e;
       ........................ // stuff that uses e and d
    }
    

    Except splint (a static analysis tool) doesn’t understand declaring variables anywhere except at the beginning of a block.

    So I’m forced to…

    void a( b_t b, c_t c )
    {
       .............. // some stuff
       {
          d_t d;
          ........ // stuff that uses d
          {
             e_t e;
             ....................... // stuff that uses e and d
          }
          ... // stuff that only uses d
       }
       ..... // stuff that doesn't use d or e
    }
    

    So that takes me pretty much to Carmack’s “inlined” functions, but I know the scopes of all the variables.

    And if I get unhappy with the fact that I don’t know the scopes and usage of the parameters, or the inputs and outputs of my mini-scopes… I pull them out into his Style A.

    If I’m really worried about efficiency, I mark them “static” and gcc will inline them for me and optimise away all the stack frame creation.
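
    A minimal sketch of that extraction (all names and types hypothetical): each mini-scope becomes a static function, so its inputs and outputs are explicit in the signature, and gcc is still free to inline it back into the caller.

    typedef int b_t;   /* hypothetical types, standing in for the ones above */
    typedef int c_t;
    typedef int d_t;
    typedef int e_t;

    static void use_d( b_t b, d_t *d )   /* was: the { d_t d; ... } scope */
    {
       /* ... stuff that uses d ... */
    }

    static void use_e_and_d( d_t d )     /* was: the { e_t e; ... } scope */
    {
       e_t e;
       /* ... stuff that uses e and d ... */
    }

    void a( b_t b, c_t c )
    {
       /* ... some stuff ... */
       d_t d;
       use_d( b, &d );
       use_e_and_d( d );
       /* ... stuff that doesn't use d or e ... */
    }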

    Minimize control flow complexity and “area under ifs”, favoring consistent execution paths and times over “optimally” avoiding unnecessary work.

    One pattern I try to use for that is the Humble Function pattern.

    I try to separate functions that manipulate state from functions that manipulate control flow. ie. I try not to have functions that do complex calculations and state manipulation AND have high cyclomatic complexity. Only one or the other.

    eg.

    if( foo ) {
       ... // lots of complex calculation
    } else {
       ... // other calculation and state fiddling
    }
    

    refactored to…

    if( foo ) {
       funcA();
    } else {
       funcB();
    }
    

    Means I can test the hell out of funcA and funcB separately without needing to wrangle foo. Conversely, if I have a sequence of non-overlapping conditionals, I pull them into functions and have a simple, humble outer function that has no conditionals, just call A(); call B(); call C();… and I don’t need to test it, I just need to eyeball it!
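
    A minimal sketch of that second case (all names hypothetical): the humble coordinator is a straight-line sequence you can verify by eye, while all the cyclomatic complexity is pushed down into the steps, each of which can be tested in isolation.

    static void stepA( void ) { /* conditionals and state fiddling live here */ }
    static void stepB( void ) { /* ditto; each step is unit-testable alone */ }
    static void stepC( void ) { /* ditto */ }

    void process( void )   /* humble: no conditionals, nothing to unit test */
    {
       stepA();
       stepB();
       stepC();
    }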

    Part of what he is saying is that code layout should be optimized to display the Happy Path through it.

    1. 6

      but speed is secondary to testability and maintainability

      Considering who he is and the fact that he breaks into a cold sweat over losing a single frame of latency, I think he has different priorities.

      1. 0

        Adding a frame of input latency is an issue with the game’s logic, not a performance issue. You could have a game which spends 90% of each frame idle with a logic bug which delays inputs by 1 frame, and it will feel more sluggish than a game which spends 90% of each frame busy but doesn’t have an unnecessary frame of input latency.

        What Carmack advocates here is a style which he thinks (or at least thought in 2007) will reduce bugs and logic errors, which actually affect gameplay, even if it reduces average-case performance and the game spends less time idling between frames.

        1. 0

          I’m simply not convinced that by making things testable and maintainable you sacrifice speed.

          Why? The first-order effect on speed is which basic algorithm you choose. The better your design, the easier it is to plug in a fast algorithm if need be.

          The next is “no code is faster than no code”. ie. Fast code is simple.

          In the Bad Old Days compilers were bad at inlining calls and optimizing… these days they’re very good.

          What he is talking about here…

          Minimize control flow complexity and “area under ifs”, favoring consistent execution paths and times over “optimally” avoiding unnecessary work.

          Could be restated as “don’t stall the pipelines and keep the caches hot”; a pipeline stall costs more than a wasted instruction or two.
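
          A standard illustration of that trade-off (my sketch, not from the thread): doing the “unnecessary” work unconditionally lets the compiler emit a conditional move, so the execution path and its timing stay consistent instead of gambling on the branch predictor.

          /* "Optimally" skipping work with a data-dependent branch: */
          int sum_pos_branchy( const int *v, int n )
          {
             int sum = 0;
             for( int i = 0; i < n; i++ )
                if( v[i] > 0 )            /* unpredictable -> pipeline stalls */
                   sum += v[i];
             return sum;
          }

          /* Consistent path: always do the add, select the operand instead. */
          int sum_pos_branchless( const int *v, int n )
          {
             int sum = 0;
             for( int i = 0; i < n; i++ )
                sum += v[i] > 0 ? v[i] : 0;   /* typically compiles to cmov */
             return sum;
          }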

          1. 7

            I think games programming in general has different priorities and justifiably so.

            1. 0

              From what? I’m in the embedded world, and they will always be pushing us to make do with smaller, cheaper, lower-power parts whilst delivering more features.

              1. 2

                For a non-networked game, security is basically a non-issue to start with. I just mean you can ignore certain things depending on the game; I wouldn’t be surprised if unit tests aren’t important for many games either. Battery only matters for handheld devices too.

                1. 1

                  It’s not just games programming in particular, but these days he’s mostly working on VR, where that single frame can mean the difference between a smooth experience and VR sickness. Also, VR products still need to get smaller (for comfort) and more powerful (so one doesn’t need to be tethered to a desktop PC), so performance is indeed the primary driver.

                  1. 1

                    The funfunfun thing about embedded is there are always good, strong economic reasons for using mechanically smaller, lower-power, lower-cost (and fewer) parts… and yet you have harder real-time deadlines than human perception.

                    ie. Programming efficiency is vitally important, more important than in games or supercomputing… but I will still argue maintainability is more important, because that is how you get to being the most efficient.

          2. 2

            In principle I agree, but I have had a couple of times where inlining would let you realise you could collapse multiple functions into a single thing and end up with clearer, simpler, and faster code.

            A common example is something like:

            def f(x):
                if x.bar:
                    ...
                else:
                    ...

            def g(x):
                if x.bar:
                    ...
                else:
                    ...

            def h(x):
                return f(x), g(x)
            

            Here, inlining f and g into h means you can share the if statement and potentially share some code across f and g, to end up with something decently small and simpler to walk through.

            This is less of an issue when f and g are really well-defined business concepts, but if they’re things like do_thing and do_other_thing, you can end up with a rather nicer system.

            1. 1

              Yup, a good thought. I tend to do that by extracting the conditionals out of my function and then replacing them with precondition asserts…

              ie. create an f_bar and an f_not_bar, where it’s a precondition for invoking f_not_bar that !x.bar is true.

              It’s amazing how preconditions can force simplification of code.
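
              A rough C rendering of that idea (types and names hypothetical, mirroring the Python f above): the conditional is hoisted out and written once, and each specialised function asserts its precondition rather than re-testing it.

              #include <assert.h>

              typedef struct { int bar; } x_t;   /* hypothetical stand-in for x */

              static void f_bar( x_t *x )        /* precondition: x->bar */
              {
                 assert( x->bar );
                 /* ... the x.bar branch of f ... */
              }

              static void f_not_bar( x_t *x )    /* precondition: !x->bar */
              {
                 assert( !x->bar );
                 /* ... the !x.bar branch of f ... */
              }

              void f( x_t *x )                   /* the branch lives here, once */
              {
                 if( x->bar )
                    f_bar( x );
                 else
                    f_not_bar( x );
              }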

          3. 1

            I agree with a lot of the points he makes, but testing is the fly in the ointment. It’s much harder to test a 200-line function than a couple of smaller functions.

            1. 2

              I use this style all the time for batch-processing glue code that’s not easy to unit test anyway. It makes sense to limit the scope of variables as much as possible. I regularly promote variables to a higher scope than I initially predicted when they turn out to be heavily used. It’s cleaner, and easier to refactor, than threading unrelated state in and out of multiple functions with awkwardly constructed structs or tuples.

              1. 1

                He’s not talking about pure functions, where a granular separation of functionality improves testability, but rather cases where the program coordinates many stateful calls. Unit tests of functions split out from that kind of procedure don’t actually tell you much about the correctness of the program and generally become dead-weight change-detectors.

                1. 1

                  I agree that change-detector tests are worthless. I guess if there are no pieces that can be separated out as pure functions then yes, inlining makes a lot more sense.
