So a type’s complexity is tied to its size. An interface using chars is simpler than one using ints, and ints are simpler than unsigned ints because signed is the default when you don’t explicitly spell out the signedness (except for char, whose signedness is implementation-defined).
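To put that ranking in concrete terms, here are three otherwise identical prototypes (the names are made up; the article states the rules only in prose), ordered the way the summary above would order them:

```c
/* Hypothetical prototypes, ordered from “simplest” to “most complex”
 * under the rules as summarized above. */
void log_byte(char value);            /* char: smallest, so cheapest        */
void log_count(int value);            /* int: bigger, so costlier           */
void log_count_u(unsigned int value); /* unsigned int: explicit signedness, */
                                      /* so costlier still                  */
```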

Structs with pointers to structs of the same type double the struct’s complexity. So if you have a big struct and you need to put it in a linked list, well, you’d better create a separate little struct that points to your big struct, just to avoid doubling the complexity! Alternatively, you could sneak a pointer-to-char into your big struct instead, because that’s simple. Or maybe pointer-to-void?
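To make that concrete, here is a minimal sketch of the two layouts (the struct names are mine, not the article’s): both describe the same linked list, only the second keeps the self-referential pointer out of the big struct, which is what the rules as summarized above would apparently reward.

```c
/* Variant 1: the “expensive” layout -- the struct points to itself,
 * which (per the metric as summarized) doubles its complexity. */
struct big_thing {
    char name[64];
    double payload[128];
    struct big_thing *next;       /* self-referential pointer */
};

/* Variant 2: the workaround the rules seem to encourage -- keep the
 * big struct pointer-free and hang it off a tiny external list node. */
struct big_thing_flat {
    char name[64];
    double payload[128];
};

struct big_thing_node {
    struct big_thing_flat *item;
    struct big_thing_node *next;  /* the self-pointer lives here now */
};
```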

What else? Use more typedefs, because they’re good and they halve complexity. Meanwhile, object-like macros are as good as typedefs, so use more macros. And function-like macros are better than functions, because they have no return type.
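For instance (all names invented for illustration), if the summary above is accurate, every “cheap” form below would score better than the plainer thing it stands for:

```c
/* A typedef and an object-like macro: both “abstractions” that the
 * rules as summarized reward, even though the macro is plain textual
 * substitution. */
typedef unsigned int counter_t;
#define COUNTER_MAX 4096u

/* A function-like macro versus an equivalent function: the macro has
 * no return type to count, so it supposedly scores as simpler. */
#define CLAMP_TO_MAX(x) ((x) > COUNTER_MAX ? COUNTER_MAX : (x))

static counter_t clamp_to_max(counter_t x)
{
    return x > COUNTER_MAX ? COUNTER_MAX : x;
}
```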

EDIT: Typedefs and macros are “good” because they’re abstractions. But enums are just integers, so they’re apparently no better than using plain ints. The logic.
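Which produces oddities like this (names made up): the object-like macro counts as an abstraction, while the enum, arguably the better-typed choice, is scored as a plain int.

```c
/* Counts as an abstraction under the summarized rules... */
#define COLOR_RED 0

/* ...while this is “just an integer” to the metric, even though it
 * introduces a named type and named constants. */
enum color { COLOR_GREEN = 1, COLOR_BLUE = 2 };
```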

I think this is absolute bullcrap. I also don’t see any science here (compsci or otherwise): the definitions are pulled out of thin air and poorly motivated.

The conclusion reads: “This complexity alone offers developers hints to which libraries should be studied to have potential efficiency gains. It can also be used to keep code complexity to a maintainable level.”

What a conclusion. Supported by no evidence.

EDIT2: There’s hardly anything on OpenBSD here. A doubly linked list (built with the LIST_* macros from <sys/queue.h>) is used as one example somewhere; the sketch below shows what that kind of code looks like.
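A minimal, self-contained sketch of such a list using the queue(3) LIST_* macros; the struct and field names are mine, not taken from the article:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

/* An element type: LIST_ENTRY() embeds the linkage, i.e. exactly the
 * kind of self-referential pointer the metric penalizes. */
struct item {
    int value;
    LIST_ENTRY(item) entries;
};

/* The list head type. */
LIST_HEAD(item_list, item);

int main(void)
{
    struct item_list head;
    LIST_INIT(&head);

    /* Build a small list. */
    for (int i = 0; i < 3; i++) {
        struct item *new_item = malloc(sizeof(*new_item));
        if (new_item == NULL)
            return 1;
        new_item->value = i;
        LIST_INSERT_HEAD(&head, new_item, entries);
    }

    /* Walk it. */
    struct item *it;
    LIST_FOREACH(it, &head, entries)
        printf("%d\n", it->value);

    /* Tear it down. */
    while (!LIST_EMPTY(&head)) {
        it = LIST_FIRST(&head);
        LIST_REMOVE(it, entries);
        free(it);
    }
    return 0;
}
```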