1. 35
  1.  

  2. 11

    In short, it’s important to understand the problem before trying to solve it.

    1. 16

      Well, that’s half of it. The other half is knowing that there’s a single x86 instruction that can replace a chunk of your code and give you part of the answer in the blink of a CPU’s eye :)

      1. 3

        I would argue that the instruction probably takes a relatively long time from the perspective of the CPU, but yeah, it’s also important to understand your tools!

      2. 3

        And understand whether there is a problem at all. If you haven’t profiled, then optimisations are probably a waste of time.

      3. 7

        One of the major things I do is write high-speed network traffic capture engines, with TCP stream reassembly and protocol decoding and all that jazz.

        Obviously we want to dump state from the TCP reassembly engine when we haven’t seen anything on that stream for a while (or so little traffic that it’s a Slowloris-type attack, whatever).

        In the beginning we had a beautiful, complex heap with connection records ordered by next-check time, and each time we saw traffic we’d move the records around, do timeouts, etc.

        Turns out just having a doubly-linked list of connection records works a lot better. Each time we see a packet for a connection we look the connection up via a hash on the five-tuple; the record has several chase pointers that link it into various lists.

        One of the lists is the timeout list. Every time we get a packet for a connection, we move its record to the back of the list, an O(1) operation, and update the record’s last-seen time. Every so often, we walk from the start of the timeout list until we reach a record whose last-seen time is new enough that it hasn’t timed out, and we stop. This is O(n) in the number of simultaneous connections.

        Sure it’s not as fancy or cool as a heap (which has better bounds for every operation involved except timestamp update), but it’s a whole lot simpler and scales well enough that it’s been overall faster for up to a couple of hundred thousand simultaneous connections.
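
        A minimal sketch of that scheme, with hypothetical names rather than the engine’s actual code: an intrusive doubly-linked timeout list in which touching a connection is O(1) and expiry walks from the oldest end.

        ```c
        /* Sketch under the assumptions above (hypothetical names, not the
         * author’s actual code).  Connection records sit in an intrusive
         * doubly-linked list ordered by last-seen time: touching a record
         * moves it to the tail in O(1), and expiry walks from the head until
         * it reaches a record that is still fresh. */

        #include <stddef.h>
        #include <stdint.h>

        struct conn {
            struct conn *prev, *next;   /* links in the timeout list           */
            uint64_t     last_seen;     /* timestamp of the most recent packet */
            /* ... five-tuple, reassembly state, other list links ...          */
        };

        struct timeout_list {
            struct conn *head;          /* oldest connection, next to expire */
            struct conn *tail;          /* newest connection, just seen      */
        };

        static void list_unlink(struct timeout_list *l, struct conn *c)
        {
            if (c->prev) c->prev->next = c->next; else l->head = c->next;
            if (c->next) c->next->prev = c->prev; else l->tail = c->prev;
            c->prev = c->next = NULL;
        }

        static void list_push_back(struct timeout_list *l, struct conn *c)
        {
            c->prev = l->tail;
            c->next = NULL;
            if (l->tail) l->tail->next = c; else l->head = c;
            l->tail = c;
        }

        /* Called for every packet, after the five-tuple hash lookup found `c`
         * (a brand-new connection is simply pushed onto the back instead). */
        static void conn_touch(struct timeout_list *l, struct conn *c, uint64_t now)
        {
            c->last_seen = now;
            list_unlink(l, c);          /* O(1)                                    */
            list_push_back(l, c);       /* O(1): freshest records live at the tail */
        }

        /* Called every so often: expire everything idle for at least `timeout`.
         * Because the list is ordered by last_seen, the walk stops at the first
         * record that is still fresh. */
        static void conn_expire(struct timeout_list *l, uint64_t now,
                                uint64_t timeout, void (*dump_state)(struct conn *))
        {
            while (l->head && now - l->head->last_seen >= timeout) {
                struct conn *victim = l->head;
                list_unlink(l, victim);
                dump_state(victim);     /* flush reassembly state, free record, ... */
            }
        }
        ```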

        1. 5

          Not related, but when somebody asks me ‘what programming language is fast?’, I usually reply: the ones that can saturate a modern network pipe with useful data, after transforming what comes in on another pipe.

          I do not know what language you use, but I suspect, given your type of work, it is one of the ‘fast ones’.

          1. 3

            That part’s in C. :)

          2. 2

            This is a bit off-topic, but do you have any recommendations for books similar in content to Network Algorithmics?

            1. 3

              I’m afraid I’m not familiar with that one (though I just bought a copy off Amazon, so thank you for the recommendation).

              Most of what I’ve read for this sort of thing has been protocol specifications (RFCs, etc.) and other (open-source) implementations. I’m sorry I couldn’t be more helpful.

          3. 4

            I am impressed, but there’s a part of me that would rather see the asymptotically better solution. Here are the tradeoffs from my point of view.

            Brute force solution:
            • Fast for short strings in “line-of-business solutions”
            • Unexpected input will be disastrously bad (sketched below)

            Asymptotically better solution:
            • Probably still fast enough for the perf requirements of “line-of-business solutions”
            • If performance on a specific workload matters, it’s easier for a senior person to pick a better algo
            • Gracefully scales up
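
            Purely to illustrate the “unexpected input” bullet, and assuming (my guess) that the brute-force solution under discussion is something like naive substring search, the worst case looks roughly like this:

            ```c
            /* Hypothetical illustration only; the names and the choice of
             * substring search are my assumption, not the article’s code. */

            #include <stddef.h>
            #include <string.h>

            /* Brute force: try every starting offset in `haystack`. */
            static const char *brute_force_search(const char *haystack,
                                                  const char *needle)
            {
                size_t n = strlen(haystack), m = strlen(needle);
                if (m > n)
                    return NULL;
                for (size_t i = 0; i + m <= n; i++) {
                    size_t j = 0;
                    while (j < m && haystack[i + j] == needle[j])
                        j++;                 /* compare until a mismatch */
                    if (j == m)
                        return haystack + i; /* full match */
                }
                return NULL;
            }

            /* Typical line-of-business input (short strings) finishes instantly.
             * Adversarial input, e.g. haystack = "aaa...a" and needle = "aa...ab",
             * makes the inner loop run nearly `m` steps at every offset, i.e.
             * ~n*m character comparisons: the “disastrously bad” case.  A
             * linear-time algorithm such as Knuth–Morris–Pratt stays O(n+m). */
            ```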