1. 15

  2. 4

    The thing I’m always missing in these data-oriented design tutorials is how the reorganized data structures integrate back into the higher-level application logic. In this case, you’ve decomposed the game object into this array of dense structs - how are those FooUpdateIn structs related back to data from the original GameObject when necessary? Is everything typically decomposed into arrays of structs, with index ids used to tie them together?

    Edit: I guess what I’m getting at is - is there any way to easily mix in a bit of this pattern into an existing codebase?

    Taking the blog post as an example, I could change GameObject to store a pointer to a FooUpdateIn, but if those are stored in an array, how do I manage FooUpdateIn lifecycle without completely switching to an entity system (to deal with pointer invalidation on array resize) or some form of pooling/custom allocation scheme (to keep pointers valid, while also reusing freed indexes as deletes make the array sparse)?

    It seems like the easiest ‘drop in’ would be to have GameObject hash an id into a map of FooUpdateIns to find its own, and then UpdateFoos can iterate the map’s backing array - but how do I reason about sparse entries in that backing array, or is perf still so improved that it doesn’t matter? (And annoyingly, the C++ stdlib doesn’t ship with a map whose backing storage you can iterate like that.)

    1. 2

      You can hash from entity ID to array index, and then also pack entries tightly in the array. When you delete an entity you swap its data with the last element in the array for O(1) removal (and update the moved entity’s index in the map so its ID still resolves correctly).
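      A minimal C++ sketch of that scheme (the `Registry` class and the `FooUpdateIn` fields here are hypothetical, not from the article): an `unordered_map` resolves stable entity IDs to dense indices, and removal is swap-and-pop.

      ```cpp
      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      struct FooUpdateIn { float x, y; };  // dense per-entity data (fields assumed)

      // Hypothetical registry: maps stable entity IDs to a densely packed array.
      class Registry {
      public:
          void add(uint32_t id, FooUpdateIn data) {
              index_of_[id] = data_.size();
              ids_.push_back(id);
              data_.push_back(data);
          }

          // O(1) removal: overwrite the doomed slot with the last entry, pop the tail.
          void remove(uint32_t id) {
              size_t i = index_of_.at(id);
              size_t last = data_.size() - 1;
              data_[i] = data_[last];
              ids_[i] = ids_[last];
              index_of_[ids_[i]] = i;  // the moved entity now lives at index i
              data_.pop_back();
              ids_.pop_back();
              index_of_.erase(id);
          }

          FooUpdateIn* lookup(uint32_t id) {
              auto it = index_of_.find(id);
              return it == index_of_.end() ? nullptr : &data_[it->second];
          }

          // UpdateFoos iterates this: tightly packed, no holes.
          std::vector<FooUpdateIn>& dense() { return data_; }

      private:
          std::unordered_map<uint32_t, size_t> index_of_;  // id -> dense index
          std::vector<uint32_t> ids_;                      // dense index -> id (needed for the swap)
          std::vector<FooUpdateIn> data_;
      };
      ```

      The extra `ids_` array is what makes the swap work: without it you couldn’t find which map entry to repoint when the tail element moves.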

      If you go full entity-component system, you really want all entities of the same “archetype” (Unity’s term for entities with the same set of components) packed together, so you can quickly iterate over every entity that has some set of components - but actually coding that is incredibly annoying in C++.
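      For illustration, a stripped-down sketch of the grouping idea (the `World`/`ArchetypeTable` names and the bitmask representation are assumptions for this sketch, not how Unity implements it): entities are bucketed by the bitmask of component types they carry, and a query visits only the buckets whose mask is a superset of the query mask.

      ```cpp
      #include <bitset>
      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      constexpr size_t kMaxComponents = 64;
      using Mask = std::bitset<kMaxComponents>;  // bit i set = entity has component i

      struct ArchetypeTable {
          std::vector<uint32_t> entities;  // all packed together: identical component set
          // ...one packed column per component type would live alongside this...
      };

      struct World {
          // std::hash<std::bitset> exists, so the mask can key an unordered_map.
          std::unordered_map<Mask, ArchetypeTable> tables;

          void add_entity(uint32_t id, Mask mask) {
              tables[mask].entities.push_back(id);
          }

          // Visit every entity whose archetype contains all components in `query`.
          template <typename Fn>
          void for_each_matching(Mask query, Fn fn) {
              for (auto& [mask, table] : tables)
                  if ((mask & query) == query)       // superset test
                      for (uint32_t e : table.entities) fn(e);
          }
      };
      ```

      The annoying part the comment alludes to is everything this sketch omits: moving an entity between tables when components are added or removed, and storing the per-component columns type-safely.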

      1. 1

        The idiom (from game programmers’ perspective) is called Data-Oriented Design. You can find online resources, including a book, about it: https://github.com/dbartolini/data-oriented-design

        Scientific computing, coming from a very different direction, has ended up in a very similar place (originally leading back to APL I think), but the domain is different enough that it probably isn’t relevant to your interests.

      2. 2

        This is a great article, making this topic accessible. However:

        “That’s why it’s so important to write optimizable code from the start, keeping the slowest part of the machine in mind: the memory subsystem.”

        (emphasis mine)

        I have to disagree about the “from the start” bit. I have seen new programmers worry about memory alignment after reading an article like this, before even finishing a first prototype of their first game, and subsequently spend a lot of time debugging just because the code was no longer easy to read. I believe there is a tradeoff between making code optimizable and making it intuitive.

        I would rather advise starting to think about memory packing once performance becomes an actual bottleneck and the main bottlenecks have actually been identified. Admittedly, this should go hand in hand with the advice to profile early and often.