1. 10
  1. 4

    Interesting that the alternative to Event Sourcing is perceived to be “CRUD”. Is that the common viewpoint among programmers these days?

    If we need to build a complex international payroll system, or a large bank lending system, or a nationwide new insurance quotation system, since when are the architectural choices for the implementation of functional requirements Event Sourcing or … CRUD?

    1. 2

      While not specifically CRUD, I often see “actions on data” as the predominant development approach for many developers. This leads to events later being treated as a second-order effect (an audit of change) rather than as the key function of the system.

      1. 2

        What’s the third alternative then?

        1. 1

          Well, what are the more typical choices for representing the concepts, logic and constraints of a complex problem domain? CRUD is orthogonal to all of them …

          • object domain model
          • transaction scripts in a service layer (managers/services)
          • database entity (table) as an object
          • controller-entity style (per Fowler P of EAA)
          • information model (database) and state-machines transformed with dataflows
          • entities as change records
          1. 1

            I’m not sure you’ve read the article…

            The post’s point is that while CRUD is generally seen as much easier for small systems, with event sourcing reserved for bigger, more complex ones, one can actually start adopting parts of event sourcing much earlier, at smaller scale.

            If you’re planning on creating “a complex international payroll system, or a large bank lending system, or a nationwide new insurance quotation system”, plain CRUD is not the right choice, but the article never claims it is.

            1. 7

              My point is that CRUD isn’t an architectural choice at all.

              1. 2

                Thanks for expanding!

      2. 1

        I am using CQRS/event sourcing (“lite”, I’d call it; mostly derived from https://kickstarter.engineering/event-sourcing-made-simple-4a2625113224) for a (very) few things at $DAYJOB, and yes, the “audit log for free” is a large part of why I like it. But I also find it provides a nice pattern for defining a clean, loosely coupled API for updating the data models, one that is both reusable/composable and easier to test in isolation.
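
        To make the shape of that concrete, here’s a minimal sketch of the pattern as I understand it (illustrative names, not the actual $DAYJOB code): commands validate against state and emit events, and a pure reducer folds events back into state.

        ```typescript
        // Minimal event-sourcing sketch; all names are invented for illustration.
        type AccountEvent =
          | { type: "Opened"; owner: string }
          | { type: "Deposited"; amount: number }
          | { type: "Withdrawn"; amount: number };

        interface AccountState { owner: string; balance: number }

        // Pure fold: no I/O, so it is trivial to test in isolation.
        function apply(state: AccountState, event: AccountEvent): AccountState {
          switch (event.type) {
            case "Opened":    return { owner: event.owner, balance: 0 };
            case "Deposited": return { ...state, balance: state.balance + event.amount };
            case "Withdrawn": return { ...state, balance: state.balance - event.amount };
          }
        }

        // Command handler: the only write API is "produce events".
        function withdraw(state: AccountState, amount: number): AccountEvent[] {
          if (amount > state.balance) throw new Error("insufficient funds");
          return [{ type: "Withdrawn", amount }];
        }

        // Rebuilding current state is just a reduce over the stream.
        const history: AccountEvent[] = [
          { type: "Opened", owner: "alice" },
          { type: "Deposited", amount: 100 },
        ];
        const state = history.reduce(apply, { owner: "", balance: 0 });
        const events = withdraw(state, 25); // [{ type: "Withdrawn", amount: 25 }]
        ```

        Because both halves are plain functions over data, handlers compose and unit tests don’t need a database.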

        1. 1

          Event sourcing is such an intriguing pattern that has deceptively sharp edges. I’ve heard “you get an audit log for free” a few times, but the event store ends up being a very poor audit log: querying the raw events is usually hard, cumbersome, or outright impossible depending on how you store them. My audit logs always end up being yet another projection.
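
          (To illustrate that last point: an audit projection is just another event handler that flattens events into queryable rows. A rough sketch, with made-up names:)

          ```typescript
          // Hypothetical audit projection: one denormalized row per stored event,
          // kept in an ordinary table so it can be queried like any other data.
          interface StoredEvent {
            streamId: string;
            type: string;
            payload: unknown;
            recordedAt: Date;
            actorId?: string;
          }

          interface AuditRow {
            occurredAt: Date;
            actor: string;
            action: string;
            subject: string;
            details: string;
          }

          // The audit log is no more "free" than any other read model:
          // it is built and maintained like every other projection.
          function projectAudit(event: StoredEvent): AuditRow {
            return {
              occurredAt: event.recordedAt,
              actor: event.actorId ?? "system",
              action: event.type,
              subject: event.streamId,
              details: JSON.stringify(event.payload),
            };
          }
          ```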

          I’ve found that event sourcing only gets nasty when you inevitably have a breaking change to an event contract and have to upgrade/downgrade events for different projections/handlers.
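
          (The usual mitigation I’ve seen is an “upcaster” layer that migrates old event shapes at read time, so projections only ever see the current contract. Schematically, with invented versions and fields:)

          ```typescript
          // Upcasting sketch: v1 stored the address as a single string; v2 split
          // it into fields. Old events are migrated on read. Names are invented.
          type ShippedV1 = { version: 1; type: "Shipped"; address: string };
          type ShippedV2 = { version: 2; type: "Shipped"; street: string; city: string };

          function upcast(event: ShippedV1 | ShippedV2): ShippedV2 {
            if (event.version === 2) return event;
            // Breaking change: v1 kept the whole address in one string.
            const [street, city = "unknown"] = event.address.split(",").map(s => s.trim());
            return { version: 2, type: "Shipped", street, city };
          }
          ```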

          I feel like it’s a powerful pattern even at small scale, but you need a disciplined set of developers and a strong ops strategy to make it work.

          1. 4

            > event sourcing only gets nasty when you inevitably have a breaking change to an event contract and have to upgrade/downgrade events for different projections/handlers

            Our solution to this was a robust capability model. Capabilities limit access to resources, but a change in a capability is itself an event. So at the point when a contract changes, that change is itself modeled in the event log, and hence only affects events that occur after it.
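
            (Schematically, and simplifying a lot, it looks something like this; the names are illustrative, not our actual code:)

            ```typescript
            // Sketch: a schema-change event in the log switches which decoder is
            // used for subsequent events, so a contract change only affects events
            // that occur after it.
            type LogEntry =
              | { kind: "SchemaChanged"; contractVersion: number }
              | { kind: "Domain"; raw: unknown };

            interface DomainEvent { type: string; payload: unknown }
            type Decoder = (raw: unknown) => DomainEvent;

            function replay(log: LogEntry[], decoders: Map<number, Decoder>): DomainEvent[] {
              let version = 1; // contract in force at the start of the stream
              const out: DomainEvent[] = [];
              for (const entry of log) {
                if (entry.kind === "SchemaChanged") {
                  version = entry.contractVersion; // the change is itself an event
                } else {
                  const decode = decoders.get(version);
                  if (!decode) throw new Error(`no decoder for contract v${version}`);
                  out.push(decode(entry.raw));
                }
              }
              return out;
            }
            ```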

            1. 2

              > Our solution to this was a robust capability model.

              That sounds really interesting! Is there something I can read, or can you say a bit more about that?

            2. 1

              The article mentions being able to completely rebuild the application state. It makes sense in theory, but how does it work out in practice?

              I imagine that you might have to do event compaction over time or else the event storage would be massive. Seems like an area where those sharp edges might come out.

              1. 3

                A lot of folks end up using snapshots every n events or something similar, discarding the previous events. It’s an optimization that becomes essential fairly quickly.
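
                (Schematically, with invented names, loading then looks something like this: rehydrate from the latest snapshot and replay only the tail of the stream.)

                ```typescript
                // Snapshot sketch: persist state every N events, then load by replaying
                // only the events recorded after the snapshot. Names are invented.
                interface Snapshot<S> { version: number; state: S }

                function load<S, E>(
                  snapshot: Snapshot<S> | null,
                  eventsSince: (version: number) => E[], // tail of the stream
                  apply: (state: S, event: E) => S,      // the usual pure fold
                  initial: S,
                ): S {
                  const base = snapshot ?? { version: 0, state: initial };
                  return eventsSince(base.version).reduce(apply, base.state);
                }
                ```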

                1. 2

                  I run a darts scorer in my spare time. The state of every single game is stored only as an event stream. Account and profile data uses a conventional relational model.

                  I never rebuild the entire application state, only the game I’m interested in. Should I introduce new overall statistics, I would just rebuild game after game to backfill the statistics data.

                  Storage and volume haven’t been a problem on my tiny VPS. PostgreSQL sits at 300,000 games, each with about 300 events. If you want, I can look up the exact numbers in the evening.
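
                  (Backfilling a new statistic is then just a loop over game streams; roughly, with invented event shapes:)

                  ```typescript
                  // Sketch of backfilling per-game statistics by replaying one stream
                  // at a time. Event and storage shapes are invented for illustration.
                  type DartEvent = { type: "DartThrown"; player: string; score: number };
                  interface PlayerStats { darts: number; totalScore: number }

                  function statsForGame(events: DartEvent[]): Map<string, PlayerStats> {
                    const stats = new Map<string, PlayerStats>();
                    for (const e of events) {
                      const s = stats.get(e.player) ?? { darts: 0, totalScore: 0 };
                      stats.set(e.player, { darts: s.darts + 1, totalScore: s.totalScore + e.score });
                    }
                    return stats;
                  }

                  // Replay game after game; loadEvents/saveStats stand in for storage.
                  async function backfill(
                    gameIds: string[],
                    loadEvents: (id: string) => Promise<DartEvent[]>,
                    saveStats: (id: string, s: Map<string, PlayerStats>) => Promise<void>,
                  ): Promise<void> {
                    for (const id of gameIds) {
                      await saveStats(id, statsForGame(await loadEvents(id)));
                    }
                  }
                  ```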