
Event sourcing often implies storing one row per event, keyed by aggregate id:

event_id | event_type   | entity_type | entity_id | event_data
102      | OrderCreated | Order       | 101       | {...}
103      | OrderUpdated | Order       | 101       | {...}

This is perfectly fine, since it allows an aggregate to be rebuilt by replaying all events for a specific entity_id.
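To make the rebuild concrete, here is a minimal sketch (not from the original post) of replaying events for one aggregate id from a SQLite-backed event store; the table layout mirrors the one above, and the "apply" step is deliberately naive:

```python
import json
import sqlite3

# In-memory event store with the same columns as the table above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    event_id INTEGER PRIMARY KEY,
    event_type TEXT, entity_type TEXT, entity_id INTEGER, event_data TEXT)""")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?, ?)",
    [(102, "OrderCreated", "Order", 101, json.dumps({"total": 10})),
     (103, "OrderUpdated", "Order", 101, json.dumps({"total": 25}))])

def rebuild_order(entity_id):
    """Replay all events for one aggregate id, in order, to get current state."""
    state = {}
    rows = conn.execute(
        "SELECT event_type, event_data FROM events "
        "WHERE entity_type = 'Order' AND entity_id = ? ORDER BY event_id",
        (entity_id,))
    for event_type, event_data in rows:
        state.update(json.loads(event_data))  # naive apply: merge the payload
    return state

print(rebuild_order(101))  # -> {'total': 25}
```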

But let's take a more advanced scenario. You have 2 aggregates:

  • One is a Device aggregate, which manages the location (ip, ...) and properties (id, name, ...) of some power measurement devices installed in a company.
  • One is a ServiceAssignment aggregate, which allows a user to assign a list of services to one device, to monitor some electrical measurements ("CurrentService", "VoltageService", ...). The list of services is big (> 1000), because a device can monitor a lot of different electrical measures. Not all possible measures are monitored: the user chooses which ones by assigning them in the UI. This makes it impossible to have everything in one single bloated aggregate (and this is a simplified example; there are in fact more entities in each aggregate).

Some business rules:

  • A device can be deactivated; in this case all ServiceAssignments must also be deactivated (nothing is monitored anymore).
  • A device can be activated while some of its services stay deactivated.

In this scenario, when a user chooses to deactivate all devices in the UI, this could lead to 200 DeviceDeactivated events (which is still fine). But each ServiceAssignment that listens to DeviceDeactivated in turn deactivates all services matching the device_id, which triggers one ServiceAssignmentDeactivated event per assigned service. We then have 200 × 1000 = 200,000 ServiceAssignmentDeactivated events fired, which I think is already too much.
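The fan-out can be sketched as follows (all names are illustrative, not from a real framework): one handler per DeviceDeactivated event emits one ServiceAssignmentDeactivated per assigned service, so a single bulk UI action multiplies out to devices × services events:

```python
NUM_DEVICES = 200
SERVICES_PER_DEVICE = 1000

emitted = []

def on_device_deactivated(device_id, assigned_service_ids):
    # The ServiceAssignment side reacts to DeviceDeactivated by deactivating
    # every service matching the device_id -- one event each.
    for service_id in assigned_service_ids:
        emitted.append(("ServiceAssignmentDeactivated", device_id, service_id))

for device_id in range(NUM_DEVICES):
    on_device_deactivated(device_id, range(SERVICES_PER_DEVICE))

print(len(emitted))  # 200 * 1000 = 200000 events for one bulk action
```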

We could imagine a MassServiceAssignmentDeactivated event instead, with a list of ids inside, but then it cannot be stored the same way in the event store: the entity_id column is no longer applicable, which makes it difficult to rebuild the aggregates. It's hard to find online resources on this precise point. How would you handle such an N+1 fan-out that could lead to potentially millions of events?
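One possible workaround (my own sketch, not an established pattern from the post): store the mass event once with entity_id set to NULL and the affected ids in the payload, then make the rebuild also scan mass events whose id list contains the aggregate being rebuilt:

```python
import json

# Simplified in-memory event store; entity_id is None for the mass event
# because no single aggregate id applies.
event_store = [
    {"event_id": 1, "event_type": "ServiceAssignmentCreated",
     "entity_id": 7, "event_data": {"active": True}},
    {"event_id": 2, "event_type": "MassServiceAssignmentDeactivated",
     "entity_id": None,
     "event_data": {"service_ids": [5, 6, 7], "reason": "DeviceDeactivated"}},
]

def rebuild_service_assignment(service_id):
    state = {}
    for e in sorted(event_store, key=lambda e: e["event_id"]):
        if e["entity_id"] == service_id:
            state.update(e["event_data"])
        elif (e["event_type"] == "MassServiceAssignmentDeactivated"
              and service_id in e["event_data"]["service_ids"]):
            state["active"] = False  # the mass event applies to this aggregate
    return state

print(rebuild_service_assignment(7))  # -> {'active': False}
```

The trade-off is that rebuilding one aggregate now requires scanning (or indexing) mass events too, e.g. via a secondary event_id → affected ids lookup table, instead of a single filter on entity_id.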

  • Are you familiar with stream processing or buffer queues? Commented Jun 10, 2023 at 17:04
  • Not really, but if you are thinking about Kafka, it's not an option here, because our primary target is an edge device with limited CPU/RAM. Commented Jun 11, 2023 at 22:00
