This repository has been archived by the owner on Oct 3, 2024. It is now read-only.
I think we should definitely try to find a way to save the total order in which the events were published the first time, in order to ensure a consistent replay.
For the moment we use the event's 'timestamp'. So far, 45 events share their timestamp with two other events, and 26,102 events share their timestamp with one other event in my application. That is nearly 10% of my events.
This means the system can potentially end up in a different state after a replay, although this does not seem to have been a problem in my case so far.
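To illustrate why duplicate timestamps make replay ambiguous, here is a minimal sketch (hypothetical event shapes, not plutonium's actual types). The timestamp alone carries no tiebreaker, so when two events collide, their replay order depends on storage order rather than on the order in which they were originally published:

```javascript
// Hypothetical events: "a" and "b" share the same millisecond timestamp.
const events = [
  { id: "c", timestamp: 999 },
  { id: "a", timestamp: 1000 },
  { id: "b", timestamp: 1000 }, // collides with "a"
];

// Sorting by timestamp orders "c" first, but gives no information about
// whether "a" or "b" was actually published first: their relative order
// falls back to whatever order they happen to be stored in.
const replayOrder = [...events].sort((e1, e2) => e1.timestamp - e2.timestamp);

console.log(replayOrder.map((e) => e.id));
```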
Wow, thanks for the feedback on this. We should work it out asap, indeed.
I suppose the chances of having two events on the very same aggregate with the same timestamp are close to zero, because the domain would not process them anyway. But this does not prevent inconsistencies between aggregates.
Yes, this could be a solution; a Redis counter would work too, but it would add a useless dependency.
We have to think about whether we can tolerate holes in the sequence numbers. I don't think this is a problem; a total order is enough.
The important part is that these sequence numbers are correct according to the order in which the events were published to the listeners.
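A minimal sketch of such a counter (hypothetical names, assuming a single publishing process): a monotonically increasing number assigned at publish time. Gaps are harmless, since only the total order matters:

```javascript
// In-process monotonic sequence generator.
// Holes in the sequence are acceptable: only the total order matters.
let nextSequenceNumber = 0;

function assignSequence(event) {
  // Return a copy so the caller's object stays untouched.
  return { ...event, sequence: nextSequenceNumber++ };
}

const first = assignSequence({ name: "UserCreated" });
const second = assignSequence({ name: "UserRenamed" });

// Later publications always get strictly higher numbers.
console.log(first.sequence < second.sequence); // true
```

A database sequence or a Redis `INCR` would play the same role across processes; the in-memory version above only works inside a single publisher.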
According to the current code of the domain repository, this will be difficult to implement, since the events are published after being saved (a behaviour that remains important) and the aggregates are handled in parallel, even though the events of a single aggregate are handled sequentially. (Maybe it's time to handle this TODO: https://github.com/jbpros/plutonium/blob/alpha/lib/domain_repository.coffee#L162 😄)
A good place to generate and save this number could be the emitter, but then the event store would be inconsistent during the window between the moment the event is created and the moment it is emitted.
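One way to avoid that window (a sketch under assumed names, not the plutonium API): let the store assign the sequence number inside the same append operation that persists the event, and notify listeners only afterwards. Then the stored order and the published order cannot diverge:

```javascript
// Sketch: the store derives the sequence from its own length at append
// time, so persistence order and sequence order can never disagree.
class InMemoryEventStore {
  constructor() {
    this.events = [];
    this.listeners = [];
  }

  subscribe(listener) {
    this.listeners.push(listener);
  }

  append(event) {
    const stored = { ...event, sequence: this.events.length };
    this.events.push(stored);
    // Publish only after saving, preserving the save-then-publish
    // behaviour the thread says must remain.
    this.listeners.forEach((listener) => listener(stored));
    return stored;
  }
}

const store = new InMemoryEventStore();
const seen = [];
store.subscribe((e) => seen.push(e.sequence));
store.append({ name: "A" });
store.append({ name: "B" });
console.log(seen); // [0, 1]
```

With parallel aggregate handling, the real implementation would need the append itself to be serialized (e.g. a single writer or a transactional counter), but the principle is the same.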