Allow to "close" streams #146
Another idea is to maybe put the "closed" (and/or other) marker information into the filename, so that a scan of all streams can filter out closed streams early on without having to do additional IO on every file.
The interesting question is how those streams should behave on typical operations. Reading from a closed stream should likely work as normal, but the stream should not be registered for indexing new events, and trying to append to it should throw.
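The semantics above could be modeled roughly like this (an illustrative in-memory sketch, not the project's API; class and error names are assumptions):

```python
class StreamClosedError(Exception):
    """Raised when appending to a closed stream."""

class Stream:
    """Toy model of the proposed closed-stream semantics:
    reads still work, appends raise, and closed streams are
    excluded when matching new events for indexing."""

    def __init__(self):
        self.events = []
        self.closed = False

    def append(self, event):
        if self.closed:
            raise StreamClosedError("cannot append to a closed stream")
        self.events.append(event)

    def read(self):
        # Reading works regardless of the closed flag.
        return list(self.events)

    def should_index_new_events(self):
        # Closed streams are skipped when checking new documents.
        return not self.closed
```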
Right now, any index/stream defined will live indefinitely and have every future event/document checked against its matcher, even if the stream is known to no longer match any future events. The more streams get created, the slower the whole system will become. Some examples are time-boxed streams, such as by fiscal year/calendar week/etc., or by correlationId (processes).
Therefore it would be useful if streams could be marked "closed"/"finished", or maybe even "deleted" (to prevent them from being recreated).
One approach could be to define a specific combination of index entry values (sequence number, file position, data size, partition id), e.g. (0, 0, 0, 0), as this marker and check for it as the last entry in the index when opening. Any index ending in that marker should no longer be considered when checking new documents.