Description
Problem
We need to reliably and concurrently sync tasks and comments to and from GitHub in a non-blocking way.
By providing a timestamp-based concurrency control system we can use a known algorithm to make our GitHub integration more robust.
More importantly, we will be able to unblock our other objectives. We cannot proceed with onboarding projects or volunteers unless GitHub sync is stable, since our overall strategy depends on us connecting volunteers to tasks.
Tasks
In scope
- Cleanup and decouple existing modules. Goal is to flatten them out as much as possible, to make it easier to facilitate a queue system
- Add `TaskSyncOperation` model
- Add `CommentSyncOperation` model
- Create a `TaskSyncOperation` when issue webhook is received (a rough sketch follows this list)
- Create a `TaskSyncOperation` when pull request webhook is received
- Create a `TaskSyncOperation` when the task is created/updated from the client
- Create a `CommentSyncOperation` when issue comment webhook is received
- Create a `CommentSyncOperation` when the comment is created/updated from the client
- Consider timestamps from GitHub to be the latest, i.e. don't be pessimistic (due to second-level granularity): https://platform.github.community/t/timestamp-granularity/4663
- Define proposal for the queuing system
- Add an admin dashboard for the operations
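As a rough illustration of the webhook-to-operation step, here is a minimal sketch. None of these modules exist yet; every module, schema, and field name in it is an assumption, not settled code:

```elixir
defmodule GitHub.Webhook.IssueHandler do
  alias MyApp.{Repo, TaskSyncOperation}

  # Turns an "issues" webhook payload into a queued, inbound sync operation.
  def handle(%{"issue" => issue, "installation" => %{"id" => installation_id}}) do
    # GitHub sends updated_at as an ISO8601 string with second-level granularity
    {:ok, github_updated_at, _offset} = DateTime.from_iso8601(issue["updated_at"])

    Repo.insert(%TaskSyncOperation{
      direction: :inbound,
      state: :queued,
      github_app_installation_id: installation_id,
      github_task_external_id: issue["id"],
      github_updated_at: github_updated_at
    })
  end
end
```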
Out of scope
- Add back pressure for rate limits
- Respond to 304 not modified for both GET and PATCH
- find_or_create vs create_or_update (we should probably change to find_or_create) → XLinker
- Add fetch step after receiving the webhook
- Provide queue feedback to the user for the task
- Provide queue feedback to the user for the comment
- Figure out if users are only seeing what they're allowed to see (the primary concern is installations)
- Double-check timestamp when processing
- Figure out if an atomic step system is feasible, where we would not need operations and each record update could instead be safely executed on its own.
- Think about breaking apart sync steps into their own “operations” vs Ecto.Multi transactions
Outline
We would have a sync operation for each type of internal record we want to sync. For example:
- `TaskSyncOperation`
- `CommentSyncOperation`
Every sync operation record, regardless of type, would have (a rough migration sketch follows this list):
- `direction` - `:inbound | :outbound`
- `github_app_installation_id` - the `id` of the app installation for this sync
- `github_updated_at` - the last updated at timestamp for the resource on GitHub
- `canceled_by` - the `id` of the `SyncOperation` that canceled this one
- `duplicate_of` - the `id` of the `SyncOperation` that this is a duplicate of
- `dropped_for` - the `id` of the `SyncOperation` that this was dropped in favor of
- `state`
  - `:queued` - waiting to be processed
  - `:processing` - currently being processed; limited to one per instance of the synced record, e.g. `comment_id`
  - `:completed` - successfully synced
  - `:errored` - should be paired with a reason for the error
  - `:canceled` - another operation supersedes this one, so we should not process it
  - `:dropped` - this operation was outdated and was dropped
  - `:duplicate` - another operation already existed that matched the timestamp for this one
  - `:disabled` - we received the operation but cannot sync it because the repo no longer syncs to a project
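A sketch of those shared fields as a migration, using `comment_sync_operations` as the example table. Table, column, and module names here are assumptions (including `error_reason` as the "reason for the error" column), not settled decisions:

```elixir
defmodule Repo.Migrations.CreateCommentSyncOperations do
  use Ecto.Migration

  def change do
    create table(:comment_sync_operations) do
      add :direction, :string, null: false                 # "inbound" | "outbound"
      add :state, :string, null: false, default: "queued"  # :queued/:processing/:completed/...
      add :error_reason, :string                           # pairs with the :errored state
      add :github_updated_at, :utc_datetime                # last updated_at seen from GitHub
      add :github_app_installation_id, references(:github_app_installations)
      add :canceled_by_id, references(:comment_sync_operations)
      add :duplicate_of_id, references(:comment_sync_operations)
      add :dropped_for_id, references(:comment_sync_operations)

      timestamps()
    end

    create index(:comment_sync_operations, [:state])
  end
end
```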
Then each type would have type-specific fields, e.g. a `CommentSyncOperation` (sketched as a schema below) would have:
- `comment_id` - the `id` of our `comment` record
- `github_comment_id` - the `id` of our cached record for the external resource
- `github_comment_external_id` - the `id` of the resource from the external provider (GitHub)
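Pulling the shared and comment-specific fields together, the schema could look roughly like this. This assumes Ecto 3.5+ so `Ecto.Enum` is available; all module and association names are illustrative:

```elixir
defmodule CommentSyncOperation do
  use Ecto.Schema

  @states ~w(queued processing completed errored canceled dropped duplicate disabled)a

  schema "comment_sync_operations" do
    # fields shared by every sync operation type
    field :direction, Ecto.Enum, values: [:inbound, :outbound]
    field :state, Ecto.Enum, values: @states, default: :queued
    field :error_reason, :string
    field :github_updated_at, :utc_datetime
    belongs_to :github_app_installation, GithubAppInstallation
    belongs_to :canceled_by, __MODULE__
    belongs_to :duplicate_of, __MODULE__
    belongs_to :dropped_for, __MODULE__

    # comment-specific fields
    belongs_to :comment, Comment                 # our internal comment record
    belongs_to :github_comment, GithubComment    # our cached record for the GitHub resource
    field :github_comment_external_id, :integer  # GitHub's own id for the comment

    timestamps()
  end
end
```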
If the event is due to the resource being created, there will not be a conflict. If the resource was created from our own clients, then there is no external GitHub ID yet; the same is true of events coming in from external providers, where there is no internal record yet. I'm not yet clear whether we should conduct any conflict checking on these event types, but my guess is no. They should likely jump straight to `:processing`.
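A tiny sketch of that guess, assuming the webhook payload's `action` field is how we detect creation events (the module and function names are hypothetical):

```elixir
defmodule SyncOperation.InitialState do
  # "opened" covers issue/pull request webhooks; "created" covers issue comments.
  # Creation events have nothing to conflict with, so they could skip the queue.
  def for_webhook(%{"action" => action}) when action in ["opened", "created"], do: :processing
  def for_webhook(_payload), do: :queued
end
```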
When an event comes in from GitHub we should (using a `github_comment` as our example; a sketch of the conflict checks follows this list):
- delegate to the proper sync operation table for the particular resource (in our example this would be `comment_sync_operations`)
- check if there are any operations for the `github_comment_external_id` where:
  - the `github_updated_at` is after our operation's last updated timestamp (limit 1)
    - if yes, set state to `:dropped` and stop processing, set `dropped_for` to the `id` of the operation in the `limit 1` query
  - the `github_updated_at` timestamp for the relevant _record_ is equal to our operation's last updated timestamp (limit 1)
    - if yes, set state to `:duplicate` and stop processing, set `duplicate_of` to the `id` of the operation in the `limit 1` query
  - the `modified_at` timestamp for the relevant _record_ is after our operation's last updated timestamp
    - if yes, set state to `:dropped` and stop processing, set `dropped_for` to the `id` of the operation in the `limit 1` query
- check if there are any `:queued` operations for the `integration_external_id` where `github_updated_at` is before our operation's last updated timestamp
  - if yes, set state of those operations to `:canceled` and set `canceled_by` to the `id` of this event
- check if there is any other `:queued` operation or `:processing` operation for the `integration_external_id`
  - if yes, set state to `:queued`
- when `:processing`, check again to see if we can proceed, then create or update the `comment` through the relationship on the record for `comment_id`
- when `:completed`, kick off a process to look for the next `:queued` item where the `github_updated_at` timestamp is the oldest
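A rough Ecto sketch of the dropped/duplicate checks above, assuming the `CommentSyncOperation` schema sketched earlier and an application `Repo`; the module names and the `transition/2` helper are hypothetical, and the canceled/queued branches would follow the same pattern:

```elixir
defmodule CommentSyncOperation.ConflictCheck do
  import Ecto.Query
  alias Ecto.Changeset
  alias MyApp.Repo

  # Finds an existing operation for the same GitHub comment with a newer or
  # identical github_updated_at and marks this operation :dropped or :duplicate.
  def run(%CommentSyncOperation{} = operation) do
    newer = conflicting(operation, :newer)
    same = conflicting(operation, :same)

    cond do
      newer -> transition(operation, state: :dropped, dropped_for_id: newer.id)
      same -> transition(operation, state: :duplicate, duplicate_of_id: same.id)
      true -> {:ok, operation}
    end
  end

  defp conflicting(operation, :newer) do
    operation
    |> base_query()
    |> where([o], o.github_updated_at > ^operation.github_updated_at)
    |> Repo.one()
  end

  defp conflicting(operation, :same) do
    operation
    |> base_query()
    |> where([o], o.github_updated_at == ^operation.github_updated_at)
    |> Repo.one()
  end

  # other operations touching the same external GitHub comment, limit 1
  defp base_query(operation) do
    CommentSyncOperation
    |> where([o], o.github_comment_external_id == ^operation.github_comment_external_id)
    |> where([o], o.id != ^operation.id)
    |> limit(1)
  end

  defp transition(operation, changes) do
    operation |> Changeset.change(changes) |> Repo.update()
  end
end
```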
Within the logic for updating the given record, we would also need to check whether the record's updated timestamp is after the operation's timestamp. If it is, we need to bubble up the changeset validation error and mark the operation as `:dropped` per the above.
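One way that guard could look as a changeset validation (a sketch; the `modified_at` field comes from the checks above, the rest of the names are assumptions):

```elixir
defmodule Comment.SyncGuard do
  import Ecto.Changeset

  # Rejects the update when the local record changed after the operation's
  # GitHub timestamp; the caller bubbles this error up and marks the op :dropped.
  def validate_not_stale(changeset, %DateTime{} = github_updated_at) do
    modified_at = get_field(changeset, :modified_at)

    if modified_at && DateTime.compare(modified_at, github_updated_at) == :gt do
      add_error(changeset, :modified_at, "was modified after the incoming sync operation")
    else
      changeset
    end
  end
end
```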
Some upsides of the approaches above that I wanted to document, in no particular order:
- The tracking above generates some implicit audit trails that will be helpful for debugging.
- Any unique-per-record queued operations can be run in parallel without issue, i.e. we can run operations for `%Comment{id: 1}` and `%Comment{id: 2}` without any conflict.
- We can avoid "thundering herd" problems when the system receives back pressure by having control over precisely how the queue is processed.
- We can use this in conjunction with rate limiting to only process the number of events we have in the queue for the given rate limit and defer further processing until after the rate limit has expired.