In Python, there are two paths to achieve concurrency/parallelism: the threading module and the multiprocessing module. Currently, only a threading lock is used when synchronizing the storing and deleting of objects, which is completely bypassed when using multiprocessing.
To Do:
Add a multiprocessing lock and a corresponding shared list (to track which identifiers are locked) for storing and deleting objects and metadata (see the sketch after this list).
Document how to use multiprocessing locks (e.g., setting up a global variable before calling HashStore).
Check whether there is another, simpler way to do this.
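A minimal sketch of what the first item could look like, not HashStore's actual implementation: a `multiprocessing` lock plus a Manager-backed list tracking which identifiers are currently being stored or deleted, shared with worker processes through a `Pool` initializer. All names here (`store_object`, `locked_identifiers`, etc.) are hypothetical.

```python
import multiprocessing
import time

def init_sync(lock, locked_ids):
    """Pool initializer: expose the shared primitives as globals in each worker."""
    global object_lock, locked_identifiers
    object_lock = lock
    locked_identifiers = locked_ids

def store_object(identifier):
    """Illustrative store guarded by the shared lock and identifier list."""
    # Wait until no other process is storing/deleting this identifier.
    while True:
        with object_lock:
            if identifier not in locked_identifiers:
                locked_identifiers.append(identifier)
                break
        time.sleep(0.01)  # back off briefly instead of spinning hard
    try:
        print(f"storing {identifier}")  # ... write the object/metadata here ...
    finally:
        with object_lock:
            locked_identifiers.remove(identifier)

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    shared_lock = manager.Lock()  # proxy lock, picklable across processes
    shared_ids = manager.list()   # proxy list, visible to all workers
    with multiprocessing.Pool(2, init_sync, (shared_lock, shared_ids)) as pool:
        pool.map(store_object, ["pid-a", "pid-b", "pid-a"])
```

Using a `Manager`-backed lock and list keeps the sketch simple because the proxies can be handed to worker processes directly; a plain `multiprocessing.Lock` would also work if passed through the pool initializer.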
Documented how to switch HashStore from threading synchronization to multiprocessing. There may be a more elegant way to do this, but I feel this is good enough at this time.
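From the caller's side, the documented switch might look roughly like the sketch below: a flag set up before HashStore is instantiated decides whether module-level synchronization uses threading or multiprocessing primitives. The environment-variable name and the selection logic are assumptions for illustration, not the library's confirmed API.

```python
import multiprocessing
import os
import threading

# Assumption: a flag (here an environment variable, name illustrative only)
# is set by the caller before HashStore is instantiated.
use_multiprocessing = os.getenv("USE_MULTIPROCESSING", "False") == "True"

if use_multiprocessing:
    # Process-safe primitives for callers that fan work out with multiprocessing.
    object_lock = multiprocessing.Lock()
    locked_identifiers = multiprocessing.Manager().list()
else:
    # Default: thread-safe primitives, matching the current threading-only behavior.
    object_lock = threading.Lock()
    locked_identifiers = []
```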