Hello,
I'm opening this feature request issue following @nielash suggestion under my post in the forum.
I would like to ask for a new flag for the hasher backend that will make the database read-only when downloading a file from the remote.
This would be useful for detecting data corruption when working with backends that don't support checksums.
To explain further: as it stands, when uploading a file to a remote, the hasher computes the hash on the fly and stores it in the database.
If the file's hash changes afterwards (whether because of network issues after the checksum was taken, or anything else on the remote's side) and you download it again, the hasher checks it against the one in the database and simply updates the entry with the new hash; no retries are made (`-vv` shows as much).

So my idea was: instead of simply updating the entry, rclone would retry downloading the file up to the given number of `--retries` (since the issue might still be in the download process, and the file might actually be fine on the remote) and eventually fail the transfer with the error "hashes differ". This way the entry in the database won't be updated, and you can see from the logs that the file differs from the one you had at the beginning.
In addition to this, @nielash suggested another interesting flag: a fully read-only mode for the database, working exclusively with manually imported hashes.