Clean out old data/runs? #750
Caveat: I'm a packrat, but I like this option as it allows stats about how the project has grown.
Another option is that we could de-dupe data. This would be more technically difficult to transition to, but we could save a lot of space in S3 by hashing the output ZIPs and adjusting the database to point to the first copy of any data that is exactly the same, deleting all duplicates. For example, the most recent 5 runs of an unchanged source would all point at a single ZIP.
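A rough sketch of what that de-dupe pass could look like, assuming a bucket/prefix layout I've made up and using the S3 ETag as a stand-in for the content hash (this is illustrative only, not the project's actual pipeline):

```python
import boto3
from collections import defaultdict

# Sketch only: bucket and prefix names are placeholders.
s3 = boto3.client("s3")
BUCKET = "example-runs-bucket"

def find_duplicates(prefix="runs/"):
    """Group objects by ETag and report every key that duplicates an
    earlier ("canonical") copy of the same bytes."""
    first_seen = {}                  # etag -> key of the canonical copy
    duplicates = defaultdict(list)   # canonical key -> duplicate keys
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            # Note: ETags of multipart uploads are composite values, not
            # plain MD5s, so a stored content hash would be more reliable.
            etag = obj["ETag"].strip('"')
            if etag in first_seen:
                duplicates[first_seen[etag]].append(obj["Key"])
            else:
                first_seen[etag] = obj["Key"]
    return duplicates

# The database rows for each duplicate key would then be repointed at the
# canonical key before the duplicate objects are deleted.
```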
S3 has several storage classes. We have our buckets set up to move older data to progressively cheaper storage as it ages, and then it gets deleted. Here's the Terraform config for achieving that:

```hcl
resource "aws_s3_bucket" "unique-name" {
  bucket = "bucket-name"
  acl    = "private"

  versioning {
    enabled = false
  }

  lifecycle_rule {
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 240
    }
  }
}
```
I'd also be 👍 for moving the data to cold storage rather than deleting it forever.
Yep, we already have 50+ TB in Standard-IA as part of a transition at 30 days. Part of the reason I want to delete old data is that people/spiders (ignoring our robots.txt) go back and download that really old data, which adds extra cost to our bandwidth bill.
Can a spider access data in Glacier, or does it need to be defrosted by its owner in order to be available? I am also a packrat and I’d hate to lose information, so moving older files to Glacier where they might be publicly inaccessible does seem like a good option.
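For context, an object in Glacier can't be fetched with a plain GET; someone with permission has to issue a restore request first, which makes a readable copy available for a limited time. A minimal boto3 sketch with made-up bucket/key names:

```python
import boto3

# Sketch only: bucket and key names are placeholders.
s3 = boto3.client("s3")

# Kick off a temporary restore; the object stays in Glacier, but a readable
# copy becomes available for the requested number of days.
s3.restore_object(
    Bucket="example-archive-bucket",
    Key="runs/123456/source.zip",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}},
)

# Until the restore completes, a normal GET fails; HEAD reports the restore
# status via the Restore field.
print(s3.head_object(Bucket="example-archive-bucket",
                     Key="runs/123456/source.zip").get("Restore"))
```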
Re: de-duping the data, we could switch to a content-addressable URL scheme that makes this happen automatically moving forward. I believe we already store the hash of the zip contents, so there’s no need to recalculate this.
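For illustration, a content-addressed key could be derived directly from that stored hash, so byte-identical ZIPs collapse onto one object automatically; a hypothetical sketch (the prefix and fan-out scheme are made-up details, not an existing convention):

```python
import hashlib

def content_addressed_key(zip_bytes: bytes, prefix: str = "zips/") -> str:
    """Build an S3 key from the SHA-256 of the ZIP contents, so two runs
    that produce byte-identical output map to the same object."""
    digest = hashlib.sha256(zip_bytes).hexdigest()
    # e.g. zips/ab/abcdef0123...zip (two-char fan-out keeps prefixes shallow)
    return f"{prefix}{digest[:2]}/{digest}.zip"

# A run record would then store this key (or just the digest) instead of a
# run-specific path; uploading becomes a no-op when the key already exists.
```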
I think you'll need a new private bucket; once the files are moved there, no one is getting at them except whoever you allow :)
Actually, we (geocode.earth) would be happy to hold the historic data in a bucket we control (and pay for); we could then allow anyone to take copies (so long as they do it from the same AWS datacentre). This would actually suit us too, because we'd like a copy of everything anyway.
How much data are we talking about exactly? (all of it)
Huh, I was thinking about bandwidth when I said 50TB before. We've got 6.2TB in Standard-IA, 183GB in ReducedRedundancy (from back when it was cheaper than Standard), and 186GB in Standard. Now that we no longer have any unauthenticated requests going to the bucket, we can probably turn on requester pays and let anyone grab whatever they want from it. I'll file a separate ticket for that.
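For reference, requester pays is a single bucket-level setting; a minimal boto3 sketch with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name. With this set, anonymous requests are refused and
# authenticated downloaders pay the request/transfer costs themselves (they
# must pass RequestPayer="requester" on their GETs).
s3.put_bucket_request_payment(
    Bucket="example-archive-bucket",
    RequestPaymentConfiguration={"Payer": "Requester"},
)
```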
It sounds like there isn't an appetite for deleting old data. That's OK; storage itself isn't really all that expensive. Implementing a content-addressable system would be great and would help reduce waste. I have an S3 inventory running right now that will tell us just how much data is duplicated. If deduping would save a huge amount of space, I'll probably try to implement it sooner rather than later.
No, I don't think stuff in Glacier is accessible with a standard request.

Another option is to remove links to runs/sets older than 30 days. Maybe you have to log in with GitHub to see the links?
I would guess there would be a ton of duplicated data. It would be nice to keep the historical data, but I'm all for deleting duplicate data. Further wastage would be reduced by making HTTP calls with conditional headers.
I just ran a quick dedupe test on one of the recent S3 inventories and found that there are 3396619012547 bytes (3.4TB) in unique files and 3587042594660 bytes (3.5TB) in files that duplicate those unique files. So building a simple file dedupe system would cut our storage bill in half (from ~$2.73/day to ~$1.37/day). That's pretty good, but not as good as I thought it would be.
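For anyone who wants to reproduce that kind of number, it falls out of grouping an S3 inventory listing by checksum. A rough sketch, assuming a gzipped inventory CSV whose columns are key, size, and ETag in that order (the real columns depend on the inventory configuration):

```python
import csv
import gzip

def dedupe_stats(inventory_csv_gz):
    """Return (unique_bytes, duplicate_bytes) for an inventory file.
    The first object seen with a given ETag counts as unique; every later
    object with the same ETag counts as duplicate."""
    unique_bytes = duplicate_bytes = 0
    seen = set()
    with gzip.open(inventory_csv_gz, "rt", newline="") as handle:
        for row in csv.reader(handle):
            key, size, etag = row[0], int(row[1]), row[2]
            if etag in seen:
                duplicate_bytes += size
            else:
                seen.add(etag)
                unique_bytes += size
    return unique_bytes, duplicate_bytes
```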
In the interest of saving money and not having an infinitely growing S3 bucket, I'd like to discuss the idea of deleting runs that are old and no longer used.
Here are the rules I'm thinking of:
Another configuration I could see would be a "backoff" where we keep X number of frequent runs, Y number of monthly runs, and Z number of yearly runs.
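A sketch of how that backoff selection might work, with made-up defaults for X, Y, and Z (this is illustrative, not an actual retention policy):

```python
# Illustrative only: keeps the newest `keep_recent` runs, plus the newest run
# from each of the most recent `keep_monthly` months and `keep_yearly` years.
def runs_to_keep(runs, keep_recent=10, keep_monthly=12, keep_yearly=5):
    """runs: iterable of (run_id, datetime). Returns the set of run ids to
    keep; everything else would be a candidate for deletion."""
    runs = sorted(runs, key=lambda r: r[1], reverse=True)  # newest first
    keep = {run_id for run_id, _ in runs[:keep_recent]}

    monthly, yearly = {}, {}
    for run_id, when in runs:
        monthly.setdefault((when.year, when.month), run_id)  # newest per month
        yearly.setdefault(when.year, run_id)                 # newest per year

    keep.update(list(monthly.values())[:keep_monthly])
    keep.update(list(yearly.values())[:keep_yearly])
    return keep
```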
I'm curious what others think. Are there very compelling use cases for extremely old data that I'm not thinking of? Would this be really hard to implement given our current data archive model?