Given a naive write of sequential data, the resulting file contains many stretches of repeated values (mostly 0x00's). This is great for gzip, a byte-stream de-duplicator.
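For illustration, a minimal sketch of what such a naive write might look like in Go (the file name, sample count, and fixed-width big-endian encoding are assumptions, so the exact sizes won't match the listing below):

```go
package main

import (
	"encoding/binary"
	"log"
	"os"
)

func main() {
	// Write ascending uint64 samples as fixed-width big-endian values.
	// Small integers stored in 8 bytes are mostly 0x00 bytes, which is
	// exactly the kind of redundancy gzip removes well.
	f, err := os.Create("data")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	buf := make([]byte, 8)
	for i := uint64(0); i < 10000; i++ {
		binary.BigEndian.PutUint64(buf, i)
		if _, err := f.Write(buf); err != nil {
			log.Fatal(err)
		}
	}
}
```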
If I gzip the file:
```
$ gzip -k data && ls -alsh data*
 84K -rw-rw-r-- 1 wmeitzler wmeitzler  82K Sep 20 09:45 data
4.0K -rw-rw-r-- 1 wmeitzler wmeitzler 1.3K Sep 20 09:45 data.gz
```
I achieve a compression ratio of 21x!
Note that this only provides real value when long stretches of adjacent datapoints are similar. If I populate the file with truly random values rather than ascending ones, I can only compress it from 82 KB to 80 KB, and I'm paying CPU time to produce these smaller files. I suspect, but have not validated, that a meaningful number of TSDB use cases generate series with the kind of adjacent similarity that benefits from gzip compression.
Given that Go's compress/gzip package exposes gzip as streaming readers and writers, I propose exploring their use when reading and writing data files.
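To make the idea concrete, here is a rough sketch (not Prometheus code; the fixed-width sample encoding and file layout are assumptions) of wrapping file I/O in the package's streaming gzip.Writer and gzip.Reader:

```go
package main

import (
	"compress/gzip"
	"encoding/binary"
	"io"
	"log"
	"os"
)

// writeCompressed streams fixed-width samples through a gzip.Writer so the
// file is compressed as it is written, without buffering it all in memory.
func writeCompressed(path string, samples []uint64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	zw := gzip.NewWriter(f)
	buf := make([]byte, 8)
	for _, s := range samples {
		binary.BigEndian.PutUint64(buf, s)
		if _, err := zw.Write(buf); err != nil {
			return err
		}
	}
	// Close flushes buffered compressed data and writes the gzip footer.
	return zw.Close()
}

// readCompressed streams the samples back out through a gzip.Reader.
func readCompressed(path string) ([]uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	zr, err := gzip.NewReader(f)
	if err != nil {
		return nil, err
	}
	defer zr.Close()

	var samples []uint64
	buf := make([]byte, 8)
	for {
		if _, err := io.ReadFull(zr, buf); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		samples = append(samples, binary.BigEndian.Uint64(buf))
	}
	return samples, nil
}

func main() {
	samples := make([]uint64, 10000)
	for i := range samples {
		samples[i] = uint64(i)
	}
	if err := writeCompressed("data.gz", samples); err != nil {
		log.Fatal(err)
	}
	back, err := readCompressed("data.gz")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read back %d samples", len(back))
}
```

Because both ends are streams, reads can decompress incrementally rather than inflating a whole file up front; the CPU cost noted above would still need to be measured against real ingestion and query workloads.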