Releases · deckhouse/prompp
v0.6.2
v0.6.1
Fixes
- Empty Block Creation Check. Added validation to prevent the creation of empty historical blocks during conversion under specific conditions.
- Handling of Corrupted Historical Blocks. Improved handling of corrupted or empty historical blocks to prevent service crashes.
- Startup Error Handling. Fixed an issue where errors occurring before TSDB initialization could lead to a deadlock that required manually terminating the process.
v0.6.0
Fixes
- Remove chunks data on conversion. Prompptool now removes `chunks_data` when converting a vanilla WAL. These files can pin a large amount of mmapped memory at runtime.
Features
- Unused Data Unloading. In most cases, queries touch only 6–8% of all series in the TSDB. The remaining series can be unloaded to disk and loaded on demand. This feature can save up to 20% of RAM utilization and has no visible impact on querying unloaded series. If a series is queried by rules, it will not be unloaded. The feature is disabled by default and can be activated with the feature flag `unload_data_storage`.
- Omitting Out-of-Order StaleNaN Samples. Unlike vanilla Prometheus, Prom++ allows adding out-of-order samples and overwriting existing data when timestamps match. However, this behavior conflicts with the handling of StaleNaNs, which are sometimes intentionally written over existing data or with a delay so they can be automatically discarded if fresher data is available. The mechanism for writing to past timestamps no longer applies to StaleNaNs (see the sketch after this list).
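A minimal sketch of the StaleNaN rule, assuming a simplified append path. The function and its signature are hypothetical, not the actual Prom++ append code; only `value.IsStaleNaN` and the `StaleNaN` bit pattern come from the real Prometheus `model/value` package:

```go
package main

import (
	"fmt"
	"math"

	"github.com/prometheus/prometheus/model/value"
)

// acceptSample sketches the decision described above. Ordinary samples may
// still be written out of order, but a StaleNaN arriving at or before the
// newest stored timestamp is omitted instead of overwriting existing data.
func acceptSample(lastTS, ts int64, v float64) bool {
	if value.IsStaleNaN(v) && ts <= lastTS {
		return false // omit: staleness markers no longer rewrite the past
	}
	return true
}

func main() {
	staleNaN := math.Float64frombits(value.StaleNaN)
	fmt.Println(acceptSample(1000, 900, 42.0))     // true: out-of-order sample allowed
	fmt.Println(acceptSample(1000, 900, staleNaN)) // false: out-of-order StaleNaN dropped
}
```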
Enhancements
- Scrape Parser Optimization. A double-pass process was used for scraped data: parsing, then reading the parsed data while sharding samples. This allowed parsing the text once and quickly reading samples in all shards in parallel. However, it used a substantial amount of memory due to the intermediate state of parsed samples built on top of the source byte buffer. In this version, new compression algorithms have been added, reducing the memory requirement by up to 10%.
- File Caches Reduction. WAL files are read once and afterwards only written to. To reduce cached pages in memory, the files are reopened with the `O_WRONLY` flag after reading. An `fadvise` syscall was also added to mark written and read pages as no longer needed, which reduces excessive caching (see the sketch after this list).
- Dependency Updates. Dependencies have been updated to mitigate CVEs.
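A minimal Linux-only sketch of this technique, assuming a WAL segment that has just been fully read. `reopenWriteOnly` and the segment path are illustrative, not the actual Prom++ code; `unix.Fadvise` comes from `golang.org/x/sys/unix`:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// reopenWriteOnly reopens an already-read WAL segment with O_WRONLY and
// advises the kernel that its cached pages are no longer needed, so they
// can be evicted instead of lingering in the page cache.
func reopenWriteOnly(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		return nil, err
	}
	// Offset 0 with length 0 covers the whole file.
	if err := unix.Fadvise(int(f.Fd()), 0, 0, unix.FADV_DONTNEED); err != nil {
		f.Close()
		return nil, err
	}
	return f, nil
}

func main() {
	f, err := reopenWriteOnly("wal/00000001") // hypothetical segment path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
}
```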
v0.6.0-rc2
Fixes
- Remove chunks data on conversion. Prompptool now removes `chunks_data` when converting a vanilla WAL. These files can pin a large amount of mmapped memory at runtime.
Features
- Unused Data Unloading. In most cases, queries touch only 6–8% of all series in the TSDB. The remaining series can be unloaded to disk and loaded on demand. This feature can save up to 20% of RAM utilization and has no visible impact on querying unloaded series. If a series is queried by rules, it will not be unloaded. The feature is disabled by default and can be activated with the feature flag `unload_data_storage`.
Enhancements
- Scrape Parser Optimization. A double-pass process was used for scraped data: parsing, then reading the parsed data while sharding samples. This allowed parsing the text once and quickly reading samples in all shards in parallel. However, it used a substantial amount of memory due to the intermediate state of parsed samples built on top of the source byte buffer. In this version, new compression algorithms have been added, reducing the memory requirement by up to 10%.
- File Caches Reduction. WAL files are read once and afterwards only written to. To reduce cached pages in memory, the files are reopened with the `O_WRONLY` flag after reading.
- Dependency Updates. Dependencies have been updated to mitigate CVEs.
v0.5.2
Fixes
- Flushing corrupted shard. On start, all heads are converted, which includes flushing buffered data to disk. This could lead to a crash on start if a corrupted, non-persisted head was present.
v0.6.0-rc1
Features
- Unused Data Unloading. In most cases, queries touch only 6–8% of all series in the TSDB. The remaining series can be unloaded to disk and loaded on demand. This feature can save up to 20% of RAM utilization and has no visible impact on querying unloaded series. If a series is queried by rules, it will not be unloaded. The feature is disabled by default and can be activated with the feature flag `unload_data_storage`.
Enhancements
- Scrape Parser Optimization. A double-pass process was used for scraped data: parsing, then reading the parsed data while sharding samples. This allowed parsing the text once and quickly reading samples in all shards in parallel. However, it used a substantial amount of memory due to the intermediate state of parsed samples built on top of the source byte buffer. In this version, new compression algorithms have been added, reducing the memory requirement by up to 10%.
- File Caches Reduction. WAL files are read once and afterwards only written to. To reduce cached pages in memory, the files are reopened with the `O_WRONLY` flag after reading.
- Dependency Updates. Dependencies have been updated to mitigate CVEs.
v0.5.1
Fixes
- Incorrect Regex Part Caching. The matcher processing pipeline previously had legacy caching of regex parts keyed by pointer addresses, which led to incorrect behavior with certain regex patterns such as `variant1|variant2|variant3`. Since this caching had no performance benefit, it was removed.
v0.5.0
- Base Prometheus version bumped to 2.55.1. This unlocks switching from Prometheus 3.x installations to Prom++.
- Updated dependencies to mitigate CVEs.
- Fixed potential problems found by static analysis.
v0.5.0-rc2
- Base Prometheus version bumped to 2.55.1. This unlocks switching from Prometheus 3.x installations to Prom++.
- Updated dependencies to mitigate CVEs.
v2.53.2-0.4.0
Fixes
- Use non-exclusive lock for head conversion. Conversion is a long operation with disk writes, but it is read-only with respect to the rotated head, so queries can run in parallel (see the sketch below).
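A minimal sketch of the locking change, using a plain `sync.RWMutex` and hypothetical types (the real Prom++ head is more involved): conversion takes the shared lock, so queries, which also take the shared lock, are no longer blocked for its whole duration.

```go
package main

import (
	"fmt"
	"sync"
)

// Head stands in for a rotated in-memory head; names are illustrative.
type Head struct {
	mu   sync.RWMutex
	data map[string][]float64
}

// convert is long-running and write-heavy on disk, but read-only with
// respect to the head, so a shared lock is sufficient.
func (h *Head) convert() {
	h.mu.RLock()
	defer h.mu.RUnlock()
	// ... encode h.data into a persistent block on disk ...
}

// query also takes the shared lock and therefore runs concurrently with
// convert instead of waiting behind an exclusive lock.
func (h *Head) query(series string) []float64 {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return h.data[series]
}

func main() {
	h := &Head{data: map[string][]float64{"up": {1, 1, 1}}}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); h.convert() }()
	go func() { defer wg.Done(); fmt.Println(h.query("up")) }()
	wg.Wait()
}
```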
Features
- Added the feature flag `head_default_number_of_shards` to adjust the number of shards (default is 2). Increasing the number of shards improves write operations while potentially slightly slowing down read operations and increasing memory consumption. This feature flag is temporary and will be removed in favor of automatic shard count calculation in the future.
- Introduced a two-stage process for series selection queries by matchers. The first stage parses the regular expression using prefix trees from the index; it executes quickly but requires locks on the index. The second stage handles posting operations, which are resource-intensive due to data decoding and set operations on series IDs. Separating these stages reduces write-locking time and increases read parallelism, since posting operations can use lightweight snapshot states without blocking appends.
- Implemented optimistic non-exclusive relabeling locks for data updates. Since new series appear infrequently, if all data in an append operation is already cached in relabeling, that stage does not lock the series container or indexes; exclusive locking occurs only when new data must be added. This mechanism works only when intra-shard parallelization is enabled (disabled by default). See the sketch after this list.
- Added a mechanism for executing tasks on a specific shard instead of all shards. This capability is essential for upcoming performance improvements.
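A minimal sketch of the optimistic pattern from the relabeling item above, assuming a cache keyed by a label-set string (all names are hypothetical; the real Prom++ structures differ): the fast path takes only the shared lock, and the exclusive lock is acquired only when a new series must be inserted.

```go
package main

import (
	"fmt"
	"sync"
)

// relabelCache maps a label-set key to a series ID; illustrative only.
type relabelCache struct {
	mu     sync.RWMutex
	series map[string]uint64
	nextID uint64
}

// lookupOrAdd reads optimistically first, then re-checks under the
// exclusive lock, because another appender may have inserted the series
// between the two lock acquisitions.
func (c *relabelCache) lookupOrAdd(key string) uint64 {
	c.mu.RLock()
	id, ok := c.series[key]
	c.mu.RUnlock()
	if ok {
		return id // common case: no exclusive lock taken
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	if id, ok := c.series[key]; ok {
		return id // lost the race: another appender added it first
	}
	c.nextID++
	c.series[key] = c.nextID
	return c.nextID
}

func main() {
	c := &relabelCache{series: make(map[string]uint64)}
	fmt.Println(c.lookupOrAdd(`{__name__="up",job="node"}`)) // 1: slow path, exclusive lock
	fmt.Println(c.lookupOrAdd(`{__name__="up",job="node"}`)) // 1: fast path, shared lock only
}
```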
Enhancements
- Added metrics tracking the waiting time for locks and head rotations. These metrics improve observability of internal delays and contention, enabling better diagnostics and tuning opportunities.
- Moved lock management inside task execution rather than across the entire task duration, with the locking scope depending on task type. This can yield slight performance improvements when intra-shard parallelization is enabled by reducing unnecessary lock-holding time.
- Small performance fixes. Several parts of the code perform byte-to-string conversions; some of these were unsafe, and all were suboptimal. Both issues have been addressed (see the sketch after this list).
- Eliminated head allocations in the original TSDB. The Prometheus TSDB is used only as a historical block querier and compactor, so it is unnecessary to allocate any buffers in its head.
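An illustrative sketch of the two conversion classes behind the performance item above (the actual Prom++ call sites are not shown; `unsafe.String` and `unsafe.SliceData` require Go 1.20+):

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	b := []byte("http_requests_total")

	// Safe but allocating: copies the bytes on every conversion.
	s1 := string(b)

	// Zero-copy: no allocation, but only valid while b is alive and never
	// mutated afterwards; misusing this pattern is the "not safe" case.
	s2 := unsafe.String(unsafe.SliceData(b), len(b))

	fmt.Println(s1, s2)
}
```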