Releases: deckhouse/prompp

v0.6.2

24 Oct 13:46
0f6c5bb

Fixes

  1. Head Status Update During Rotation. Fixed an issue where the head status could remain active if storage.tsdb.retention was set to zero, such as when running in agent mode. As a result, the RemoteWrite loop did not transition to the next head.

v0.6.1

21 Oct 09:48

Fixes

  1. Empty Block Creation Check. Added validation to prevent the creation of empty historical blocks during conversion under specific conditions.
  2. Handling of Corrupted Historical Blocks. Improved handling of corrupted or empty historical blocks to prevent service crashes.
  3. Startup Error Handling. Fixed an issue where errors occurring before the TSDB initialization could lead to a deadlock, requiring a manual process termination.

v0.6.0

10 Oct 09:44
1251b44

Fixes

  1. Chunk Data Removal on Conversion. Prompptool now removes chunks_data when converting a vanilla WAL. These files may otherwise occupy a large amount of mmapped memory at runtime.

Features

  1. Unused Data Unloading. In most cases, queries touch only 6–8% of all series in TSDB. Other series can be unloaded to disk and loaded on demand. This feature can save up to 20% of RAM utilization and does not have a visible impact on querying unloaded series. If a series is queried by rules, it will not be unloaded. This feature is disabled by default and can be activated with the feature flag unload_data_storage.
  2. Omitting Out-of-Order StaleNaN Samples. Unlike vanilla Prometheus, Prom++ allows adding out-of-order samples and overwriting existing data when timestamps match. However, this behavior conflicts with the handling of StaleNaNs, which are sometimes intentionally written over existing data or with a delay to be automatically discarded if fresher data is available. Now, the mechanism for writing to past timestamps no longer applies to StaleNaNs.
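The distinction above hinges on StaleNaN being one exact NaN bit pattern, not just any NaN. The following Python sketch (the project itself is not written in Python) illustrates one plausible reading of the new append rule; `accept_sample` and its semantics are illustrative assumptions, not the actual Prom++ API:

```python
import struct

# Prometheus encodes its staleness marker as one specific NaN bit pattern;
# only an exact bit match counts as a StaleNaN, not any NaN.
STALE_NAN_BITS = 0x7FF0000000000002

def is_stale_nan(value: float) -> bool:
    # Compare the raw 64-bit pattern, since NaN != NaN under float comparison.
    return struct.unpack("<Q", struct.pack("<d", value))[0] == STALE_NAN_BITS

def accept_sample(head_max_ts: int, ts: int, value: float) -> bool:
    """Hypothetical append rule: regular samples may land at past or equal
    timestamps (overwriting on a match), while StaleNaNs are only accepted
    at strictly newer timestamps."""
    if is_stale_nan(value):
        return ts > head_max_ts
    return True  # out-of-order writes and overwrites stay allowed
```

A plain `float("nan")` has a different mantissa than the staleness marker, so it is still treated as a regular sample here.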

Enhancements

  1. Scrape Parser Optimization. A double-pass process was used for scraped data: parsing, then reading the parsed data while sharding samples. This allowed the text to be parsed once and samples to be read quickly in all shards in parallel. However, it used a substantial amount of memory due to the intermediate state of parsed samples built on top of the source bytes buffer. In this version, new compression algorithms have been added, reducing the memory requirement by up to 10%.
  2. File Caches Reduction. WAL files are read once and are then only written to. To reduce cached pages in memory, the files are reopened with the O_WRONLY flag after reading. An fadvise syscall was also added to mark written and read pages as no longer needed, reducing excessive caching.
  3. Dependency Updates. Dependencies have been updated to mitigate CVEs.
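The reopen-then-advise technique from item 2 can be sketched in a few lines. This is an illustrative Python sketch under the assumption of a Linux host (the project itself is not written in Python, and `release_wal_cache` is a hypothetical name, not the Prom++ implementation):

```python
import os
import tempfile

def release_wal_cache(path: str) -> int:
    """Reopen a fully-read WAL segment write-only and hint the kernel
    that its cached pages are no longer needed."""
    fd = os.open(path, os.O_WRONLY)  # no further reads through this fd
    # Ask the kernel to drop this file's pages from the page cache
    # (length 0 means "to the end of the file").
    if hasattr(os, "posix_fadvise"):  # Linux; absent on some platforms
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    return fd  # the caller keeps appending through this descriptor

# Tiny demonstration on a throwaway file.
_tmp_fd, _tmp_path = tempfile.mkstemp()
os.write(_tmp_fd, b"wal segment data")
os.close(_tmp_fd)
fd = release_wal_cache(_tmp_path)
wrote = os.write(fd, b"!")  # writes still succeed on the O_WRONLY fd
os.close(fd)
os.unlink(_tmp_path)
```

Dropping read access and advising `DONTNEED` lets the kernel evict the file's pages early instead of keeping them cached until memory pressure forces eviction.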

v0.6.0-rc2

06 Oct 17:09

Pre-release

Fixes

  1. Chunk Data Removal on Conversion. Prompptool now removes chunks_data when converting a vanilla WAL. These files may otherwise occupy a large amount of mmapped memory at runtime.

Features

  1. Unused Data Unloading. In most cases, queries touch only 6–8% of all series in TSDB. Other series can be unloaded to disk and loaded on demand. This feature can save up to 20% of RAM utilization and does not have a visible impact on querying unloaded series. If a series is queried by rules, it will not be unloaded. This feature is disabled by default and can be activated with the feature flag unload_data_storage.

Enhancements

  1. Scrape Parser Optimization. A double-pass process was used for scraped data: parsing, then reading the parsed data while sharding samples. This allowed the text to be parsed once and samples to be read quickly in all shards in parallel. However, it used a substantial amount of memory due to the intermediate state of parsed samples built on top of the source bytes buffer. In this version, new compression algorithms have been added, reducing the memory requirement by up to 10%.
  2. File Caches Reduction. WAL files are read once and are then only written to. To reduce cached pages in memory, the files are reopened with the O_WRONLY flag after reading.
  3. Dependency Updates. Dependencies have been updated to mitigate CVEs.

v0.5.2

06 Oct 16:51

Fixes

  1. Flushing a Corrupted Shard. On startup, all heads attempt conversion, which includes flushing buffered data to disk. This could lead to a crash on startup if a corrupted, non-persisted head was present.

v0.6.0-rc1

29 Sep 10:25
110cc35

Pre-release

Features

  1. Unused Data Unloading. In most cases, queries touch only 6–8% of all series in TSDB. Other series can be unloaded to disk and loaded on demand. This feature can save up to 20% of RAM utilization and does not have a visible impact on querying unloaded series. If a series is queried by rules, it will not be unloaded. This feature is disabled by default and can be activated with the feature flag unload_data_storage.

Enhancements

  1. Scrape Parser Optimization. A double-pass process was used for scraped data: parsing, then reading the parsed data while sharding samples. This allowed the text to be parsed once and samples to be read quickly in all shards in parallel. However, it used a substantial amount of memory due to the intermediate state of parsed samples built on top of the source bytes buffer. In this version, new compression algorithms have been added, reducing the memory requirement by up to 10%.
  2. File Caches Reduction. WAL files are read once and are then only written to. To reduce cached pages in memory, the files are reopened with the O_WRONLY flag after reading.
  3. Dependency Updates. Dependencies have been updated to mitigate CVEs.

v0.5.1

29 Sep 10:22
77d86be

Fixes

  1. Incorrect Regex Part Caching. The matcher processing pipeline contained legacy caching of regex parts keyed by pointer addresses, which led to incorrect behavior with regex patterns such as variant1|variant2|variant3. The caching had no measurable performance benefit, so it was removed.
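The pitfall can be reproduced in miniature. This Python sketch stands in for the original Go code: a cache keyed by object address (`id`, playing the role of a pointer) returns stale parts once the underlying buffer is reused for a different pattern. The class and method names are illustrative only:

```python
class AddressKeyedCache:
    """Reproduction of the bug described above: caching parsed regex
    alternatives keyed by the buffer's address instead of its contents."""

    def __init__(self):
        self._cache = {}

    def parts(self, pattern: bytearray) -> list:
        key = id(pattern)  # address-based key: the bug
        if key not in self._cache:
            self._cache[key] = pattern.decode().split("|")
        return self._cache[key]

cache = AddressKeyedCache()
buf = bytearray(b"variant1|variant2")
first = cache.parts(buf)       # parsed correctly on first use
buf[:] = b"variant3|variant4"  # same buffer reused for a new pattern
second = cache.parts(buf)      # stale hit: the old parts come back
```

Keying by content (or simply re-parsing, as the fix does) avoids the stale hit entirely.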

v0.5.0

25 Aug 15:46
3e01276

  1. Base Prometheus version bumped to 2.55.1. This unlocks the switch from Prometheus 3.x installations to Prom++.
  2. Updated dependencies to mitigate CVEs.
  3. Fixed potential problems found by static analysis.

v0.5.0-rc2

08 Aug 14:28
4e893ec

Pre-release
  1. Base Prometheus version bumped to 2.55.1. This unlocks the switch from Prometheus 3.x installations to Prom++.
  2. Updated dependencies to mitigate CVEs.

v2.53.2-0.4.0

07 Aug 11:04
9121116

Fixes

  1. Use a Non-Exclusive Lock for Head Conversion. Conversion is a long operation with disk writes. It is read-only with respect to the rotated head, so queries can run in parallel.

Features

  1. Added feature flag head_default_number_of_shards to adjust the number of shards (default is 2). Increasing the number of shards improves write operations while potentially slightly slowing down read operations and increasing memory consumption. This feature flag is temporary and will be removed in favor of automatic shard count calculation in the future.
  2. Introduced a two-stage process for series selection queries by matchers. The first stage parses the regular expression using prefix trees from the index, which executes quickly but requires locks on the index during its execution. The second stage handles posting operations, which are resource-intensive due to data decoding and set operations on series IDs. By separating these stages, write locking time is reduced and read parallelism is increased since posting operations can use lightweight snapshot states without blocking appends.
  3. Implemented optimistic non-exclusive relabeling locks for data updates. Since new series appear infrequently, if all data in an append operation is already cached in relabeling, that stage does not lock the series container or indexes. Exclusive locking occurs only when new data must be added. This mechanism works only when intra-shard parallelization is enabled (disabled by default).
  4. Added a mechanism for executing tasks on a specific shard instead of all shards. This capability is essential for upcoming performance improvements.
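The optimistic locking scheme in item 3 follows a common pattern: probe the cache without a lock, and take the exclusive lock only when the batch contains something new. A minimal Python sketch of that pattern (the real implementation is not Python, and `RelabelCache` and its method names are illustrative assumptions):

```python
import threading

class RelabelCache:
    """Optimistic appends: a batch whose series are all cached takes no
    exclusive lock; the lock is acquired only to register new series."""

    def __init__(self):
        self._ids = {}                 # series name -> series id
        self._lock = threading.Lock()  # exclusive lock for the slow path
        self._next_id = 0

    def append_batch(self, names):
        missing = [n for n in names if n not in self._ids]  # optimistic probe
        if missing:  # slow path: new series observed in this batch
            with self._lock:
                for n in missing:
                    if n not in self._ids:  # re-check under the lock
                        self._ids[n] = self._next_id
                        self._next_id += 1
        # Fast path: plain dictionary reads with no lock held.
        return [self._ids[n] for n in names]
```

The re-check under the lock is what makes the optimistic probe safe: another writer may have registered the same series between the unlocked probe and lock acquisition.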

Enhancements

  1. Added metrics tracking the waiting time for locks and head rotations. These metrics improve observability of internal delays and contention, enabling better diagnostics and tuning opportunities.
  2. Moved lock management inside task execution rather than across the entire task duration depending on task type. This change can yield slight performance improvements when intra-shard parallelization is enabled by reducing unnecessary lock holding time.
  3. Small Performance Fixes. Several parts of the code perform bytes-to-string conversions. In some places these were unsafe, and in all places they were suboptimal; both issues have been addressed.
  4. Eliminate Head Allocations in the Original TSDB. The Prometheus TSDB is used only as a historical block querier and compactor, so there is no need to allocate any buffers in its head.