
Commit 6298151

docs: Fix WAL numbering (backport release-3.3.x) (#15903)
Co-authored-by: J Stickler <[email protected]>
1 parent 712f40b commit 6298151


docs/sources/operations/storage/wal.md

Lines changed: 19 additions & 23 deletions
@@ -17,18 +17,17 @@ This section will use Kubernetes as a reference deployment paradigm in the examp
The Write Ahead Log in Loki takes a few particular tradeoffs compared to other WALs you may be familiar with. The WAL aims to add additional durability guarantees, but _not at the expense of availability_. Particularly, there are two scenarios where the WAL sacrifices these guarantees.

1. Corruption/Deletion of the WAL prior to replaying it

    In the event the WAL is corrupted/partially deleted, Loki will not be able to recover all of its data. In this case, Loki will attempt to recover any data it can, but the corruption will not prevent Loki from starting.

    You can use the Prometheus metric `loki_ingester_wal_corruptions_total` to track and alert when this happens (see the example queries after this list).

1. No space left on disk

    In the event the underlying WAL disk is full, Loki will not fail incoming writes, but neither will it log them to the WAL. In this case, the persistence guarantees across process restarts will not hold.

    You can use the Prometheus metric `loki_ingester_wal_disk_full_failures_total` to track and alert when this happens.

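As a quick way to check these counters outside of an alerting pipeline, you can query them through the Prometheus HTTP API. This is a minimal sketch, assuming a Prometheus server reachable at `http://prometheus.example:9090` (the URL is illustrative); in practice you would wire the same expressions into alerting rules.

```bash
# Any recent increase means an ingester replayed a corrupted/partially deleted WAL.
curl -sG 'http://prometheus.example:9090/api/v1/query' \
  --data-urlencode 'query=increase(loki_ingester_wal_corruptions_total[1h]) > 0'

# Any recent increase means an ingester could not write to the WAL (disk full).
curl -sG 'http://prometheus.example:9090/api/v1/query' \
  --data-urlencode 'query=increase(loki_ingester_wal_disk_full_failures_total[1h]) > 0'
```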
### Backpressure

@@ -47,18 +46,16 @@ The following metrics are available for monitoring the WAL:
1. Since ingesters need to have the same persistent volume across restarts/rollout, all the ingesters should be run on [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) with fixed volumes.

1. The following flags need to be set (an example invocation follows this list):
    * `--ingester.wal-enabled` to `true`, which enables writing to the WAL during ingestion.
    * `--ingester.wal-dir` to the directory where the WAL data should be stored and/or recovered from. Note that this should be on the mounted volume.
    * `--ingester.checkpoint-duration` to the interval at which checkpoints should be created.
    * `--ingester.wal-replay-memory-ceiling` (default 4GB) may be set higher/lower depending on your resource settings. It handles memory pressure during WAL replays, allowing a WAL many times larger than available memory to be replayed. This is provided to minimize reconciliation time after very bad situations, i.e. an outage, and will likely not impact regular operations/rollouts _at all_. We suggest setting this to a high percentage (~75%) of available memory.

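For example, an ingester could be started with flags along these lines. This is only a sketch: the config file path, WAL directory, and replay ceiling value are illustrative and should match your own volume mount and memory limits.

```bash
# The WAL directory must live on the persistent volume described above.
loki \
  --config.file=/etc/loki/config.yaml \
  --target=ingester \
  --ingester.wal-enabled=true \
  --ingester.wal-dir=/loki/wal \
  --ingester.checkpoint-duration=5m \
  --ingester.wal-replay-memory-ceiling=12GB
```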
## Changes in lifecycle when WAL is enabled
Flushing of data to chunk store during rollouts or scale down is disabled. This is because during a rollout of a StatefulSet there are no ingesters that are simultaneously leaving and joining; rather, the same ingester is shut down and brought back again with updated config. Hence flushing is skipped and the data is recovered from the WAL. If you need to ensure that data is always flushed to the chunk store when your pod shuts down, you can set the `--ingester.flush-on-shutdown` flag to `true`.

## Disk space requirements

Based on tests in real world:
@@ -67,7 +64,7 @@ Based on tests in real world:
* Checkpoint period was 5mins.
* disk utilization on a WAL-only disk was steady at ~10-15GB.

You should not target 100% disk utilization.
## Migrating from stateless deployments

@@ -76,17 +73,17 @@ The ingester _Deployment without WAL_ and _StatefulSet with WAL_ should be scale
Let's take an example of 4 ingesters. The migration would look something like this (a kubectl sketch follows the list):

1. Bring up one stateful ingester `ingester-0` and wait until it's ready (accepting read and write requests).
1. Scale down the old ingester deployment to 3 and wait until the leaving ingester flushes all the data to chunk store.
1. Once that ingester has disappeared from `kc get pods ...`, add another stateful ingester and wait until it's ready. Now you have `ingester-0` and `ingester-1`.
1. Repeat step 2 to remove another ingester from the old deployment.
1. Repeat step 3 to add another stateful ingester. Now you have `ingester-0 ingester-1 ingester-2`.
1. Repeat steps 4 and 5, and now you will finally have `ingester-0 ingester-1 ingester-2 ingester-3`.

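A rough sketch of the first few steps, assuming the new StatefulSet is named `ingester` and the old Deployment `ingester-old` (names, namespace, and how you confirm the flush are all deployment-specific):

```bash
# Step 1: bring up the first stateful ingester and wait until it is ready.
kubectl -n loki scale statefulset/ingester --replicas=1
kubectl -n loki rollout status statefulset/ingester

# Step 2: scale the old deployment down by one and wait for the leaving
# ingester to flush its data to the chunk store (watch its logs/metrics).
kubectl -n loki scale deployment/ingester-old --replicas=3

# Step 3: once that pod has disappeared, add the next stateful ingester.
kubectl -n loki get pods -w
kubectl -n loki scale statefulset/ingester --replicas=2

# Repeat until all four stateful ingesters are running and the old deployment is gone.
```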
## How to scale up/down
### Scale up

Scaling up is the same as what you would do without WAL or StatefulSets. Nothing to change here.

### Scale down
@@ -100,12 +97,11 @@ After hitting the endpoint for `ingester-2 ingester-3`, scale down the ingesters
Also you can set the `--ingester.flush-on-shutdown` flag to `true`. This enables chunks to be flushed to long-term storage when the ingester is shut down.

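If you prefer to trigger that flush by hand before scaling down, rather than relying on `--ingester.flush-on-shutdown`, one option is to port-forward to the ingester Pod and call the `/flush_shutdown` endpoint referenced elsewhere in this document. The namespace, pod name, and port below are illustrative, and the exact endpoint path can vary between Loki versions, so check the HTTP API reference for your release.

```bash
# Forward the ingester's HTTP port (3100 is Loki's default) to localhost.
kubectl -n loki port-forward pod/ingester-3 3100:3100

# In a second terminal: ask the ingester to flush its chunks and leave the ring.
curl -X POST http://127.0.0.1:3100/flush_shutdown
```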
## Additional notes

### Kubernetes hacking

StatefulSets are significantly more cumbersome to work with, upgrade, and so on. Much of this stems from immutable fields on the specification. For example, if one wants to start using the WAL with single store Loki and wants separate volume mounts for the WAL and the boltdb-shipper, you may see immutability errors when attempting to update the Kubernetes StatefulSets.

In this case, try `kubectl -n <namespace> delete sts ingester --cascade=false`.
This will leave the Pods alive but delete the StatefulSet.
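A sketch of the full sequence (namespace, manifest path, and pod names are illustrative; on recent kubectl versions `--cascade=orphan` replaces `--cascade=false`):

```bash
# Delete only the StatefulSet object; the ingester Pods keep running.
kubectl -n loki delete sts ingester --cascade=orphan

# Recreate the StatefulSet from the updated manifest; it adopts the running Pods.
kubectl -n loki apply -f ingester-statefulset.yaml

# Replace the Pods one at a time so they pick up the new spec,
# waiting for each one to become Ready before deleting the next.
kubectl -n loki delete pod ingester-3
kubectl -n loki get pods -w
```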
@@ -115,16 +111,16 @@ Then you may recreate the (updated) StatefulSet and one-by-one start deleting th
1. **StatefulSets for Ordered Scaling Down**: The Loki ingesters should be scaled down one by one, which is efficiently handled by Kubernetes StatefulSets. This ensures an ordered and reliable scaling process, as described in the [Deployment and Scaling Guarantees](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees) documentation.

1. **Using PreStop Lifecycle Hook**: During the Pod scaling down process, the PreStop [lifecycle hook](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) triggers the `/flush_shutdown` endpoint on the ingester. This action flushes the chunks and removes the ingester from the ring, allowing it to register as unready and become eligible for deletion.

1. **Using terminationGracePeriodSeconds**: Provides time for the ingester to flush its data before being deleted. If flushing the data takes more than 30 minutes, you may need to increase it.

1. **Cleaning Persistent Volumes**: Persistent volumes are automatically cleaned up by leveraging the [enableStatefulSetAutoDeletePVC](https://kubernetes.io/blog/2021/12/16/kubernetes-1-23-statefulset-pvc-auto-deletion/) feature in Kubernetes.

By following the above steps, you can ensure a smooth scaling down process for the Loki ingesters while maintaining data integrity and minimizing potential disruptions.
### Non-Kubernetes or baremetal deployments

* When the ingester restarts for any reason (upgrade, crash, etc.), it should be able to attach to the same volume in order to recover the WAL and tokens.
* Two ingesters should not be working with the same volume/directory for the WAL.
* A rollout should bring down an ingester completely and then start the new ingester, not the other way around.
