# kotlin-retry


A multiplatform higher-order function for retrying operations that may fail.

```kotlin
val everySecondTenTimes = constantDelay<Throwable>(delayMillis = 1000L) + stopAtAttempts(10)

retry(everySecondTenTimes) {
    /* your code */
}
```

## Installation

```kotlin
repositories {
    mavenCentral()
}

dependencies {
    implementation("com.michael-bull.kotlin-retry:kotlin-retry:2.0.1")
}
```

## Introduction

IO operations often experience temporary failures that warrant re-execution, e.g. a database transaction that may fail due to a deadlock.

> “even if your application logic is correct, you must still handle the case where a transaction must be retried”
>
> — Deadlocks in InnoDB

The `retry` function simplifies this process by wrapping the application logic and applying a specified `RetryPolicy`.

In the example below, either of the calls to `customers.nameFromId` may fail, abandoning the remaining logic within the `printExchangeBetween` function. As such, we may want to retry this operation until 5 invocations in total have been executed:

```kotlin
import com.github.michaelbull.retry.policy.stopAtAttempts
import com.github.michaelbull.retry.retry
import kotlinx.coroutines.runBlocking

suspend fun printExchangeBetween(a: Long, b: Long) {
    val customer1 = customers.nameFromId(a)
    val customer2 = customers.nameFromId(b)
    println("$customer1 exchanged with $customer2")
}

val fiveTimes = stopAtAttempts<Throwable>(5)

fun main() = runBlocking {
    retry(fiveTimes) {
        printExchangeBetween(1L, 2L)
    }
}
```
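If the policy gives up before an attempt succeeds, the `retry` call itself fails and the most recent exception propagates to the caller.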

We can also provide a `RetryPolicy` that only retries failures of a specific type. The example below will retry the operation only if the reason for failure was a `SQLDataException`, pausing for 1 second before retrying and stopping after 5 total invocations.

```kotlin
import com.github.michaelbull.retry.policy.constantDelay
import com.github.michaelbull.retry.policy.continueIf
import com.github.michaelbull.retry.policy.plus
import com.github.michaelbull.retry.policy.stopAtAttempts
import com.github.michaelbull.retry.retry
import kotlinx.coroutines.runBlocking
import java.sql.SQLDataException

suspend fun printExchangeBetween(a: Long, b: Long) {
    val customer1 = customers.nameFromId(a)
    val customer2 = customers.nameFromId(b)
    println("$customer1 exchanged with $customer2")
}

val continueOnTimeout = continueIf<Throwable> { (failure) ->
    failure is SQLDataException
}

val timeoutsEverySecondFiveTimes = continueOnTimeout + constantDelay(1000) + stopAtAttempts(5)

fun main() = runBlocking {
    retry(timeoutsEverySecondFiveTimes) {
        printExchangeBetween(1L, 2L)
    }
}
```
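Because policies are plain values combined with `plus`, the same failure predicate can be paired with any delay strategy. As a sketch (not from the original README), reusing `continueOnTimeout` with the `fullJitterBackoff` strategy described in the Backoff section below:

```kotlin
// Sketch only: pair the SQLDataException predicate with a jittered
// backoff (see the Backoff section) and a cap of 5 total attempts.
val jitteredTimeouts = continueOnTimeout + fullJitterBackoff(min = 10L, max = 5000L) + stopAtAttempts(5)
```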

## Integration with kotlin-result

If the code you wish to retry returns a `Result<V, E>` instead of throwing an `Exception`, add the following dependency for access to a `Result`-based `retry` function that shares the same policy code:

```kotlin
repositories {
    mavenCentral()
}

dependencies {
    implementation("com.michael-bull.kotlin-retry:kotlin-retry-result:2.0.1")
}
```

Usage:

```kotlin
import com.github.michaelbull.result.Result
import com.github.michaelbull.retry.policy.constantDelay
import com.github.michaelbull.retry.result.retry
import kotlinx.coroutines.runBlocking

fun somethingThatCanFail(): Result<Int, DomainError> = TODO()

val everyTwoSeconds = constantDelay<DomainError>(2000)

fun main() = runBlocking {
    val result: Result<Int, DomainError> = retry(everyTwoSeconds) {
        somethingThatCanFail()
    }
}
```
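`DomainError` and `somethingThatCanFail` are left abstract above. A minimal sketch of what they might look like, assuming kotlin-result's `Ok` and `Err` constructors (the `DomainError` type itself is hypothetical, not part of either library):

```kotlin
import com.github.michaelbull.result.Err
import com.github.michaelbull.result.Ok
import com.github.michaelbull.result.Result
import kotlin.random.Random

// Hypothetical error type for illustration only.
sealed interface DomainError {
    object DatabaseUnavailable : DomainError
}

// Stands in for a flaky operation: succeeds roughly half the time.
fun somethingThatCanFail(): Result<Int, DomainError> =
    if (Random.nextBoolean()) Ok(42) else Err(DomainError.DatabaseUnavailable)
```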

## Backoff

The examples above retry executions immediately after they fail; however, we may wish to spread out retries with an ever-increasing delay. This is known as a "backoff" and comes in many forms. This library includes all the forms of backoff strategy detailed in the article by Marc Brooker on the AWS Architecture Blog, entitled "Exponential Backoff And Jitter".

### Binary Exponential

> “exponential backoff means that clients multiply their backoff by a constant after each attempt, up to some maximum value”

```
sleep = min(cap, base * 2 ** attempt)
```

```kotlin
retry(binaryExponentialBackoff(min = 10L, max = 5000L)) {
    /* code */
}
```
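For example, with `min = 10L` and `max = 5000L` the uncapped delay doubles on each attempt: 10 ms, 20 ms, 40 ms, 80 ms, and so on, until 10 · 2⁹ = 5120 exceeds the cap and every later delay is held at 5000 ms (assuming attempts are counted from zero).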

### Full Jitter

> “trying to improve the performance of a system by adding randomness ... we want to spread out the spikes to an approximately constant rate”

```
sleep = random_between(0, min(cap, base * 2 ** attempt))
```

```kotlin
retry(fullJitterBackoff(min = 10L, max = 5000L)) {
    /* code */
}
```

### Equal Jitter

> “Equal Jitter, where we always keep some of the backoff and jitter by a smaller amount”

```
temp = min(cap, base * 2 ** attempt)
sleep = temp / 2 + random_between(0, temp / 2)
```

```kotlin
retry(equalJitterBackoff(min = 10L, max = 5000L)) {
    /* code */
}
```
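The delay therefore always lands between half the capped exponential value and the full value, preserving a minimum spacing between attempts while still spreading them out.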

### Decorrelated Jitter

> “Decorrelated Jitter, which is similar to “Full Jitter”, but we also increase the maximum jitter based on the last random value”

```
sleep = min(cap, random_between(base, sleep * 3))
```

```kotlin
retry(decorrelatedJitterBackoff(min = 10L, max = 5000L)) {
    /* code */
}
```
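Unlike the other strategies, this one is stateful: each computed `sleep` feeds into the next, so successive delays wander upward at random rather than following a fixed exponential curve.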

## Inspiration

## Contributing

Bug reports and pull requests are welcome on GitHub.

## License

This project is available under the terms of the ISC license. See the LICENSE file for the copyright information and licensing terms.