
conottle

A Java concurrency API that throttles the maximum concurrency at which any given client's tasks are processed, while also capping the total number of clients being serviced in parallel.

  • conottle is short for concurrency throttle.

User story

As an API user, I want to execute tasks for any given client with a configurable maximum concurrency, while also limiting the total number of clients being serviced in parallel.

Prerequisite

Java 8 or better

Get it...

Maven Central

Install as a compile-scope dependency in Maven or another build tool of your choice.

Use it...

API

public interface ClientTaskExecutor {
    /**
     * @param command {@link Runnable} command to run asynchronously. All such commands under the same {@code clientId}
     *     are run in parallel, albeit throttled at a maximum concurrency.
     * @param clientId A key representing a client whose tasks are throttled while running in parallel
     * @return {@link Future} holding the run status of the {@code command}
     */
    default Future<Void> execute(Runnable command, Object clientId) {
        return submit(Executors.callable(command, null), clientId);
    }

    /**
     * @param task {@link Callable} task to run asynchronously. All such tasks under the same {@code clientId} are run
     *     in parallel, albeit throttled at a maximum concurrency.
     * @param clientId A key representing a client whose tasks are throttled while running in parallel
     * @param <V> Type of the task result
     * @return {@link Future} representing the result of the {@code task}
     */
    <V> Future<V> submit(Callable<V> task, Object clientId);
}

The interface uses Future as the return type, mainly to reduce the conceptual weight of the API. The implementation actually returns CompletableFuture, which can be used or cast as such if need be.
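For instance, a minimal sketch (not taken from the conottle docs; the client id and task here are made up) of chaining on the returned Future after casting it to CompletableFuture, per the note above:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

class CompletableFutureCastSketch {
    void chain(Conottle conottle) {
        Future<String> future = conottle.submit(() -> "result", "some-client-id");
        // per the note above, the implementation returns CompletableFuture, so this cast holds
        CompletableFuture<String> completableFuture = (CompletableFuture<String>) future;
        completableFuture.thenAccept(result -> System.out.println("got: " + result));
    }
}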

Sample usage

import static org.awaitility.Awaitility.await;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

// Task (a Callable<Task>), MIN_TASK_DURATION, and the info logger are defined elsewhere in the test suite

class submit {
    Conottle conottle = Conottle.builder()
            .maxClientsInParallel(100)
            .maxParallelismPerClient(4)
            .workerExecutorService(Executors.newCachedThreadPool())
            .build();

    @Test
    void customized() {
        int clientCount = 2;
        int clientTaskCount = 10;
        List<Future<Task>> futures = new ArrayList<>(); // class Task implements Callable<Task>
        int maxActiveExecutorCount = 0;
        for (int c = 0; c < clientCount; c++) {
            String clientId = "clientId-" + (c + 1);
            for (int t = 0; t < clientTaskCount; t++) {
                futures.add(this.conottle.submit(new Task(clientId + "-task-" + t, MIN_TASK_DURATION), clientId));
                maxActiveExecutorCount = Math.max(maxActiveExecutorCount, conottle.countActiveExecutors());
            }
        }
        assertEquals(clientCount, maxActiveExecutorCount, "should be 1:1 between a client and its executor");
        int taskTotal = futures.size();
        assertEquals(clientTaskCount * clientCount, taskTotal);
        int doneCount = 0;
        for (Future<Task> future : futures) {
            if (future.isDone()) {
                doneCount++;
            }
        }
        assertTrue(doneCount < futures.size());
        info.log("not all of the {} tasks were done immediately", taskTotal);
        info.atDebug().log("{} out of {} were done", doneCount, futures.size());
        for (Future<Task> future : futures) {
            await().until(future::isDone);
        }
        info.log("all of the {} tasks were done eventually", taskTotal);
        await().until(() -> this.conottle.countActiveExecutors() == 0);
        info.log("no active executor lingers when all tasks complete");
    }

    @AfterEach
    void close() {
        this.conottle.close();
    }
}

All builder parameters are optional:

  • maxParallelismPerClient is the maximum concurrency at which one single client's tasks can execute. If omitted or set to a non-positive integer, then the default is Runtime.getRuntime().availableProcessors().
  • maxClientsInParallel is the maximum number of clients that can be serviced in parallel. If omitted or set to a non-positive integer, then the default is Runtime.getRuntime().availableProcessors().
  • workerExecutorService is the global async thread pool that services all requests for all clients. If omitted, the default is a fork-join thread pool whose capacity is Runtime.getRuntime().availableProcessors().

The API sets no technical or programmatic upper limit on the parameter values for total parallelism or for the number of clients supported. Once the values are set, the only limit is on runtime concurrency at any given moment: excess tasks or clients have to wait for active ones to run to completion before proceeding - that is the throttling effect.
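For instance, a minimal sketch (not from the conottle docs; the client id and task are made up) of building with all defaults and shutting down the same way the sample test above does:

import java.util.concurrent.Future;

class DefaultBuildSketch {
    public static void main(String[] args) throws Exception {
        // all builder parameters omitted: per-client parallelism and max clients in parallel both
        // default to Runtime.getRuntime().availableProcessors(), on a default fork-join worker pool
        Conottle conottle = Conottle.builder().build();
        try {
            Future<Void> done = conottle.execute(() -> System.out.println("hello from client-1"), "client-1");
            done.get(); // block until the single task completes
        } finally {
            conottle.close(); // same shutdown call as in the sample test above
        }
    }
}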