This document covers the CLI syntax for spt, a Go-based tool for benchmarking S3-compatible storage using the SPT engine.
The command structure follows the docker CLI pattern (`command subcommand [options]`) for familiarity and clear separation of concerns.
- `spt run <workload>`: Execute a benchmark test.
- `spt verify`: Validate nodes for distributed testing infrastructure readiness.
- `spt status`: Inspect live readiness and metrics snapshots for running nodes.
- `spt results`: (Stub, not yet implemented) Manage past benchmark results.
- `spt version`: Print build metadata (version, commit, build date).
These flags are available on all commands:
| Flag | Default | Description |
|---|---|---|
| `--debug` | `false` | Run in debug mode (alias for `--log-level debug`) |
| `--log-level` | `info` | Log level: `debug`, `info`, `warn`, `error` |
| `--log-file` | `spt.log` | Log file path |
| `--log-append` | `false` | Append to the existing log file instead of overwriting it |
On startup, spt loads environment variables from `$HOME/.env` and then from `./.env` (if present) using the godotenv library. Existing OS environment variables are never overridden, and for variables not already set in the OS environment, the local `./.env` takes precedence over `$HOME/.env`.
You can use these variables to avoid repeating sensitive or commonly used parameters:
- S3 connection: `S3_ENDPOINTS` (CSV) or `S3_ENDPOINT` (single), `S3_ACCESS_KEY`, `S3_SECRET_KEY`, `S3_BUCKET`
- Authentication: `S3_AUTH_VERSION` (set to `2` only for legacy targets; default `4`)
- Hosts: `HOSTS` (comma-separated list of `[user@]host`)
- Workload: `THREADS` (parallel client threads)
- Docker: `SPT_SKIP_IMAGE_PULL` (skip pulling the engine image)
- Engine tuning: `SPT_SERVICE_THREADS` (virtual-thread carrier parallelism)
- RDMA: `SPT_RDMA_ENABLED`, `RDMA_LOCAL_IP`, `RDMA_DEVICE`, `RDMA_LOG_LEVEL`, `RDMA_THRESHOLD_BYTES`, `RDMA_TIMEOUT_MS`, `RDMA_FALLBACK_ENABLED`
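For example, a `./.env` holding the common connection settings might look like this (illustrative values only; do not commit real credentials):

```shell
# Example ./.env for spt (illustrative values only)
S3_ENDPOINTS=http://s3a:9000,http://s3b:9000
S3_ACCESS_KEY=benchuser
S3_SECRET_KEY=benchsecret
S3_BUCKET=benchmark-test
THREADS=8
SPT_SKIP_IMAGE_PULL=true
```

With a file like this in place, `spt run` invocations only need workload-specific flags.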
Variable expansion: use `$VAR` or `${VAR}`. Command substitutions like `$(pwd)` are not supported; use `$PWD` instead.
Precedence: CLI flags > OS environment > `./.env` > `$HOME/.env` > built-in defaults. For endpoints specifically: `--endpoints` > `S3_ENDPOINTS` > `S3_ENDPOINT`.
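The endpoint precedence can be illustrated with a small shell sketch; `resolve_endpoints` is a hypothetical helper mirroring the documented order, not part of spt:

```shell
# Illustrative sketch of the documented endpoint precedence:
# --endpoints flag > S3_ENDPOINTS > S3_ENDPOINT.
resolve_endpoints() {
  flag_value=$1                    # value passed via --endpoints, may be empty
  if [ -n "$flag_value" ]; then
    echo "$flag_value"
  elif [ -n "$S3_ENDPOINTS" ]; then
    echo "$S3_ENDPOINTS"
  else
    echo "$S3_ENDPOINT"            # may itself be empty: no endpoint configured
  fi
}

S3_ENDPOINT=http://fallback:9000
resolve_endpoints "http://flag:9000"   # flag wins
resolve_endpoints ""                   # falls through to S3_ENDPOINT
```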
The run command executes a benchmark. Its structure is `spt run <type> [options]`, where `<type>` is a mandatory argument specifying the workload.
| Type | Status | Description |
|---|---|---|
| `write` | Implemented | Create objects to measure ingest performance |
| `list` | Implemented | Enumerate existing objects and report listing throughput |
| `read` | Implemented | Read pre-existing objects to measure read performance |
| `mock` | Implemented | Exercise the CLI with in-memory drivers (no S3 required) |
| `tables` | Implemented | Benchmark S3 Tables (Iceberg) operations; see S3_TABLES.md |
| `mixed` | Planned | Test with a specified mix of read and write operations |
| `delete` | Planned | Measure object deletion performance |
Flags are grouped by function for clarity.
The following connection flags are required for S3 workloads and are optional or ignored for `mock`.
| Flag | Short | Default | Description |
|---|---|---|---|
| `--endpoints` | `-e` | (required) | One or more S3 endpoint URLs (comma-separated or repeatable) |
| `--access-key` | `-a` | (required) | S3 access key credential |
| `--secret-key` | `-s` | (required) | S3 secret key credential |
| `--bucket` | `-b` | (required) | Target bucket to use for the test |
| `--prefix` | | `""` | Optional object key prefix (list workload only) |
| `--auth-version` | | `4` | S3 signature version (`2` or `4`) |
| `--slice-endpoints` | | `false` | Partition endpoints across nodes in distributed runs |
| Flag | Short | Default | Description |
|---|---|---|---|
| `--threads` | `-t` | `1` | Number of parallel client threads |
| `--object-size` | `-o` | `""` | Size of each object (e.g., 1MB, 256KB, 4GB); ignored for `list` |
| `--object-count` | `-n` | `0` | Fixed number of objects to process |
| `--duration` | `-d` | `""` | Fixed time duration (e.g., 5m, 1h) |
| `--seed-objects` | | `2500` | Objects to pre-create for read benchmarks |

Typically you specify either `--object-count` or `--duration`, not both.
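Sizes such as 1MB or 256KB are human-readable strings. As a rough illustration of what such parsing involves (this is not spt's actual parser, and it assumes binary units, i.e. 1KB = 1024 bytes):

```shell
# Hypothetical converter from a human-readable size (1MB, 256KB, 4GB) to bytes.
# Assumes binary units; spt's real parser may interpret suffixes differently.
to_bytes() {
  n=${1%[KMGT]B}                 # numeric part, e.g. 256
  unit=${1#"$n"}                 # suffix part, e.g. KB (empty for plain bytes)
  case $unit in
    KB) echo $((n * 1024)) ;;
    MB) echo $((n * 1024 * 1024)) ;;
    GB) echo $((n * 1024 * 1024 * 1024)) ;;
    TB) echo $((n * 1024 * 1024 * 1024 * 1024)) ;;
    *)  echo "$n" ;;             # no suffix: treat as a plain byte count
  esac
}

to_bytes 1MB    # 1048576
to_bytes 256KB  # 262144
```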
| Flag | Short | Default | Description |
|---|---|---|---|
| `--cleanup` | | `false` | Delete created objects after the test completes |
| `--generate-only` | | `false` | Generate the scenario file without running it |
| `--auto-terminate-seconds` | | `0` | Auto-terminate headless runs after N seconds (`0` = unlimited) |
| `--keep-scenario` | | `false` | Keep the scenario file after test completion |
| `--force` | | `false` | Automatically resolve port conflicts without prompting |
| `--api-port` | | `9999` | SPT engine API port |
| `--skip-image-pull` | | `false` | Use the locally cached Docker image without pulling |
| `--output-dir` | `-O` | `""` | Local directory to save detailed SPT report files |
| `--service-threads` | | `0` | Engine virtual-thread carrier parallelism (`0` = JVM default of max(2, cpus/4)) |
| Flag | Default | Description |
|---|---|---|
| `--auto-results` | `true` | Automatically retrieve results artifacts at the end of a run |
| `--results-dir` | `./results` | Directory to write retrieved results artifacts |
| `--label` | `""` | Label for output directory naming and step ID prefix (default: `mt`) |
| `--auto-results-debug` | `false` | Enable verbose debug logs for results retrieval |
| `--shutdown-on-complete` | `true` | Request `/shutdown` on all hosts after fetching results |
| `--shutdown-linger` | `5` | Seconds to wait for `/status` linger after `/shutdown` |
| Flag | Default | Description |
|---|---|---|
| `--test-hosts` | `127.0.0.1` | Comma-separated Docker hosts: `[user@]host[,...]` |
| `--min-hosts` | `0` (all) | Minimum hosts that must connect (`0` = all must succeed) |
| `--attach-existing` | `false` | Attach to pre-started worker nodes; spt still launches the entry node |
| `--network-mode` | `host` | Docker network mode: `host` (required for RMI) or `bridge` |
| `--rmi-port-start` | `40000` | Starting port of the RMI range |
| `--rmi-port-count` | `10` | Number of RMI ports to allocate |
See S3_RDMA.md for detailed documentation, architecture, and troubleshooting.
| Flag | Default | Description |
|---|---|---|
| `--use-rdma` | `false` | Use the RDMA-accelerated S3 driver (requires RDMA hardware) |
| `--rdma-local-ip` | `""` | Local RDMA interface IP address |
| `--rdma-threshold` | `1MB` | Minimum object size for RDMA transfer (e.g., 0, 256KB, 4MB) |
| `--rdma-fallback` | `false` | Fall back to HTTP if RDMA initialization fails |
| `--rdma-device` | `auto` | RDMA device name, or `auto` for auto-detection |
| `--rdma-log-level` | `WARN` | RDMA native library log level |
| `--rdma-timeout-ms` | `30000` | RDMA operation timeout in milliseconds |
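How `--rdma-threshold` steers transfers can be sketched as follows. This is illustrative only: `choose_transport` is not a real spt function, and the behavior at exactly the threshold is an assumption.

```shell
# Hypothetical sketch: objects at or above the threshold (in bytes) go over
# RDMA, smaller ones over plain HTTP. Assumes the boundary is inclusive.
choose_transport() {
  object_bytes=$1
  threshold_bytes=$2
  if [ "$object_bytes" -ge "$threshold_bytes" ]; then
    echo rdma
  else
    echo http
  fi
}

choose_transport 4194304 1048576   # 4MB object, 1MB threshold -> rdma
choose_transport 262144 1048576    # 256KB object -> http
```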
These flags apply only to the tables workload type. See S3_TABLES.md for detailed documentation.
| Flag | Default | Description |
|---|---|---|
| `--test-vector` | `tps` | Test vector: `tps`, `compaction`, or `catalog` |
| `--table-bucket` | `spt-tables` | S3 Table bucket name |
| `--namespace` | `default` | Namespace within the table bucket |
| `--table-name` | `spt-bench` | Table name (auto-suffixed with a timestamp if left at the default) |
| `--concurrent-writers` | `10` | Concurrent Iceberg commit threads |
| `--commit-freq-ms` | `500` | Target milliseconds between commits per writer |
| `--target-file-size` | `64MB` | Target Parquet file size |
| `--ingest-file-size` | `100KB` | Small Parquet file size for the compaction seed |
| `--total-ingest` | `1GB` | Total data volume for the compaction seed |
| `--namespace-count` | `100` | Namespaces to create for the catalog test |
| `--tables-per-ns` | `100` | Tables per namespace for the catalog test |
| `--read-concurrency` | `10` | Concurrent catalog readers |
| `--compaction-timeout` | `4h` | Maximum wait for compaction to complete |
| `--no-provision` | `false` | Skip table bucket/namespace/table creation |
By default, spt run launches an interactive TUI. Use --headless for CI or unattended runs.
| Flag | Default | Description |
|---|---|---|
| `--headless` | `false` | Force headless (non-interactive) mode |
| `--minimal` | `false` | Start the TUI with only the live stats panel visible |
| `--verbose` | `false` | Show detailed Docker API calls and debug info (headless mode) |
| `--trace-file` | `""` | Save all output to a trace file |
| `--trace-append` | `false` | Append to an existing trace file instead of overwriting it |
```shell
# Write 1024 objects at 1MB each with 8 threads, then clean up
spt run write \
  --endpoints http://s3a:9000,http://s3b:9000 \
  --access-key "$S3_ACCESS_KEY" \
  --secret-key "$S3_SECRET_KEY" \
  --bucket benchmark-test \
  --threads 8 \
  --object-size 1MB \
  --object-count 1024 \
  --cleanup
```

```shell
# Duration-based write: write for 5 minutes, then clean up
spt run write \
  --endpoints https://s3.example.com \
  --access-key "$S3_ACCESS_KEY" \
  --secret-key "$S3_SECRET_KEY" \
  --bucket benchmark-test \
  --threads 16 \
  --object-size 1MB \
  --duration 5m \
  --cleanup
```

```shell
# Seed 5000 objects, then read them for 5 minutes
spt run read \
  --endpoints https://s3.example.com \
  --access-key "$S3_ACCESS_KEY" \
  --secret-key "$S3_SECRET_KEY" \
  --bucket benchmark-test \
  --threads 16 \
  --object-size 1MB \
  --seed-objects 5000 \
  --duration 5m \
  --cleanup
```

```shell
# List objects under a prefix with 4 threads
spt run list \
  --endpoints https://s3.example.com \
  --access-key "$S3_ACCESS_KEY" \
  --secret-key "$S3_SECRET_KEY" \
  --bucket analytics-data \
  --prefix logs/2025/09/ \
  --threads 4 \
  --auto-terminate-seconds 120
```

Notes:

- If neither `--object-count` nor `--duration` is provided, the scenario runs until stopped. Use `--auto-terminate-seconds` for unattended runs.
- The `list` workload does not modify storage, so `--cleanup` and `--object-size` are unused.
Mock mode is useful for testing spt itself or for CI where an S3 endpoint may not be available:
```shell
# Simple mock test with duration
spt run mock --duration 30s

# Mock test with custom settings
spt run mock \
  --threads 8 \
  --object-size 512KB \
  --object-count 1000
```

See S3_TABLES.md for full documentation. Quick examples:
```shell
# TPS test: 10 concurrent Iceberg writers for 5 minutes
spt run tables \
  --endpoint https://s3tables.us-east-1.amazonaws.com \
  --access-key "$AWS_ACCESS_KEY_ID" \
  --secret-key "$AWS_SECRET_ACCESS_KEY" \
  --table-bucket my-bucket \
  --test-vector tps \
  --concurrent-writers 10 \
  --duration 5m

# Catalog test: 100 namespaces x 100 tables, 5m read phase
spt run tables \
  --endpoint https://s3tables.us-east-1.amazonaws.com \
  --access-key "$AWS_ACCESS_KEY_ID" \
  --secret-key "$AWS_SECRET_ACCESS_KEY" \
  --table-bucket my-bucket \
  --test-vector catalog \
  --namespace-count 100 \
  --tables-per-ns 100 \
  --duration 5m
```

```shell
# RDMA-accelerated write test
spt run write \
  --endpoints https://ecs.example.com \
  --access-key "$S3_ACCESS_KEY" \
  --secret-key "$S3_SECRET_KEY" \
  --bucket benchmark-test \
  --threads 16 \
  --object-size 4MB \
  --duration 5m \
  --use-rdma \
  --rdma-local-ip 10.247.128.125 \
  --rdma-threshold 1MB
```

If operators have already started SPT worker containers, spt can attach to those workers and launch only the entry node:
```shell
spt run write \
  --test-hosts entry,worker1,worker2 \
  --attach-existing \
  --endpoints http://minio:9000 \
  --access-key demo \
  --secret-key demo123 \
  --bucket perf-test \
  --threads 8 \
  --object-size 1MB \
  --object-count 2000
```

Notes:

- Workers must already expose the SPT API on 9999 and the RMI registry range on the standard ports.
- spt still launches and manages the entry node; worker containers remain untouched during shutdown.
- The host list must include at least one worker; the tool enforces this when `--attach-existing` is set.
```shell
spt run write \
  --headless \
  --endpoints https://s3.example.com \
  --access-key "$S3_ACCESS_KEY" \
  --secret-key "$S3_SECRET_KEY" \
  --bucket ci-bench \
  --threads 8 \
  --object-size 1MB \
  --duration 2m \
  --auto-terminate-seconds 300 \
  --verbose
```

When you launch the interactive TUI (`spt run` without `--headless`), the Host column reflects the orchestrator's lifecycle phases:

- Red: the node is still pending; SSH/Docker contact has not succeeded yet, or the host dropped offline.
- Blue: the node has been contacted, the container is starting, or `/ready` is reachable, but metrics are not flowing yet.
- Green: the node is responding to metrics polls and streaming data successfully.
If a node regresses (for example, metrics parsing fails while `/ready` stays healthy), the indicator automatically drops from green back to blue, so you can spot transient issues without mistaking them for a full outage.
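The color logic described above can be summarized as a function of two probes. This is a sketch of the documented behavior, not spt's actual code; `host_color` is a hypothetical name.

```shell
# Host-column color as a function of two probe results:
# contacted = has SSH/Docker contact succeeded; metrics = are metrics flowing.
host_color() {
  contacted=$1   # yes/no
  metrics=$2     # yes/no
  if [ "$contacted" != yes ]; then
    echo red     # pending, or the host dropped offline
  elif [ "$metrics" != yes ]; then
    echo blue    # contacted or /ready reachable, but no metrics yet
  else
    echo green   # streaming metrics successfully
  fi
}

host_color no no    # red
host_color yes no   # blue (also where a regressed node lands)
host_color yes yes  # green
```

Note that a regression (metrics stop while contact remains) maps back to blue, matching the behavior described above.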
The verify command validates that distributed testing infrastructure is properly configured and ready for coordinated SPT benchmarks.
Distributed SPT testing requires multiple nodes to be properly configured with:
- Docker installed and accessible
- Network connectivity between nodes
- Required ports available (RMI: 1099, REST API: 9999)
- Ability to run SPT containers in node mode
The verify command performs comprehensive pre-flight checks to ensure all nodes meet these requirements.
```shell
spt verify [--test-hosts <hosts>] [options]
```

Note: If `--test-hosts` is not specified, spt first looks for a `HOSTS` environment variable (from the OS or `.env`). If that is not set, it defaults to localhost (127.0.0.1).
| Flag | Default | Description |
|---|---|---|
| `--test-hosts` | `""` (localhost) | Comma-separated list of hosts to verify |
| `--min-hosts` | `0` (all) | Minimum number of hosts that must pass |
| `--network-mode` | `host` | Docker network mode: `host` (required for RMI) or `bridge` |
| `--api-port` | `9999` | SPT API port to verify |
| `--rmi-port-start` | `40000` | Starting port of the RMI range |
| `--rmi-port-count` | `10` | Number of RMI ports to verify |
| `--force-cleanup` | `false` | Automatically clean up conflicting containers without prompting |
| `--use-rdma` | `false` | Include RDMA hardware and configuration checks |
For each specified host, the verify command:
1. Checks connectivity: local Docker API access or remote SSH connectivity
2. Validates Docker: daemon running, API accessible, version compatibility
3. Detects port conflicts: checks ports 1099 and 9999, identifies existing SPT containers
4. Starts a test container: launches SPT in node mode, validates startup and port mapping
5. Verifies services: tests the RMI registry (1099) and REST API (9999)
6. Performs cleanup: removes the test container, ensures a clean state
```shell
# Verify localhost (default)
spt verify

# Verify three remote nodes
spt verify --test-hosts "root@node1,root@node2,root@node3"

# Partial cluster readiness (2 of 4 must pass)
spt verify --test-hosts "node1,node2,node3,node4" --min-hosts 2

# Automated cleanup of conflicts
spt verify --test-hosts "test1,test2" --force-cleanup

# Include RDMA hardware checks
spt verify --test-hosts "rdma1,rdma2" --use-rdma
```

- READY: the minimum required number of nodes passed all checks
- NOT READY: insufficient nodes passed verification
The status command provides a concise snapshot of the nodes participating in a run. It polls each host's SPT API (`/ready`, `/health`, `/status`, and `/metrics/json`) with short timeouts and prints readiness, run state, and the most recent metrics sample.
```shell
spt status [--test-hosts <hosts>] [--api-port <port>]
```

| Flag | Default | Description |
|---|---|---|
| `--test-hosts` | `""` (localhost) | Comma-separated hosts to inspect |
| `--api-port` | `9999` | SPT REST API port to query |
```shell
$ spt status --test-hosts entry,worker1,worker2
Node status (port 9999)
- [entry] entry: READY (http 200, status=ready, node=entry-0)
    run: state=RUNNING, run=run-123, progress=78.5%, message="Active test"
    metrics: state=RUNNING, completion=78%, ops=1540/s, throughput=7.3MB/s
- [worker] worker1: READY (http 200, status=ready, node=worker-01)
    metrics: state=RUNNING, completion=76%, ops=1520/s
- [worker] worker2: NOT READY (http 503, status=starting, node=worker-02)
    warn: metrics probe failed: metrics/json status 503
```

- Spot-check distributed runs from another terminal without streaming full logs.
- Confirm that pre-started workers in `--attach-existing` workflows remain healthy.
- Quickly identify which node is lagging (e.g., stuck in `starting`, no metrics).
| Feature | Status |
|---|---|
| `write` workload | Implemented |
| `list` workload | Implemented |
| `read` workload | Implemented |
| `mock` workload | Implemented |
| `tables` workload (S3 Tables / Iceberg) | Implemented |
| `verify` command | Implemented |
| `status` command | Implemented |
| Post-test cleanup (`--cleanup`) | Implemented |
| Auto-results retrieval | Implemented |
| RDMA acceleration | Implemented |
| TUI live dashboard | Implemented |
| Headless / CI mode | Implemented |
| Distributed multi-host orchestration | Implemented |
| `mixed` workload | Planned |
| `delete` workload | Planned |
| `results` command | Planned (stub exists) |