Import and update existing scenario for the Unyt app #512
Conversation
Walkthrough

This PR imports and adapts an existing Unyt scenario for Wind Tunnel, adding a new `unyt_chain_transaction` scenario.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 5 passed
The following will be added to the changelog:

[0.6.1] - 2026-02-12
- Features
- CI
- Refactor
- Documentation
Force-pushed d809257 to 73c64b6 (compare)
```rust
/// Allows fine-grained app installation which gives you more control over what is installed.
pub fn install_app_custom<SV>(
```

This is required by the Unyt scenario, but it feels okay to make it common so that future scenarios can use it? If we keep this I should add more doc-comments.
Force-pushed 73c64b6 to 9307315 (compare)
```rust
/// Allows fine-grained app installation which gives you more control over what is installed.
pub fn install_app_custom<SV>(
```

The original function from Unyt can be found here; it was modifying the DNA properties and then re-building the hApp, but, AFAIK, this can be achieved with the `roles_settings` field in the `InstallAppPayload` struct?
Force-pushed 6440ba8 to a7bb995 (compare)
```rust
};
use serde_json::Value;

// todo: move to rave_engine
```

Check if this is done yet.

It doesn't appear to have been added.
```diff
@@ -0,0 +1,36 @@
+[package]
+name = "unyt_chain_transaction"
```

Does this name make sense? I took the idea from the README.
Force-pushed edf16fb to f755028 (compare)
Force-pushed f755028 to e1a5a55 (compare)
Force-pushed e1a5a55 to e84aa35 (compare)
Force-pushed e84aa35 to 23c42ce (compare)
✔️ 456e315...23c42ce — Conventional commits check succeeded.
Actionable comments posted: 12
🤖 Fix all issues with AI agents
In @.github/workflows/nomad-common.yaml:
- Line 69: The REQUIRED_NODES jq calculation in
.github/workflows/nomad-common.yaml is using .behaviours which doesn’t exist in
our scenario JSON; update the expression that computes REQUIRED_NODES to
reference .assignments instead of .behaviours (i.e., change the jq selector used
to compute REQUIRED_NODES to use .assignments // 1 | length) so scenarios like
unyt_chain_transaction correctly count 2 required nodes.
In `@scenarios/unyt_chain_transaction/README.md`:
- Line 43: The README entry uses an inconsistent separator
(`wt.custom.final:ledger_state`)—update the documented metric name to match the
actual key emitted by the code: change the colon to a dot and ensure the suffix
matches ReportMetric::new("ledger_state") (e.g., document the metric as
`wt.custom.final.ledger_state` or, if the code emits only the base key, document
it as `ledger_state`) so the README aligns with the keys created by
ReportMetric::new("ledger_state").
In `@scenarios/unyt_chain_transaction/src/behaviour/initiate_network.rs`:
- Around line 72-73: The log text and the actual sleep duration disagree: the
log::info! call says "10 seconds" while thread::sleep(Duration::from_secs(20))
sleeps 20s; fix by making them consistent—either change the log::info! message
to "pausing for 20 seconds..." to match Duration::from_secs(20) or change
Duration::from_secs(20) to Duration::from_secs(10) so it matches the message;
update the code around the log::info! and thread::sleep calls (referencing
log::info!, thread::sleep, and Duration::from_secs(20)) accordingly, or better
yet extract the duration into a constant (e.g., PAUSE_SECS) and use that in both
the log and thread::sleep for clarity.
In `@scenarios/unyt_chain_transaction/src/behaviour/smart_agreements.rs`:
- Around line 71-89: The code currently passes all attached_transactions into
ctx.unyt_execute_saved (SAVEDExecuteInputs.links) without applying the env-var
limit number_of_links_processed and has a stray incomplete comment; update the
block handling TransactionDetails::GroupedParked so that you (1) remove/fix the
incomplete comment, and (2) truncate links to at most number_of_links_processed
before constructing SAVEDExecuteInputs (e.g., take the first N entries of
attached_transactions using number_of_links_processed) so unyt_execute_saved
receives the limited set; keep use of ea_id/request.id and other fields
unchanged.
- Around line 120-121: The comment indicates a mismatch: the code in the
smart_agreements behaviour that calculates amount_per_agent uses
Fraction::new(98, 100) which applies a 2% fee but the intent is 1% fees; update
the calculation that modifies amount_per_agent (the line using Fraction::new(98,
100)?) to use Fraction::new(99, 100)? so it retains 99% (1% fee) and keep the
same error handling/unwrap pattern already in place.
In `@scenarios/unyt_chain_transaction/src/behaviour/spend.rs`:
- Around line 17-22: The code calls ctx.unyt_get_ledger() (and logs
ctx.get().cell_id().agent_pubkey()) before the network-initialized gate, which
can cause an early zome call failure; move the ledger fetch and the associated
log statement so they run after the existing initialization check/wait-for-init
block, or wrap the call in a defensive check that ensures the network is
initialized before invoking unyt_get_ledger() (i.e., relocate the
unyt_get_ledger() call to after the wait-for-init logic or guard it with the
same initialization predicate used in the existing wait-for-init code).
- Around line 85-91: The code in spend.rs computes a Fraction from
participating_agents.len() and divides by it, which will panic or error if
participating_agents is empty; update the spend behaviour (around the
participating_agents, spendable_amount, Fraction::new(...) and amount_per_agent
computation) to explicitly handle the empty case before creating the Fraction —
e.g., if participating_agents.is_empty() then return early or propagate a clear
error/result (matching how smart_agreements.rs handles empties), otherwise
proceed to create Fraction::new(len as i64, 1) and compute amount_per_agent;
ensure any early-return uses the same error/ok conventions as the surrounding
function.
In `@scenarios/unyt_chain_transaction/src/durable_object.rs`:
- Around line 34-41: DurableObject::new currently hardcodes base_url and secret
(base_url and secret fields); change it to read these from environment variables
(e.g., getenv/var retrieval) with sensible defaults for local dev, mirroring how
NUMBER_OF_LINKS_TO_PROCESS is handled elsewhere in this scenario; update
DurableObject::new to fallback to a safe default base_url and to not embed
secrets in source—load secret from an env var and fail fast or return an error
if it is required in non-dev contexts.
- Around line 96-129: The unbounded loop inside the execute_in_place async
closure (the loop that calls self.client.get(&url).send() and inspects
response.status() to return Ok(get_response.value)) can spin forever on 404; add
a maximum retry count and/or overall timeout for obtaining the progenitor key
(use a counter like max_retries and increment per 404, or wrap with a
tokio::time::timeout) and return an error via anyhow::bail! when the limit
elapses; keep existing 2s sleep/backoff behavior between retries and include
run_id and url in the error message for diagnostics.
In `@scenarios/unyt_chain_transaction/src/main.rs`:
- Around line 37-97: The teardown closure currently uses ctx.unyt_get_ledger()?
which early-returns on error and skips cleanup; change the metrics collection to
be defensive (use ctx.unyt_get_ledger().ok() or match/if let Ok(...) to collect
what you can) and ensure uninstall_app(ctx, None) in the same closure always
runs regardless of metric collection errors; capture the metric-collection
Result (or Option) and then call uninstall_app before returning, propagating the
original metric error afterwards if needed so uninstall_app is not skipped
(refer to ctx.unyt_get_ledger, ctx.unyt_get_actionable_transactions,
ctx.unyt_get_completed_transactions and uninstall_app).
In `@scenarios/unyt_chain_transaction/src/unyt_agent.rs`:
- Around line 127-148: The variable unique_agents in collect_agents is populated
from unyt_get_code_templates_lib() without deduplication so authors can appear
multiple times; change the logic that builds unique_agents (in collect_agents)
to deduplicate before assigning to
self.get_mut().scenario_values.participating_agents — e.g., collect into a
HashSet/BTreeSet or sort+dedup the Vec of template.author, then remove self
(self_key) and progenitor (from DurableObject::new().get_progenitor_key(self))
from that deduplicated set and assign the resulting unique list back to
participating_agents.
- Around line 93-97: The call in UnytAgentExt::unyt_init is deserializing the
zome init return as String which is incorrect; update the type parameter on
self.call_zome_alliance (in unyt_init) to the real init return type (e.g.
InitCallbackResult) so it matches the transactor zome's init signature, or if
you truly don't need the return value change it to call_zome_alliance::<_,
()>("init", ()) to ignore the result; adjust the type used in the let binding
accordingly.
🧹 Nitpick comments (5)

bindings/runner/src/common.rs (1)

181-228: The debug log on line 218 lost the `app_path` context. The log message "Installing app with source" no longer includes the path being installed from, which reduces debuggability. Consider including the path:

```diff
- log::debug!("Installing app with source");
+ log::debug!("Installing app from: {}", app_path.display());
```

scenarios/unyt_chain_transaction/Cargo.toml (1)

16-18: Consider promoting `reqwest`, `rmp-serde`, and `semver` to workspace dependencies. These are pinned locally rather than using workspace-level dependency management. If other scenarios might use them, or for consistency, they could be added to the root Cargo.toml `[workspace.dependencies]`. Not blocking if these are intentionally scoped to this scenario only.

scenarios/unyt_chain_transaction/src/behaviour/initiate_network.rs (1)

30-30: Typo: "longger" → "longer".

```diff
- let days = 30; // if test run longger than this days this will need to be updated
+ let days = 30; // if test runs longer than this, this will need to be updated
```

scenarios/unyt_chain_transaction/src/behaviour/spend.rs (1)

59-64: Silently discarding errors from `unyt_accept_transaction`. `let _ = ctx.unyt_accept_transaction(...)` silently ignores any errors. While individual transaction acceptance failures may be tolerable, at minimum a warning log would help with debugging scenario issues.

```diff
 for transaction in actionable_transactions.commitment_actionable {
-    let _ = ctx.unyt_accept_transaction(AcceptTx {
+    if let Err(e) = ctx.unyt_accept_transaction(AcceptTx {
         address: transaction.id.clone().into(),
         service_network_definition: None,
-    });
+    }) {
+        log::warn!("Failed to accept transaction: {:?}", e);
+    }
 }
```

scenarios/unyt_chain_transaction/src/behaviour/smart_agreements.rs (1)

60-63: Silently discarding errors from `unyt_collect_from_saved` and `unyt_execute_saved`. Lines 62 and 82 use `let _ =` to ignore results from these calls. At least log warnings on failure for debuggability.
```
- full_arc_create_validated_zero_arc_read
- mixed_arc_get_agent_activity
- mixed_arc_must_get_agent_activity
- unyt_chain_transaction
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check the jq expression against the actual JSON
echo "=== Checking unyt_chain_transaction.json files ==="
fd "unyt_chain_transaction.json" nomad/job-variants 2>/dev/null || echo "No matches found"
echo ""
echo "=== Listing job-variants directory structure ==="
ls -la nomad/job-variants 2>/dev/null || echo "Directory not found"
echo ""
echo "=== Finding all JSON files in job-variants ==="
find nomad/job-variants -type f -name "*.json" 2>/dev/null | head -20
echo ""
echo "=== Check canonical directory structure ==="
ls -la nomad/job-variants/canonical 2>/dev/null || echo "Directory not found"
```

Repository: holochain/wind-tunnel
Length of output: 2036
🏁 Script executed:

```shell
#!/bin/bash
echo "=== canonical/unyt_chain_transaction.json ==="
cat nomad/job-variants/canonical/unyt_chain_transaction.json
echo ""
echo "=== demo/vars/unyt_chain_transaction.json ==="
cat nomad/job-variants/demo/vars/unyt_chain_transaction.json
echo ""
echo "=== Testing jq expression on demo variant ==="
jq '.behaviours // 1 | length' nomad/job-variants/demo/vars/unyt_chain_transaction.json
echo ""
echo "=== Checking what top-level keys exist in demo variant ==="
jq 'keys' nomad/job-variants/demo/vars/unyt_chain_transaction.json
echo ""
echo "=== Sample check on other scenarios ==="
jq 'keys' nomad/job-variants/demo/vars/dht_sync_lag.json
```

Repository: holochain/wind-tunnel
Length of output: 849
🏁 Script executed:

```shell
#!/bin/bash
echo "=== Extracting the relevant workflow section ==="
sed -n '80,95p' .github/workflows/nomad-common.yaml
echo ""
echo "=== Check if any scenarios actually have a behaviours field at top level ==="
find nomad/job-variants -name "*.json" -type f -exec sh -c 'jq "has(\"behaviours\")" "$1" | grep -q "true" && echo "$1"' _ {} \;
echo ""
echo "=== List of all scenario JSON files with their structure ==="
find nomad/job-variants -name "*.json" -type f ! -name "unyt_chain_transaction.json" | head -5 | xargs -I {} sh -c 'echo "=== {} ==="; jq "{has_behaviours: has(\"behaviours\"), has_assignments: has(\"assignments\"), has_min_agents: has(\"min_agents\")}" {}'
```

Repository: holochain/wind-tunnel
Length of output: 1798
🏁 Script executed:

```shell
#!/bin/bash
echo "=== Testing different jq expressions on unyt_chain_transaction ==="
FILE="nomad/job-variants/demo/vars/unyt_chain_transaction.json"
echo "Current (broken): .behaviours // 1 | length"
jq '.behaviours // 1 | length' "$FILE"
echo ""
echo "Possible fix 1: .assignments | length"
jq '.assignments | length' "$FILE"
echo ""
echo "Possible fix 2: .assignments // 1 | length"
jq '.assignments // 1 | length' "$FILE"
echo ""
echo "=== Checking if .assignments | length makes sense for all scenarios ==="
find nomad/job-variants -path "*demo/vars/*.json" -type f | while read f; do
  echo "$(basename "$f"): $(jq '.assignments | length' "$f" 2>/dev/null || echo 'null')"
done
```

Repository: holochain/wind-tunnel
Length of output: 1027
🏁 Script executed:

```shell
#!/bin/bash
echo "=== Checking if nomad-common.yaml is modified in this PR ==="
git diff --name-only HEAD~1..HEAD | grep -i nomad-common
echo ""
echo "=== Current line 86 context (the jq expression) ==="
sed -n '80,90p' .github/workflows/nomad-common.yaml
```

Repository: holochain/wind-tunnel
Length of output: 1076
Fix the workflow's REQUIRED_NODES calculation to use `.assignments` instead of `.behaviours`.

The workflow (line 86 of .github/workflows/nomad-common.yaml) parses REQUIRED_NODES using `.behaviours // 1 | length`, but all scenario JSON files use `.assignments` instead. No scenario has a `.behaviours` field, causing REQUIRED_NODES to always default to 1. For unyt_chain_transaction, this should be 2 (from the 2 assignments in the JSON), not 1. This bug affects all scenarios in the workflow and will cause Nomad scheduling failures when insufficient nodes are available. Change line 86 to use `.assignments // 1 | length`.

This bug is pre-existing and not introduced by this PR, but adding unyt_chain_transaction to the matrix exposes it for this scenario.
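The corrected selector can be sanity-checked locally. The sample JSON below is a made-up two-assignment shape that mimics the structure described for unyt_chain_transaction, not a file from the repo:

```shell
# Create a throwaway scenario file with two assignments (hypothetical sample data).
cat > /tmp/sample_scenario.json <<'EOF'
{"assignments": ["behaviour_a", "behaviour_b"]}
EOF

# The corrected selector counts 2 required nodes...
jq '.assignments // 1 | length' /tmp/sample_scenario.json

# ...while the broken one falls back to 1, because .behaviours is absent
# (null // 1 yields 1, and jq's length of the number 1 is 1).
jq '.behaviours // 1 | length' /tmp/sample_scenario.json
```

This works because jq's `//` alternative operator substitutes the default when the left side is `null` or `false`.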
```markdown
The scenario records several custom metrics:

- `wt.custom.global_definition_propagation_time`: Records the time at which the global definition becomes readable for each agent, helping measure network initialization propagation speed
- `wt.custom.final:ledger_state`: Captures the final state of the ledger at scenario teardown for analysis
```
Typo: inconsistent metric name separator.
The metric name wt.custom.final:ledger_state uses a colon, while the next metric on line 44 uses a dot (wt.custom.final.history). Looking at main.rs, the actual metric name used in code is ReportMetric::new("ledger_state"), so neither prefix exactly matches. Consider aligning the documented names with the actual metric keys emitted in code.
Agreed, I was about to comment on the same thing
@ThetaSinner Thank you for confirming! Feel free to update the metric names in the README to use consistent dot separators (e.g., wt.custom.final.ledger_state) to match the convention used on line 44 and align with the actual metric keys in the code. Let me know if you'd like me to help verify the correct metric names from the implementation.
```rust
log::info!("Network is already initialized, pausing for 10 seconds...");
thread::sleep(Duration::from_secs(20));
```
Log message says "10 seconds" but thread::sleep is 20 seconds.
```diff
- log::info!("Network is already initialized, pausing for 10 seconds...");
+ log::info!("Network is already initialized, pausing for 20 seconds...");
  thread::sleep(Duration::from_secs(20));
```
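The constant-extraction option can be sketched as below. This is a standalone illustration, not the scenario's actual code; `pause_message` is a hypothetical helper:

```rust
use std::time::Duration;

// Single source of truth so the log text and the sleep can never drift apart.
const PAUSE_SECS: u64 = 20;

fn pause_message() -> String {
    format!("Network is already initialized, pausing for {PAUSE_SECS} seconds...")
}

fn main() {
    println!("{}", pause_message());
    // The real code would sleep for the full duration:
    // thread::sleep(Duration::from_secs(PAUSE_SECS));
    let _ = Duration::from_secs(PAUSE_SECS);
}
```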
```rust
for request in requests {
    // select number of links and pass only NUMBER_OF_LINKS_PROCESSED links
    // Check the
    if let TransactionDetails::GroupedParked {
        attached_transactions,
        ..
    } = request.details
    {
        let links = attached_transactions;
        let ea_id = request.id;
        log::info!("Executing saved: {:?}", links);
        let _ = ctx.unyt_execute_saved(SAVEDExecuteInputs {
            ea_id: ea_id.into(),
            executor_inputs: json!({}),
            links,
            definition: None,
        });
    }
}
```
number_of_links_processed is not used to limit links passed to unyt_execute_saved.
The comment on line 72 says "select number of links and pass only NUMBER_OF_LINKS_PROCESSED links", but links on line 79 is set to all attached_transactions without truncation. The env-var-driven limit is only applied later in the parked spend creation loop (line 124). Either implement the link limiting here or update the comment to reflect the actual behaviour.
Also, line 73 has an incomplete comment (// Check the).
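If the limiting is implemented rather than the comment updated, the truncation step amounts to a `take(n)`. A minimal sketch with simplified stand-in types (the real code would operate on the scenario's link type):

```rust
// Keep at most `number_of_links_processed` attached transactions before
// handing them to execute_saved; extra entries are simply dropped.
fn limit_links<T>(attached_transactions: Vec<T>, number_of_links_processed: usize) -> Vec<T> {
    attached_transactions
        .into_iter()
        .take(number_of_links_processed)
        .collect()
}

fn main() {
    let links = limit_links(vec![1, 2, 3, 4, 5], 3);
    println!("{:?}", links); // first 3 entries only
}
```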
```rust
// expect 1% fees to be paid
let amount_per_agent = (amount_per_agent * Fraction::new(98, 100)?)?;
```
Comment says "1% fees" but the code deducts 2%.
Fraction::new(98, 100) retains 98%, deducting 2%, not 1%.
```diff
- // expect 1% fees to be paid
+ // expect 2% fees to be paid
  let amount_per_agent = (amount_per_agent * Fraction::new(98, 100)?)?;
```
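The arithmetic behind the mismatch, illustrated with plain integers. The scenario's `Fraction` type is assumed to behave like exact n/d multiplication; `retain` is a hypothetical helper for this sketch:

```rust
// Multiplying by n/d retains that fraction of the amount; the remainder is the fee.
fn retain(amount: i64, numerator: i64, denominator: i64) -> i64 {
    amount * numerator / denominator
}

fn main() {
    let amount = 1_000;
    // Fraction::new(98, 100) keeps 98% of the amount, i.e. a 2% fee.
    println!("{}", retain(amount, 98, 100)); // 980
    // A 1% fee, as the comment claims, would be Fraction::new(99, 100).
    println!("{}", retain(amount, 99, 100)); // 990
}
```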
```rust
impl DurableObject {
    pub fn new() -> Self {
        Self {
            base_url: "https://durable-object-tmp-storage.joel-ulahanna.workers.dev".to_string(),
            secret: "zo-el".to_string(),
            client: reqwest::Client::new(),
        }
    }
```
Hardcoded URL and secret in source code.
The base URL and secret are hardcoded. The secret "zo-el" is committed to version control, which is a security concern even for a test/demo endpoint. Consider reading these from environment variables (with sensible defaults for local dev), consistent with how NUMBER_OF_LINKS_TO_PROCESS is handled elsewhere in this scenario.
Proposed approach
```diff
 pub fn new() -> Self {
     Self {
-        base_url: "https://durable-object-tmp-storage.joel-ulahanna.workers.dev".to_string(),
-        secret: "zo-el".to_string(),
+        base_url: std::env::var("DURABLE_OBJECT_URL")
+            .unwrap_or_else(|_| "https://durable-object-tmp-storage.joel-ulahanna.workers.dev".to_string()),
+        secret: std::env::var("DURABLE_OBJECT_SECRET")
+            .unwrap_or_else(|_| "zo-el".to_string()),
         client: reqwest::Client::new(),
     }
 }
```
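The env-var-with-default pattern in isolation. `DURABLE_OBJECT_URL` is a hypothetical variable name, not one the scenario defines yet, and the fallback URL is illustrative:

```rust
// Read a config value from the environment, falling back to a local-dev default.
fn env_or(key: &str, default: &str) -> String {
    std::env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // With the variable unset, the local-dev default applies.
    let base_url = env_or("DURABLE_OBJECT_URL", "http://localhost:8787");
    println!("{}", base_url);
}
```

For a required secret, the same shape works with `std::env::var(key)?` (or an explicit error) instead of a default, so non-dev deployments fail fast rather than silently using a committed value.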
I think this could be done in a follow up, but would be good to move this into a cloudflare account under the holochain foundation and not commit the secret.
@mattyg, sounds good! Deferring this to a follow-up makes sense given the scope of this PR.
Would you like me to create an issue to track:
- Moving the Cloudflare Durable Object to a Holochain Foundation account
- Removing the hardcoded secret and loading it from environment variables instead
This way the work won't be forgotten.
```rust
.execute_in_place(async move {
    loop {
        log::debug!(
            "Attempting to get progenitor key from DurableObject: run_id={}",
            run_id
        );

        let response = self
            .client
            .get(&url)
            .send()
            .await
            .context("Failed to send GET request to DurableObject")?;

        if response.status().is_success() {
            let get_response: GetResponse = response
                .json()
                .await
                .context("Failed to parse GET response")?;

            log::debug!(
                "Successfully retrieved progenitor key: {}",
                get_response.value
            );
            return Ok(get_response.value);
        } else if response.status() == 404 {
            log::info!("Progenitor key not yet available, retrying in 2 seconds...");
            tokio::time::sleep(Duration::from_secs(2)).await;
            continue;
        } else {
            anyhow::bail!("GET request failed with status: {}", response.status());
        }
    }
})
```
Unbounded retry loop when fetching progenitor key.
The loop on line 97 retries indefinitely on 404 responses with no maximum attempt count or overall timeout. If the initiator agent crashes before posting the progenitor key, all participant agents will spin here forever.
Proposed fix: add a max retry limit
```diff
+ let max_retries = 60; // ~2 minutes at 2s intervals
+ let mut attempts = 0;
  loop {
+     attempts += 1;
+     if attempts > max_retries {
+         anyhow::bail!("Timed out waiting for progenitor key after {} attempts", max_retries);
+     }
      log::debug!(
          "Attempting to get progenitor key from DurableObject: run_id={}",
          run_id
      );
```
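The bounded-retry shape in miniature, as a synchronous sketch: `fetch_once` is a hypothetical stand-in for the real async HTTP GET, with `None` modelling a 404 "not yet available" response:

```rust
use std::time::Duration;

// Retry up to `max_retries` times, returning an error instead of spinning forever.
fn fetch_with_retries<F>(max_retries: u32, mut fetch_once: F) -> Result<String, String>
where
    F: FnMut() -> Option<String>,
{
    for _attempt in 1..=max_retries {
        if let Some(value) = fetch_once() {
            return Ok(value);
        }
        // The real scenario sleeps 2s between attempts; shortened here.
        std::thread::sleep(Duration::from_millis(1));
    }
    Err(format!(
        "timed out waiting for progenitor key after {max_retries} attempts"
    ))
}

fn main() {
    let mut calls = 0;
    let result = fetch_with_retries(5, || {
        calls += 1;
        // Succeeds on the third attempt, simulating the key appearing late.
        if calls == 3 { Some("progenitor-key".into()) } else { None }
    });
    println!("{:?}", result);
}
```

Wrapping the whole loop in `tokio::time::timeout` is the async equivalent and also bounds slow (non-404) responses.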
```rust
.use_agent_teardown(|ctx| {
    // publish final ledger state
    log::info!("Tearing down agent {}", ctx.get().cell_id().agent_pubkey());
    let ledger = ctx.unyt_get_ledger()?;
    let reporter = ctx.runner_context().reporter();
    reporter.add_custom(
        ReportMetric::new("ledger_state")
            .with_field("ledger_balance", ledger.balance.get_base_unyt().to_string())
            .with_field("ledger_fees", ledger.fees_owed.to_string())
            .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
    );
    let actuable_tx = ctx
        .unyt_get_actionable_transactions()
        .unwrap_or(Actionable {
            proposal_actionable: vec![],
            commitment_actionable: vec![],
            accept_actionable: vec![],
            reject_actionable: vec![],
        });
    let completed_tx = ctx.unyt_get_completed_transactions().unwrap_or(Completed {
        accept: vec![],
        spend: vec![],
    });

    let parked_spend = ctx.unyt_get_parked_spend().unwrap_or(vec![]);
    let executed_agreements = ctx.unyt_get_all_my_executed_saveds().unwrap_or(vec![]);
    reporter.add_custom(
        ReportMetric::new("actionable_transactions")
            .with_field("proposals", actuable_tx.proposal_actionable.len() as u64)
            .with_field(
                "commitments",
                actuable_tx.commitment_actionable.len() as u64,
            )
            .with_field("accepts", actuable_tx.accept_actionable.len() as u64)
            .with_field("rejects", actuable_tx.reject_actionable.len() as u64)
            .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
    );
    reporter.add_custom(
        ReportMetric::new("completed_transactions")
            .with_field("accepts", completed_tx.accept.len() as u64)
            .with_field("spends", completed_tx.spend.len() as u64)
            .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
    );
    reporter.add_custom(
        ReportMetric::new("parked_spends")
            .with_field("parked_spends", parked_spend.len() as u64)
            .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
    );
    reporter.add_custom(
        ReportMetric::new("executed_agreements")
            .with_field("number", executed_agreements.len() as u64)
            .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
    );
    log::info!("uninstalling agent {}", ctx.get().cell_id().agent_pubkey());
    uninstall_app(ctx, None).ok();
    log::info!(
        "done tearing down agent {}",
        ctx.get().cell_id().agent_pubkey()
    );
    Ok(())
});
```
Early return in teardown skips uninstall_app.
If ctx.unyt_get_ledger()? on line 40 fails, the ? propagates the error and uninstall_app on line 91 is never reached. Since metric collection is best-effort but app uninstallation is important for cleanup, consider restructuring so that uninstall_app always runs.
Proposed fix: collect metrics defensively, always uninstall
.use_agent_teardown(|ctx| {
- // publish final ledger state
log::info!("Tearing down agent {}", ctx.get().cell_id().agent_pubkey());
- let ledger = ctx.unyt_get_ledger()?;
- let reporter = ctx.runner_context().reporter();
- reporter.add_custom(
- ReportMetric::new("ledger_state")
- .with_field("ledger_balance", ledger.balance.get_base_unyt().to_string())
- .with_field("ledger_fees", ledger.fees_owed.to_string())
- .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
- );
+ let reporter = ctx.runner_context().reporter();
+
+ // Collect metrics defensively — don't let failures prevent uninstall
+ if let Ok(ledger) = ctx.unyt_get_ledger() {
+ reporter.add_custom(
+ ReportMetric::new("ledger_state")
+ .with_field("ledger_balance", ledger.balance.get_base_unyt().to_string())
+ .with_field("ledger_fees", ledger.fees_owed.to_string())
+ .with_tag("agent", ctx.get().cell_id().agent_pubkey().to_string()),
+ );
+ }🤖 Prompt for AI Agents
In `@scenarios/unyt_chain_transaction/src/main.rs` around lines 37 - 97, The
teardown closure currently uses ctx.unyt_get_ledger()? which early-returns on
error and skips cleanup; change the metrics collection to be defensive (use
ctx.unyt_get_ledger().ok() or match/if let Ok(...) to collect what you can) and
ensure uninstall_app(ctx, None) in the same closure always runs regardless of
metric collection errors; capture the metric-collection Result (or Option) and
then call uninstall_app before returning, propagating the original metric error
afterwards if needed so uninstall_app is not skipped (refer to
ctx.unyt_get_ledger, ctx.unyt_get_actionable_transactions,
ctx.unyt_get_completed_transactions and uninstall_app).
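The "collect metrics defensively, always clean up" pattern suggested above can be sketched in isolation. The types below are mocks standing in for the wind-tunnel context API (the real code goes through `ctx.unyt_get_ledger()` and `uninstall_app(ctx, None)`); only the control flow is the point.

```rust
// Mock metric record; the real scenario uses ReportMetric.
#[derive(Debug, PartialEq)]
struct Metric {
    name: &'static str,
    value: u64,
}

// Mock context whose ledger read may fail, like unyt_get_ledger().
struct Ctx {
    ledger_available: bool,
}

impl Ctx {
    fn get_ledger_balance(&self) -> Result<u64, String> {
        if self.ledger_available {
            Ok(42)
        } else {
            Err("ledger unavailable".to_string())
        }
    }

    // Stands in for uninstall_app(ctx, None).
    fn uninstall_app(&self) -> Result<(), String> {
        Ok(())
    }
}

fn teardown(ctx: &Ctx) -> Result<Vec<Metric>, String> {
    let mut metrics = Vec::new();

    // Best-effort metric collection: a failure here must not skip cleanup.
    if let Ok(balance) = ctx.get_ledger_balance() {
        metrics.push(Metric { name: "ledger_balance", value: balance });
    }

    // Cleanup always runs, whether or not the metrics were collected.
    ctx.uninstall_app()?;
    Ok(metrics)
}

fn main() {
    // Ledger read fails: teardown still succeeds and uninstalls.
    assert_eq!(teardown(&Ctx { ledger_available: false }).unwrap().len(), 0);
    // Ledger read succeeds: the metric is recorded as well.
    assert_eq!(teardown(&Ctx { ledger_available: true }).unwrap().len(), 1);
}
```

The same shape works for the other best-effort reads in the teardown (`unyt_get_actionable_transactions`, `unyt_get_parked_spend`, and so on): wrap each in `if let Ok(...)`, and keep the single `?` for the cleanup call at the end.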
| impl UnytAgentExt for AgentContext<HolochainRunnerContext, HolochainAgentContext<ScenarioValues>> { | ||
| fn unyt_init(&mut self) -> HookResult { | ||
| let _ = self.call_zome_alliance::<_, String>("init", ())?; | ||
| Ok(()) | ||
| } |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for call_zome patterns with "init" to understand return types
rg -n 'call_zome.*"init"' --type=rust -C3Repository: holochain/wind-tunnel
Length of output: 676
🏁 Script executed:
#!/bin/bash
# Also search for the zome function definition to see what it actually returns
rg -n 'fn init' --type=rust -A5 | head -50Repository: holochain/wind-tunnel
Length of output: 3363
🏁 Script executed:
#!/bin/bash
# Look for zome function signatures in common locations
find . -name "*.rs" -type f | xargs grep -l "\"init\"" | head -20Repository: holochain/wind-tunnel
Length of output: 117
🏁 Script executed:
#!/bin/bash
# Look at the unyt scenario to see which zome it calls
cat scenarios/unyt_chain_transaction/src/unyt_agent.rsRepository: holochain/wind-tunnel
Length of output: 11922
🏁 Script executed:
#!/bin/bash
# Search for InitCallbackResult type definition to understand what it is
rg -n 'InitCallbackResult' --type=rust | head -20Repository: holochain/wind-tunnel
Length of output: 1049
🏁 Script executed:
#!/bin/bash
# Check if there's a unyt-specific zome that might have a different init signature
find . -path "*/unyt*" -name "*.rs" -type fRepository: holochain/wind-tunnel
Length of output: 534
🏁 Script executed:
#!/bin/bash
# Find the transactor zome definition
find . -path "*/transactor*" -name "*.rs" -type fRepository: holochain/wind-tunnel
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Search for transactor zome and its init function
rg -n 'zome.*transactor' --type=rustRepository: holochain/wind-tunnel
Length of output: 189
🏁 Script executed:
#!/bin/bash
# Search for where the transactor zome is defined
find . -name "Cargo.toml" | xargs grep -l "transactor" | head -10Repository: holochain/wind-tunnel
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Look for transactor in the entire codebase to understand where it comes from
rg -n "transactor" --type=rust | head -30Repository: holochain/wind-tunnel
Length of output: 189
🏁 Script executed:
#!/bin/bash
# Check if transactor is an external dependency or defined elsewhere
rg -n "transactor" --type=tomlRepository: holochain/wind-tunnel
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Look at the call_zome function to understand how zomes are called
rg -n 'fn call_zome' --type=rust -A 10Repository: holochain/wind-tunnel
Length of output: 3555
🏁 Script executed:
#!/bin/bash
# Look for transactor zome definition - it might be in a subdirectory
find . -type d -name transactorRepository: holochain/wind-tunnel
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Search more broadly for files containing "transactor" in the zomes directory
find ./zomes -type f -name "*.rs" | xargs grep -l "transactor" 2>/dev/nullRepository: holochain/wind-tunnel
Length of output: 49
🏁 Script executed:
#!/bin/bash
# Check the structure of zomes directory
ls -la zomes/Repository: holochain/wind-tunnel
Length of output: 985
🏁 Script executed:
#!/bin/bash
# Search for where transactor zome functions are defined
find . -type f -name "*.rs" -o -name "*.toml" | xargs grep -l "fn init" | head -20Repository: holochain/wind-tunnel
Length of output: 403
🏁 Script executed:
#!/bin/bash
# Look for rave_engine or transactor references in dependencies
find . -name "Cargo.toml" | xargs grep -E "rave|transactor" | head -20Repository: holochain/wind-tunnel
Length of output: 180
🏁 Script executed:
#!/bin/bash
# Check the scenario's Cargo.toml to see what dependencies it uses
cat scenarios/unyt_chain_transaction/Cargo.tomlRepository: holochain/wind-tunnel
Length of output: 1017
🏁 Script executed:
#!/bin/bash
# Search for documentation or tests about the transactor zome's init function
rg -n 'transactor.*init' --type=rust -B2 -A2Repository: holochain/wind-tunnel
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Check if there are any other calls to transactor zome to see what they expect
rg -n 'call_zome.*transactor' --type=rust -B2 -A2Repository: holochain/wind-tunnel
Length of output: 668
🏁 Script executed:
#!/bin/bash
# Look for comments or documentation about the unyt.happ or transactor zome
rg -n 'unyt\|transactor' --type=rust -i | grep -i 'comment\|doc\|return\|type' | head -20Repository: holochain/wind-tunnel
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Look for any tests or examples that use the transactor zome
find . -type f \( -name "*.rs" -o -name "*.md" \) | xargs grep -l "transactor\|unyt.happ" 2>/dev/nullRepository: holochain/wind-tunnel
Length of output: 117
🏁 Script executed:
#!/bin/bash
# Check if there's any documentation about the unyt happ or transactor zome
find . -name "*.md" -o -name "*.txt" | xargs grep -i "transactor\|unyt" 2>/dev/null | head -20Repository: holochain/wind-tunnel
Length of output: 860
🏁 Script executed:
#!/bin/bash
# Look at the happ_builder to see if there's information about how zomes are packaged
cat happ_builder/src/lib.rs | head -100Repository: holochain/wind-tunnel
Length of output: 3334
🏁 Script executed:
#!/bin/bash
# Look for how other zome calls handle their return types
rg -n 'call_zome_alliance' scenarios/unyt_chain_transaction/src/unyt_agent.rs -A1 | head -40Repository: holochain/wind-tunnel
Length of output: 1254
🏁 Script executed:
#!/bin/bash
# Check if there are patterns of ignoring return values with let _
rg -n 'let _.*call_zome' --type=rustRepository: holochain/wind-tunnel
Length of output: 1977
Change the return type from String to the correct type for the transactor zome's init function.
let _ = self.call_zome_alliance::<_, String>("init", ())?; — All zome init functions in Holochain return InitCallbackResult, not String. This will cause deserialization to fail at runtime. Either change the type annotation to match the actual return type (likely InitCallbackResult), or if the init function doesn't need to be called, use call_zome_alliance::<_, ()> instead.
🤖 Prompt for AI Agents
In `@scenarios/unyt_chain_transaction/src/unyt_agent.rs` around lines 93 - 97, The
call in UnytAgentExt::unyt_init is deserializing the zome init return as String
which is incorrect; update the type parameter on self.call_zome_alliance (in
unyt_init) to the real init return type (e.g. InitCallbackResult) so it matches
the transactor zome's init signature, or if you truly don't need the return
value change it to call_zome_alliance::<_, ()>("init", ()) to ignore the result;
adjust the type used in the let binding accordingly.
| fn collect_agents(&mut self) -> Result<(), anyhow::Error> { | ||
| const MAX_NUMBER_OF_AGENTS_NEEDED: usize = 50; | ||
| if self.get().scenario_values.participating_agents.len() < MAX_NUMBER_OF_AGENTS_NEEDED { | ||
| let code_templates = self.unyt_get_code_templates_lib()?; | ||
| // collecte unity authors of the code templates | ||
| let mut unique_agents = code_templates | ||
| .iter() | ||
| .map(|template| template.author.clone()) | ||
| .collect::<Vec<_>>(); | ||
|
| // remove yourself from the list | ||
| let self_key: AgentPubKeyB64 = self.get().cell_id().agent_pubkey().clone().into(); | ||
| unique_agents.retain(|agent| agent != &self_key); | ||
| // remove progenitor from the list | ||
| if let Ok(progenitor_key) = DurableObject::new().get_progenitor_key(self) { | ||
| let progenitor_b64: AgentPubKeyB64 = progenitor_key.into(); | ||
| unique_agents.retain(|agent| agent != &progenitor_b64); | ||
| } | ||
| self.get_mut().scenario_values.participating_agents = unique_agents; | ||
| } | ||
| Ok(()) | ||
| } |
unique_agents is not actually deduplicated.
The variable is named unique_agents (line 132) but uses a plain Vec collected from template authors without deduplication. If an agent authored multiple code templates, they'll appear multiple times, causing them to receive disproportionately more spend transactions in both spend.rs and smart_agreements.rs.
Proposed fix: deduplicate the list
- let mut unique_agents = code_templates
+ let mut unique_agents: Vec<_> = code_templates
.iter()
.map(|template| template.author.clone())
.collect::<Vec<_>>();
+ unique_agents.sort();
+ unique_agents.dedup();Or more concisely using a HashSet/BTreeSet:
+ use std::collections::BTreeSet;
- let mut unique_agents = code_templates
+ let mut unique_agents: Vec<_> = code_templates
.iter()
.map(|template| template.author.clone())
- .collect::<Vec<_>>();
+ .collect::<BTreeSet<_>>()
+ .into_iter()
+ .collect();📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| fn collect_agents(&mut self) -> Result<(), anyhow::Error> { | |
| const MAX_NUMBER_OF_AGENTS_NEEDED: usize = 50; | |
| if self.get().scenario_values.participating_agents.len() < MAX_NUMBER_OF_AGENTS_NEEDED { | |
| let code_templates = self.unyt_get_code_templates_lib()?; | |
| // collecte unity authors of the code templates | |
| let mut unique_agents = code_templates | |
| .iter() | |
| .map(|template| template.author.clone()) | |
| .collect::<Vec<_>>(); | |
| // remove yourself from the list | |
| let self_key: AgentPubKeyB64 = self.get().cell_id().agent_pubkey().clone().into(); | |
| unique_agents.retain(|agent| agent != &self_key); | |
| // remove progenitor from the list | |
| if let Ok(progenitor_key) = DurableObject::new().get_progenitor_key(self) { | |
| let progenitor_b64: AgentPubKeyB64 = progenitor_key.into(); | |
| unique_agents.retain(|agent| agent != &progenitor_b64); | |
| } | |
| self.get_mut().scenario_values.participating_agents = unique_agents; | |
| } | |
| Ok(()) | |
| } | |
| fn collect_agents(&mut self) -> Result<(), anyhow::Error> { | |
| const MAX_NUMBER_OF_AGENTS_NEEDED: usize = 50; | |
| if self.get().scenario_values.participating_agents.len() < MAX_NUMBER_OF_AGENTS_NEEDED { | |
| let code_templates = self.unyt_get_code_templates_lib()?; | |
| // collecte unity authors of the code templates | |
| let mut unique_agents: Vec<_> = code_templates | |
| .iter() | |
| .map(|template| template.author.clone()) | |
| .collect::<Vec<_>>(); | |
| unique_agents.sort(); | |
| unique_agents.dedup(); | |
| // remove yourself from the list | |
| let self_key: AgentPubKeyB64 = self.get().cell_id().agent_pubkey().clone().into(); | |
| unique_agents.retain(|agent| agent != &self_key); | |
| // remove progenitor from the list | |
| if let Ok(progenitor_key) = DurableObject::new().get_progenitor_key(self) { | |
| let progenitor_b64: AgentPubKeyB64 = progenitor_key.into(); | |
| unique_agents.retain(|agent| agent != &progenitor_b64); | |
| } | |
| self.get_mut().scenario_values.participating_agents = unique_agents; | |
| } | |
| Ok(()) | |
| } |
🤖 Prompt for AI Agents
In `@scenarios/unyt_chain_transaction/src/unyt_agent.rs` around lines 127 - 148,
The variable unique_agents in collect_agents is populated from
unyt_get_code_templates_lib() without deduplication so authors can appear
multiple times; change the logic that builds unique_agents (in collect_agents)
to deduplicate before assigning to
self.get_mut().scenario_values.participating_agents — e.g., collect into a
HashSet/BTreeSet or sort+dedup the Vec of template.author, then remove self
(self_key) and progenitor (from DurableObject::new().get_progenitor_key(self))
from that deduplicated set and assign the resulting unique list back to
participating_agents.
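Both dedup options from the suggestion above can be seen side by side in a standalone sketch, using plain `&str` keys in place of `AgentPubKeyB64`:

```rust
use std::collections::BTreeSet;

fn main() {
    // An agent who authored two code templates appears twice.
    let authors = vec!["agent_b", "agent_a", "agent_b", "agent_c"];

    // Option 1: sort + dedup. Vec::dedup() only removes *adjacent*
    // duplicates, so the sort is required first.
    let mut sorted = authors.clone();
    sorted.sort();
    sorted.dedup();
    assert_eq!(sorted, vec!["agent_a", "agent_b", "agent_c"]);

    // Option 2: round-trip through a BTreeSet, which also yields a
    // sorted, duplicate-free list.
    let unique: Vec<&str> = authors
        .iter()
        .copied()
        .collect::<BTreeSet<_>>()
        .into_iter()
        .collect();
    assert_eq!(unique, vec!["agent_a", "agent_b", "agent_c"]);
}
```

Either form keeps the subsequent `retain` calls for the self key and progenitor key working unchanged, since they operate on a `Vec` in both cases.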
ThetaSinner
left a comment
Looking good! Only minor comments from me.
I left comments on the linked diff, not realizing they wouldn't be visible. There are 3 comments here: 1718555
Great job with handling the differences in Wind Tunnel and the Unyt code.
| @@ -0,0 +1,14 @@ | |||
| { | |||
| "scenario_name": "unyt_chain_transaction", | |||
| "duration": 300, | |||
Could we run for longer on the canonical run? At least 15 minutes or 900s
| The scenario records several custom metrics: | ||
|
| - `wt.custom.global_definition_propagation_time`: Records the time at which the global definition becomes readable for each agent, helping measure network initialization propagation speed | ||
| - `wt.custom.final:ledger_state`: Captures the final state of the ledger at scenario teardown for analysis |
Agreed, I was about to comment on the same thing
| @@ -0,0 +1,36 @@ | |||
| [package] | |||
| name = "unyt_chain_transaction" | |||
mattyg
left a comment
I think we want to add the influx dashboard.
Should we also add to the summariser, summary visualiser? Or do in a follow-up?
Summary
This imports the Wind Tunnel scenario from the Unyt fork and updates it to support the latest version of Wind Tunnel and Holochain 0.6.1.
The scenario was imported directly in a single commit and then updated to actually build and run correctly in the following commit: 1718555. That commit is therefore a good starting point for seeing and reviewing the required changes.
Closes #457.
TODO:
- All new/edited/removed scenarios are reflected in summary visualiser tool (see checklist): to be done in "Update the summarizer to support Unyt scenarios" #461

Note that all commits in a PR must follow Conventional Commits before they can be merged, as these are used to generate the changelog.
Summary by CodeRabbit
Release Notes
New Features
Chores