Replies: 2 comments
- re-run using a specific task never worked for me either.
- a possible workaround is to put a retry on the task in your workflow if you expect network gremlins to cause failures.
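For reference, assuming an Orquesta workflow, a task-level retry along these lines is roughly what that workaround would look like (the action ref, count, and delay below are made-up placeholders):

```yaml
tasks:
  provision_host:
    action: my_pack.provision_host   # placeholder action ref
    retry:
      when: <% failed() %>           # retry only on failure
      count: 3                       # made-up retry count
      delay: 30                      # seconds between attempts
```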
I have a relatively complex workflow that is doing some provisioning work. Many of the tasks are somewhat long-running and can fail for various reasons. For this reason, all of those tasks use `with-items` so that the workflow can be re-run using the `--tasks` and `--no-reset` flags. Each subsequent task relies on the published values from the previous tasks to "know" what to do.

However, I've found that, when using re-run, the execution created by the requested re-run has an identical `result`/`output` to the original execution. The prior execution's information is visible before the requested re-run execution has even finished. This makes sense to me, in that the intention is to re-use the original workflow execution. However, it appears that the `result`/`output` of the requested re-run execution does not update the output of the original execution. Essentially, in a case with one or more items where there is a failure, there appears to be no way to actually reference the new execution's results. In the ☝️ scenario, I would respond by running:
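Roughly, a command of this shape (the execution id and task name below are placeholders, not actual values):

```shell
# Re-run only the failed task; --no-reset keeps the already-succeeded items
# of the with-items task instead of re-running all of them.
st2 execution re-run <original-execution-id> --tasks task2 --no-reset task2
```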
However, the `publish`ed values of the now-successful `task2` for `item1` are either inaccessible, resulting in `TypeError`s, or, in a simplified workflow where the output of the tasks is not consumed, the `result` of that re-run execution is identical to the original execution.

I posted about this in the #community channel in Slack, but I'm adding it here just in case there are more eyes on these discussions. The following are some minimal files that set up the scenario I've described, and their output:

`dummy_action2.yaml`:

`dummy_workflow2.yaml`:

`dummy_python_action.yaml`:

`dummy_action.py`:

☝️ I wanted to verify that I wasn't just missing something, so I created this example for my own edification.
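To give a rough idea of the shape of that setup, a minimal workflow along these lines exercises the same behaviour (an illustrative sketch assuming an Orquesta workflow; the names, action refs, and parameters are not the actual file contents):

```yaml
version: 1.0

input:
  - hosts

tasks:
  dummy_python:
    # One item per host; host1 is among the items.
    with: host in <% ctx(hosts) %>
    action: examples.dummy_python_action host=<% item(host) %>
    next:
      - when: <% succeeded() %>
        publish:
          # Per-item results that later tasks would rely on.
          - dummy_results: <% result() %>
        do: finish

  finish:
    # Terminal task that always succeeds.
    action: core.noop
```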
The `dummy_action.py` Python action uses a value in the datastore to determine whether or not a task spun out by `with-items` that includes `host1` as input succeeds or fails.
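A sketch of what such an action might look like (the base class and `action_service.get_value()` are the standard Python-runner mechanisms; the key name and failure condition are assumptions):

```python
from st2common.runners.base_action import Action


class DummyAction(Action):
    def run(self, host):
        # Read the controlling flag from the datastore; the key name is a placeholder.
        should_succeed = self.action_service.get_value('dummy_succeed')

        # Only host1 is sensitive to the flag; every other host always succeeds.
        if host == 'host1' and should_succeed != 'true':
            raise Exception('simulated failure for %s' % host)

        return {'host': host, 'ok': True}
```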
The test goes as follows. I set the datastore value to `false`, ensuring `host1` will fail, and run the action:
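Something along these lines (the key name, pack prefix, and parameters are placeholders):

```shell
# Make host1 fail on the first run.
st2 key set dummy_succeed false

# Kick off the workflow.
st2 run examples.dummy_workflow2 hosts='["host1", "host2"]'
```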
The output is as expected: the `dummy_python` task fails for `host1`, and the workflow succeeds because it ends with a `noop` action that always succeeds. I then change the datastore value to `true` so that, upon re-run, `host1` will succeed, and execute the re-run:
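Again, roughly (the key name and execution id are placeholders):

```shell
st2 key set dummy_succeed true
st2 execution re-run <original-execution-id> --tasks dummy_python --no-reset dummy_python
```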
The output of these two executions, the original and the re-run, is identical, save for timestamps and the table calling out the actions that were actually run in the workflow. Here is the `diff` output:

There are two things that prick my senses. The biggest, by far, is that the new execution's `result`/`output` is nowhere to be found. The now-`succeeded` second execution of `dummy_python_action` against `host1` has a `result`/`output`, but it is nowhere to be found unless I specifically pull it using `st2`.
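For example, something like this pulls the child execution's result directly (the id is a placeholder for the re-run's `host1` task execution):

```shell
st2 execution get <child-execution-id>
```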
The fact that the re-run execution doesn't have its `output`/`result` updated means that it isn't possible to consume that output to resume a workflow that has tasks beyond these, tasks which rely on that output.

I'm uncertain if this is intentional. It is a hindrance for my use case, so I'll need to figure out a way to work around it, but if it is in fact unintentional, I'd be happy to file a bug (and hopefully find some time to work on it). The scenario I've created above is a best case for this particular situation. As I mentioned, in cases where subsequent actions actually try to make use of the `output` from the re-run task, it often results in various `TypeError`s.

With all of this said, the TL;DR: should re-run executions be updated with the outcomes produced by the tasks they encapsulate? (I think so.)