We recently converged on a new (stateful) protocol for live workflows (see beamlime/src/beamlime/workflow_protocols.py, lines 22 to 31 at 5185358):
"""Initialize the workflow with all static information.
The only argument we need is the path to the nexus file or
a file object that workflow can read from,
since the file carrys all the static information.
"""
...
This works, but while working on scipp/esssans#179 it became clear that something is missing: We will have to configure the underlying (non-streaming) workflow with parameters, typically given by a user, before fully creating and running the live workflow. In the linked PR this is circumvented by hard-coding for a specific run, but this is only possible because we are testing with an event stream generated from a fixed file.
I think we will want to make the base workflow (a sciline.Pipeline) somehow injectable into the live workflow, e.g., as in make_sample_run_workflow from scipp/esssans#179. As users will need to provide parameters, we should design this in a way that allows using our existing workflow widgets module. The building blocks we need are:
- The live workflow.
- The base workflow.
- Parameters for the live workflow.
- Parameters for the base workflow.
There is no 1:1 mapping between these. For example, we may have:
- Live workflow chosen from (transmission-run, sample-run).
- Base workflow chosen from (loki, skadi).
I don't know yet if this will be freely composable or if we need to define a per-workflow list (for example, a live workflow could expose a property that defines all supported base workflows); a rough sketch of the latter follows below.
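To make the per-workflow-list idea concrete, here is a minimal sketch of what an extended protocol could look like, with the base workflow injected at construction time. All names here (LiveWorkflow, supported_base_workflows, BaseWorkflowFactory, the constructor signature) are hypothetical illustrations, not the actual protocol in workflow_protocols.py:

```python
from typing import Callable, Protocol

import sciline

# Hypothetical alias: a zero-argument factory returning a configured pipeline.
BaseWorkflowFactory = Callable[[], sciline.Pipeline]


class LiveWorkflow(Protocol):
    """Hypothetical extension of the live workflow protocol."""

    # Per-workflow list of supported base workflows, e.g.,
    # (loki_workflow, skadi_workflow) for a SANS live workflow.
    supported_base_workflows: tuple[BaseWorkflowFactory, ...]

    def __init__(self, nexus_filename: str, base_workflow: sciline.Pipeline) -> None:
        """Initialize with static information and an injected, pre-configured
        base workflow (parameters already set, e.g., via the widgets)."""
```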
It is probably easiest to sit down together and just play with a few options, to see what works, then update the live workflow protocol accordingly.
I think scipp/sciline#92 could be valuable for this: We could use widgets to set up a pipeline, save the pipeline with its parameters, then run Beamlime, which can restore the pipeline. This would decouple the running of Beamlime from the widgets (which currently run only in a Jupyter Notebook).
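A minimal sketch of that decoupling, assuming we persist the user's parameter choices rather than the pipeline object itself (real sciline parameters are keyed by types, not strings; the helper names and the JSON format here are made up, and scipp/sciline#92 may eventually provide proper pipeline (de)serialization):

```python
# Hypothetical sketch: persist the parameter choices from a widget session
# so Beamlime can rebuild the pipeline later without any widgets.
import json


def save_params(params: dict[str, str], path: str) -> None:
    # Called from the Jupyter widget session after the user sets parameters.
    with open(path, 'w') as f:
        json.dump(params, f)


def load_params(path: str) -> dict[str, str]:
    # Called by Beamlime at startup to restore the saved configuration.
    with open(path) as f:
        return json.load(f)
```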
The other option would be to make a more final decision on the front-end (visualization), so pipeline config can be implemented there. I feel this design would be troublesome, though, since it would lock us more into a given front-end, making it harder to switch out later. I think we should therefore try to find a solution in which configuration of workflows can be split from running the dashboard. Maybe a pragmatic approach could be to simply write such "config" in a Python file for now?
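A pragmatic version of that Python-file idea might look like the following, where Beamlime imports a user-editable module and calls a factory function at startup (the module and function names are invented for illustration):

```python
# live_workflow_config.py -- hypothetical, user-editable configuration module.
# Beamlime would import this at startup and call make_base_workflow(), keeping
# workflow configuration independent of the dashboard front-end.
import sciline


def make_base_workflow() -> sciline.Pipeline:
    # Run-specific parameters are set here instead of via widgets; providers
    # and parameter types would come from the instrument package (e.g. esssans).
    return sciline.Pipeline(providers=[], params={})
```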