Create OdinWorkflow and use new NeXus files #91

Open
wants to merge 14 commits into main

Conversation

nvaytet
Member

@nvaytet nvaytet commented Jun 25, 2025

Now that we have new NeXus files created by the mcstas-to-nexus notebook from #90, we can create an OdinWorkflow based on the generic Tof workflow from essreduce.

We replace most of the Odin notebook in the docs with a shorter and more production-like notebook.

I also moved the TBL types to a common imaging.types so odin could use them as well.

TODO: we should use the normalization functions from the imaging/normalize submodule to do the normalization at the end. But we defer this to another PR.

@nvaytet nvaytet marked this pull request as ready for review June 25, 2025 20:45
Member

@YooSunYoung YooSunYoung left a comment


The new names are clearer...!

Comment on lines +52 to +53
class DetectorWavelengthData(sciline.Scope[RunType, sc.DataArray], sc.DataArray):
"""Detector data with wavelength coordinate."""
Member

We should really try to come up with canonical naming across projects. For example, this is called CountsWavelength in essdiffraction and something like CleanWavelength in esssans.

Member Author

@nvaytet nvaytet Jul 1, 2025

Yes, which one do we think is the best? I find CleanWavelength confusing personally.

Maybe we should just be much more explicit: DetectorDataWithWavelengthCoordinate or something like that?

Member Author

I think DetectorWavelengthData came from the fact that we named the data that comes out of the tof lookup in essreduce DetectorTofData, so it seemed natural.
But I'm happy to go with Counts or anything else we find better?

Member

Yes, which one do we think is the best? I find CleanWavelength confusing personally.

Agree, it should be replaced.

Maybe we should just be much more explicit: DetectorDataWithWavelengthCoordinate or something like that?

I don't think that is going to work. (a) it is unusable and horrible UX, (b) you are never going to encode the full "contract" in the name. For me, CountsWavelength is kind of a short form of DetectorDataWithWavelengthCoordinate. No one cares it is a coordinate. It might not even be a real "coord" if we are talking about events.

Member Author

it is unusable and horrible UX

Sounds a bit harsh, but whatever...

No one cares it is a coordinate. It might not even be a real "coord" if we are talking about events.

That is true.

For me, CountsWavelength is kind of a short form of DetectorDataWithWavelengthCoordinate.

I really don't mind what we use; it was just in case we were not entirely happy with CountsWavelength. I'd just like to move forward with the PR, so I'll just go with CountsWavelength.



def choppers(source_position: sc.Variable) -> MappingProxyType[str, DiskChopper]:
"""Hard-coded DISK_CHOPPERS dictionary for ESS ODIN choppers."""
Member

I can see that. Maybe add an explanation of why that is? Will this be the only setting?

Member Author

They have not yet communicated how many settings they will have.

if (out.bins is not None) and (coord_name in out.bins.coords):
out.bins.masks[coord_name] = mask(out.bins.coords[coord_name])
else:
out.masks[coord_name] = mask(out.coords[coord_name])
Member

Won't/couldn't this lead to bin-edge masks?

Member Author

True. I copied it from ESSdiffraction.
Is it enough to check if the coord is bin edges and then apply the function on the midpoints?
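A minimal sketch of that idea (the helper name is made up, and mask stands for whatever masking function is configured; this is not code from this PR):

import scipp as sc

def _apply_coord_mask(out: sc.DataArray, coord_name: str, mask) -> None:
    """Apply a mask function to a coordinate, evaluating on midpoints for bin edges."""
    if (out.bins is not None) and (coord_name in out.bins.coords):
        out.bins.masks[coord_name] = mask(out.bins.coords[coord_name])
    else:
        coord = out.coords[coord_name]
        if out.coords.is_edges(coord_name):
            # Evaluate on midpoints so the result is not a bin-edge mask.
            coord = sc.midpoints(coord, dim=coord.dim)
        out.masks[coord_name] = mask(coord)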

Member Author

Does this mean you currently have masks on edges in the live workflow when you added a tof mask?

Member

🤔

Comment on lines 15 to 27
DarkBackgroundRun = reduce_t.BackgroundRun
DetectorData = reduce_t.DetectorData
DiskChoppers = reduce_t.DiskChoppers
OpenBeamRun = reduce_t.EmptyBeamRun
Filename = reduce_t.Filename
FrameMonitor1 = reduce_t.FrameMonitor1
FrameMonitor2 = reduce_t.FrameMonitor2
FrameMonitor3 = reduce_t.FrameMonitor3
MonitorData = reduce_t.MonitorData
NeXusDetectorName = reduce_t.NeXusDetectorName
NeXusMonitorName = reduce_t.NeXusName
NeXusComponent = reduce_t.NeXusComponent
SampleRun = reduce_t.SampleRun
Member

I wonder if we want to keep using these aliases. I remember it can lead to some confusion in visualizations and error messages?

Member Author

Imaging people use the terminology "dark" run and "open beam" run.
It would be nice if we could use these names in the workflow...

I'm happy to do away with all the other aliases.

Member Author

@nvaytet nvaytet Jun 27, 2025

If we remove the aliases, does this mean we end up with something like this in the notebooks:

from ess.reduce.nexus import types as nxs_t
from ess.reduce.time_of_flight import types as tof_t
from ess.imaging import types as img_t

wf[nxs_t.Filename[img_t.OpenBeamRun]] = odin.data.iron_simulation_ob_small()
wf[nxs_t.NeXusDetectorName] = "event_mode_detectors/timepix3"

raw_data = wf.compute(nxs_t.DetectorData[img_t.OpenBeamRun])

tof_data = wf.compute(nxs_t.DetectorTofData[img_t.OpenBeamRun])

It's not so great that we have to remember which submodule contains which type.

Can you elaborate on what the confusions were in the visualizations and error messages?

Member

Have you tried workflow.visualize() when there are aliases in place?

Member Author

So as long as the alias has the same name as the underlying type, I think it looks fine?
Screenshot_20250701_120448

But when I tried with OpenBeamRun, which uses EmptyBeamRun from essreduce under the hood, it displayed EmptyBeamRun instead:
Screenshot_20250701_120343

But I guess I should fix this by making new types here, instead of using the ones from essreduce, as you suggest in your comment further down?

Member

Yes.

Member

So as long as the alias has the same name as the underlying type, I think it looks fine?

It looks fine, but as soon as we decide to display the full module name it won't?

Comment on lines +29 to +40
DetectorLtotal = tof_t.DetectorLtotal
DetectorTofData = tof_t.DetectorTofData
PulsePeriod = tof_t.PulsePeriod
PulseStride = tof_t.PulseStride
PulseStrideOffset = tof_t.PulseStrideOffset
DistanceResolution = tof_t.DistanceResolution
TimeResolution = tof_t.TimeResolution
LtotalRange = tof_t.LtotalRange
LookupTableRelativeErrorThreshold = tof_t.LookupTableRelativeErrorThreshold
NumberOfSimulatedNeutrons = tof_t.NumberOfSimulatedNeutrons
TimeOfFlightLookupTable = tof_t.TimeOfFlightLookupTable
SimulationResults = tof_t.SimulationResults
Member

Isn't it confusing that the tof parameters are now at the same level as the others? Given the names it feels odd to have, say, ess.imaging.types.TimeResolution. Do we really need the aliases? If so, should they be prefixed, or live in a submodule?

But see also my concern above.

Member Author

This is again copied from essdiffraction...

Member

That is not really an argument for doing it?

Member Author

No, apart from the fact that it had been reviewed and merged and therefore I thought it was our currently accepted best practice 😉

Comment on lines 15 to 23
DarkBackgroundRun = reduce_t.BackgroundRun
DetectorData = reduce_t.DetectorData
DiskChoppers = reduce_t.DiskChoppers
OpenBeamRun = reduce_t.EmptyBeamRun
Filename = reduce_t.Filename
FrameMonitor1 = reduce_t.FrameMonitor1
FrameMonitor2 = reduce_t.FrameMonitor2
FrameMonitor3 = reduce_t.FrameMonitor3
MonitorData = reduce_t.MonitorData
Member

After improvements in Sciline there is no longer a need to use run or monitor types from ess.reduce: the NeXus workflow will work with any defined here. And unless we are absolutely sure that a renaming is fully opaque, renaming, e.g., run types might cause a lot of confusion.

Member Author

I'm running into an issue though. I removed the imports of RunType, SampleRun, EmptyBeamRun, etc. from essreduce.
Instead, I defined

SampleRun = NewType("SampleRun", int)
DarkBackgroundRun = NewType("DarkBackgroundRun", int)
OpenBeamRun = NewType("OpenBeamRun", int)

RunType = TypeVar("RunType", SampleRun, DarkBackgroundRun, OpenBeamRun)

Now, when I try to compute something using the OdinWorkflow, which is based on the GenericTofWorkflow, I get things like

UnsatisfiedRequirement: Missing input node 'DiskChoppers[SampleRun]'. Affects requested targets (via providers given in parentheses):
1. DiskChoppers[SampleRun] → (ess.reduce.time_of_flight.simulation.simulate_chopper_cascade_using_tof) → SimulationResults → (ess.reduce.time_of_flight.eto_to_tof.compute_tof_lookup_table) → TimeOfFlightLookupTable

because in the GenericTofWorkflow, it is expecting the SampleRun from essreduce, but I only set the one from essimaging on the workflow, so it can't find SampleRun...

Am I doing something wrong?

Member

@jl-wynen Any idea what is going on? I think you implemented and used this more.

Member

Can you point me at an example? I don't see DiskChoppers in the graph.

This may be unrelated to the type vars. We used to (incorrectly) load DiskChopper objects directly from NeXus. But we have to process the loaded data first, e.g., extract scalars from time series logs. See scipp/essreduce#242

Member Author

The workflow works fine if I use the SampleRun from essreduce instead of using a new SampleRun that I created inside ess.odin.types.

Member

Can you post an example of how you use the workflow? And is it the latest version on this branch?

Member Author

from ess.reduce import time_of_flight as tof
from ess.reduce.nexus import types as red_t
from typing import NewType, TypeVar
import scipp as sc

default_parameters = {
    tof.PulsePeriod: 1.0 / sc.scalar(14.0, unit="Hz"),
    tof.PulseStride: 2,
    tof.PulseStrideOffset: None,
    tof.LookupTableRelativeErrorThreshold: 0.1,
    tof.LtotalRange: (sc.scalar(55.0, unit="m"), sc.scalar(65.0, unit="m")),
    tof.DistanceResolution: sc.scalar(0.1, unit="m"),
    tof.TimeResolution: sc.scalar(250.0, unit="us"),
}

SampleRun = NewType("SampleRun", int)
DarkBackgroundRun = NewType("DarkBackgroundRun", int)
OpenBeamRun = NewType("OpenBeamRun", int)

RunType = TypeVar("RunType", SampleRun, DarkBackgroundRun, OpenBeamRun)

wf = tof.GenericTofWorkflow(
    run_types=[SampleRun, OpenBeamRun],
    monitor_types=[],
    tof_lut_provider=tof.TofLutProvider.TOF
)

for key, param in default_parameters.items():
    wf[key] = param

wf[red_t.DiskChoppers[SampleRun]] = {}

table = wf.compute(tof.TimeOfFlightLookupTable)

yields

UnsatisfiedRequirement: Missing input node 'DiskChoppers[SampleRun]'. Affects requested targets (via providers given in parentheses):
1. DiskChoppers[SampleRun] → (ess.reduce.time_of_flight.simulation.simulate_chopper_cascade_using_tof) → SimulationResults → (ess.reduce.time_of_flight.eto_to_tof.compute_tof_lookup_table) → TimeOfFlightLookupTable

Member

Ok, this makes sense. You define a custom SampleRun. This is a different class from what is defined in ESSreduce. But simulate_chopper_cascade_using_tof requires the type from ESSreduce. So your custom one which you use to insert DiskChoppers[SampleRun] does not match.

In general, if ESSreduce, or any upstream code, requires a specific run type, you have to use that exact class. We only have the flexibility to define our own downstream if upstream does not directly depend on them. E.g., if simulate_chopper_cascade_using_tof requested DiskChoppers[RunType], you would be fine.

All that to say, use the run types provided by ESSreduce and add your own to them when you constrain RunType. See how this is done in ESSdiffraction: https://github.com/scipp/essdiffraction/blob/9cbe15489cb16954de7388f712a9a55d32833e00/src/ess/powder/types.py#L52-L60
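Roughly, the pattern is this (a sketch only; it assumes the ESSreduce run types are exposed via ess.reduce.nexus.types, and the extra downstream run type is purely illustrative, not part of this PR):

from typing import NewType, TypeVar

from ess.reduce.nexus import types as reduce_t

# Reuse the run types that upstream ESSreduce providers depend on directly.
SampleRun = reduce_t.SampleRun
BackgroundRun = reduce_t.BackgroundRun
EmptyBeamRun = reduce_t.EmptyBeamRun

# Define only genuinely new, downstream-only run types here (hypothetical name).
ReferenceRun = NewType("ReferenceRun", int)

# Constrain RunType over the union of upstream and downstream run types.
RunType = TypeVar("RunType", SampleRun, BackgroundRun, EmptyBeamRun, ReferenceRun)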

Member Author

Does this mean that we should not have any SampleRun etc anywhere in essreduce? Only RunType?

Basically if I am unable to use SampleRun from essreduce (for a naming reason), then I cannot use the generic tof workflow?

Member Author

I guess using the choppers from the SampleRun in essreduce was just a hack, because we (I) were too lazy to either implement something that would check that chopper settings were the same for all runs, or have one table per run.

Thinking about it now, there is potentially no reason why one could not have different chopper settings for different runs in the same workflow? As long as you have converted to physical coordinates such as wavelength before you e.g. normalize, then you could have one LUT for SampleRun and another for EmptyBeamRun.

So maybe we just want to have RunType everywhere?
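A rough sketch of what "RunType everywhere" could mean, with both the chopper input and the lookup table parametrised by run type (the Scope-based domain types and the provider below are hypothetical, not the current essreduce API):

from typing import NewType, TypeVar

import sciline
import scipp as sc

SampleRun = NewType("SampleRun", int)
OpenBeamRun = NewType("OpenBeamRun", int)
RunType = TypeVar("RunType", SampleRun, OpenBeamRun)


class DiskChoppers(sciline.Scope[RunType, dict], dict):
    """Chopper settings for one run, keyed by chopper name."""


class TimeOfFlightLookupTable(sciline.Scope[RunType, sc.DataArray], sc.DataArray):
    """Lookup table derived from the chopper cascade of one run."""


def compute_lookup_table(choppers: DiskChoppers[RunType]) -> TimeOfFlightLookupTable[RunType]:
    """Hypothetical provider: each run type gets its own LUT from its own chopper settings."""
    ...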
