
Condition Sets can incorrectly evaluate telemetry objects that update infrequently #7992

Open · 2 of 7 tasks

akhenry opened this issue Jan 24, 2025 · 0 comments

Summary

An issue has been identified with the evaluation of condition sets in Open MCT.

It affects condition sets whose criteria are based on telemetry parameters that update only irregularly.

The impact is that condition sets can indicate an incorrect state, and thus any associated conditional styling can be incorrect.

The issue is caused by a performance optimization that short-circuits the exhaustive evaluation of all conditions as soon as one condition evaluates to true. Conditions and criteria cache their last evaluation result, which reflects the telemetry value supplied the last time they were executed, not the current value of the telemetry associated with that criterion. Because of short-circuit evaluation, a cached result can be stale, causing subsequent condition evaluations to be incorrect.

This has gone unnoticed for some time because if telemetry is updated regularly then the condition set will not stay in an incorrect state for very long before a new telemetry value arrives, causing criteria and conditions to be re-evaluated with fresh values.
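A contrived sketch of the failure mode may make this clearer. The class and method names below (`Criterion`, `ConditionSet`, `telemetryReceived`) are illustrative only, not Open MCT's actual implementation: each criterion caches the last value it saw and its last evaluation result, and the condition set stops evaluating at the first true condition.

```javascript
// Hypothetical sketch of short-circuit evaluation over cached criterion
// results; not Open MCT's real code.

class Criterion {
  constructor(telemetryId, predicate) {
    this.telemetryId = telemetryId; // parameter this criterion watches
    this.predicate = predicate;
    this.latest = undefined;        // last telemetry value seen by this criterion
    this.result = false;            // cached result of the last evaluation
  }
  evaluate() {
    this.result = this.predicate(this.latest);
    return this.result;
  }
}

class ConditionSet {
  constructor(conditions) {
    this.conditions = conditions;   // in priority order
  }
  // A datum arrived for `telemetryId`. The short-circuit optimization
  // stops at the first true condition, so criteria below that point
  // never see the new value and keep stale cached state.
  telemetryReceived(telemetryId, value) {
    for (const { name, criterion } of this.conditions) {
      if (criterion.telemetryId === telemetryId) {
        criterion.latest = value;
      }
      if (criterion.evaluate()) {
        return name;                // short-circuit: skip the rest
      }
    }
    return 'default';
  }
}

// 'pressure' updates only rarely; 'temperature' updates often.
const conditionSet = new ConditionSet([
  { name: 'OVERHEAT',    criterion: new Criterion('temperature', (v) => v > 50) },
  { name: 'PRESSURE_OK', criterion: new Criterion('pressure', (v) => v < 10) },
]);

conditionSet.telemetryReceived('pressure', 5);      // => 'PRESSURE_OK'
conditionSet.telemetryReceived('temperature', 100); // => 'OVERHEAT'
// Pressure rises to 20, but OVERHEAT is still true, so evaluation
// short-circuits and the pressure criterion never sees the new value:
conditionSet.telemetryReceived('pressure', 20);     // => 'OVERHEAT'
// Temperature drops; the pressure criterion is finally consulted, but
// against its stale cached value (5), so the set wrongly reports:
conditionSet.telemetryReceived('temperature', 10);  // => 'PRESSURE_OK' (stale!)
```

In this sketch the incorrect state persists until the infrequently updated parameter produces another datum; re-evaluating every criterion exhaustively on each update (rather than stopping at the first true condition) would avoid the stale cache.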

Environment

  • Open MCT Version: 1.2.4+

Impact Check List

  • Data loss or misrepresented data?
  • Regression? Did this used to work or has it always been broken?
  • Is there a workaround available?
  • Does this impact a critical component?
  • Is this just a visual bug with no functional impact?
  • Does this block the execution of e2e tests?
  • Does this have an impact on Performance?