By default, cegispro2 tries to compute wp-superinvariants for verifying upper bounds on weakest preexpectations, i.e., properties of the form wp[loop](post) <= prop (see our paper for details). For your example, cegispro2 does not terminate because the property to be proven does not hold: wp[loop](x) <= x is false.
The lower bound property x <= wp[loop](x), on the other hand, does hold. You can synthesize a wp-subinvariant by running
python3 -m cegispro2.cmd ./benchmarks/TACAS23_EXIST/Bin01_0.pgcl --post "x" --prop "x" --invarianttype sub
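For intuition, the proof rules behind super- and sub-invariants can be sketched as follows. This is my paraphrase of the standard wp-calculus rules, not a verbatim statement of the rules cegispro2 implements; see the paper for the exact side conditions.

```latex
% Let \Phi_f denote the characteristic function of the loop
% with respect to postexpectation f.
%
% Superinvariants certify upper bounds (Park induction):
\Phi_f(I) \preceq I \;\implies\; \mathrm{wp}[\texttt{loop}](f) \preceq I
%
% Subinvariants certify lower bounds, but only under
% additional side conditions (plain induction is unsound
% for lower bounds in general):
I \preceq \Phi_f(I) \;\implies\; I \preceq \mathrm{wp}[\texttt{loop}](f)
```

So for your example, where x <= wp[loop](x) holds but wp[loop](x) <= x does not, a subinvariant is the object you want cegispro2 to synthesize.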
(--optimizing-synthesizer is not an optimization flag; it selects a synthesizer based on Optimization Modulo Theories. It is not covered in our paper, and I recommend not using it.)
All benchmarks from our paper are included in our Zenodo artifact. Under
\cegispro2\cegispro2\benchmarks\scripts\TACAS23
you can find scripts containing all the calls to cegispro2 we used to run the benchmarks.
I wanted to know how it is possible to generate/verify expectations for these examples.
Could you please point me to some examples where expectations are computed? I see that there is an expectation module here,
Also, when running this benchmark with the optimization flag, the execution ran for more than 6 hours with no results produced.
Am I missing something here?
Thanks.