Evaluation of heuristicsmineR's causal nets using R's pm4py package #6
Comments
Thanks for the very detailed bug report. This may be an issue in the source library PM4Py or in the R bridge package.
I tried to reproduce this and got the following results:
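A minimal sketch of such a reproduction, assuming `pn` is the converted Petri net from the report below, that `evaluation_all()` takes the log, the net and the two markings in that order, and that `"p_in"`/`"p_out"` are placeholders for the net's actual source and sink places:

```r
library(pm4py)

# Evaluate the same net against the same log twice; with a
# deterministic evaluation both results should be identical.
# Marking place names are placeholders, not the real place ids.
r1 <- evaluation_all(L_heur_1, pn, initial_marking = "p_in", final_marking = "p_out")
r2 <- evaluation_all(L_heur_1, pn, initial_marking = "p_in", final_marking = "p_out")
identical(r1, r2)
```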
This gives differences in both the fitness and the precision values between the runs.
The same happens when using the individual fitness evaluation function.
So, something is definitely wrong.
However, I see that the default evaluation relies on the token-based replay. Comparing its result with the alignment-based variant gives:
I will ask at the PM4Py project whether the token replay is expected to be randomised. Maybe it makes sense to change the default to a variant that is not randomised.
With R versions 3.6.1 and 3.6.2, using pm4py's evaluation functions on heuristicsmineR's causal nets converted to Petri nets seems to give evaluation results that vary randomly.
For example, using the L_heur_1 event log provided directly with heuristicsmineR, as in https://github.com/bupaverse/heuristicsmineR, we get the following Petri net:
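Following the heuristicsmineR README, the net can be produced roughly like this (the dependency threshold of 0.7 is an assumption taken from the README example):

```r
library(heuristicsmineR)
library(petrinetR)

# Discover a causal net from the bundled example log L_heur_1
# (dependency threshold assumed; the README uses 0.7)
cn <- causal_net(L_heur_1, threshold = 0.7)

# Convert the causal net to a Petri net and render it
pn <- as.petrinet(cn)
render_PN(pn)
```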
Now, using the evaluation_all() function provided with pm4py and specifying the net's final marking directly:
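A sketch of such a call, assuming `evaluation_all()` takes the log, the Petri net and the initial/final markings in that order, with placeholder place names for the markings:

```r
library(pm4py)

# Evaluate the converted net against the original log.
# "p_in" and "p_out" are placeholders for the source and sink
# places that as.petrinet() actually creates in this net.
evaluation_all(L_heur_1,
               pn,
               initial_marking = "p_in",
               final_marking   = "p_out")
```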
The same command executed once more gives us the following result:
All values have changed, particularly the perc_fit_traces.
However, the number of distinct values the function outputs seems to be finite and to depend on the number of unique traces present in the original log.
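One way to check this is to repeat the evaluation and tabulate the distinct values of a single metric; the nested list structure of the return value (here `$fitness$perc_fit_traces`) is an assumption and may differ between package versions:

```r
# Run the evaluation repeatedly and count the distinct
# perc_fit_traces values that come back (return-value structure
# and marking place names assumed, as above)
runs <- replicate(
  20,
  evaluation_all(L_heur_1, pn,
                 initial_marking = "p_in",
                 final_marking = "p_out")$fitness$perc_fit_traces
)
table(runs)
```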