
metrics #2540

Open
LorenzoF6 opened this issue Jan 27, 2025 · 7 comments

Comments

@LorenzoF6

With the release of v2 of anomalib, how is it possible to calculate precision/recall/F1 score and AUROC at test time for both tasks (classification/segmentation)? I see that in the previous release I had to set some parameters on the Engine object; is it the same in v2? Can someone explain to me how it's done?

Thanks a lot

@alexriedel1
Contributor

Hey! You will find some examples of how to do evaluation with different metrics here: https://anomalib.readthedocs.io/en/latest/markdown/guides/how_to/evaluation/evaluator.html

If you have some specific questions, maybe you can share some code.

@LorenzoF6
Author

LorenzoF6 commented Jan 27, 2025

So, for example, if I write this:

```python
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.metrics import AUROC, Evaluator, F1Score
from anomalib.models import Patchcore

# Create the datamodule
datamodule = Folder(
    name="dataset_name",
    root=data_root,  # e.g. '/content/drive/MyDrive/Dataset/dataset_name'
    normal_dir="train/good",
    abnormal_dir="test/defective",
    task="classification",
    train_transform=train_transform,  # resize to (256, 256)
)

# Setup the datamodule
datamodule.setup()

# Metrics
test_metrics = [
    AUROC(fields=["pred_score", "gt_label"]),    # Image-level AUROC
    F1Score(fields=["pred_label", "gt_label"]),  # Image-level F1
]
evaluator = Evaluator(test_metrics=test_metrics)

# Set up the model and engine
model = Patchcore(evaluator=evaluator)
engine = Engine()

# Train a Patchcore model on the given datamodule
engine.fit(model=model, datamodule=datamodule)

# Test phase
test_results = engine.test(
    model=model,
    datamodule=datamodule,
    ckpt_path=engine.trainer.checkpoint_callback.best_model_path,
)
```

Is this correct? How and where is it possible to see and retrieve the metrics? And if I want to calculate recall, how can I do it?

Thanks

@alexriedel1
Contributor

alexriedel1 commented Jan 28, 2025

Yes, it should work this way. However, it looks like you are using the version 1 API (the Folder datamodule doesn't have a task or train_transform argument in v2: https://anomalib.readthedocs.io/en/v2.0.0-beta.1/markdown/guides/reference/data/datamodules/image/folder.html)
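For comparison, a v2-style construction of the same datamodule might look roughly like the sketch below (no task or train_transform arguments; the batch-size values are only illustrative and should be checked against the linked v2 Folder docs):

```python
from anomalib.data import Folder

# v2-style Folder datamodule (sketch): resizing/transforms no longer live here,
# they move to the model's pre-processor instead.
datamodule = Folder(
    name="dataset_name",
    root=data_root,  # e.g. '/content/drive/MyDrive/Dataset/dataset_name'
    normal_dir="train/good",
    abnormal_dir="test/defective",
    train_batch_size=32,
    eval_batch_size=32,
)
datamodule.setup()
```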

@LorenzoF6
Author

LorenzoF6 commented Jan 28, 2025

Sorry, my bad. In fact I have to use the pre_processor to perform the transformation and the resizing.
But does an anomalib model give basic metrics even when I don't provide the evaluator?
For recall (at image and pixel level), does anomalib provide any API or function, or do I have to calculate it myself, using for example the recall_score function from sklearn?

Thanks a lot for helping me understand!
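A minimal sketch of the pre_processor route mentioned above, assuming anomalib's PreProcessor class and a torchvision v2 Resize transform (exact import paths and arguments should be checked against the v2 docs):

```python
from torchvision.transforms.v2 import Resize

from anomalib.models import Patchcore
from anomalib.pre_processing import PreProcessor

# The resize that used to be passed as train_transform on the datamodule is
# attached to the model through a pre-processor in v2 (sketch, not verified).
pre_processor = PreProcessor(transform=Resize((256, 256)))
model = Patchcore(pre_processor=pre_processor)  # an Evaluator can still be passed alongside
```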

@alexriedel1
Contributor

Yes, you have to write a wrapper around the torchmetrics Recall class as a subclass of AnomalibMetric. There is an example provided here: https://anomalib.readthedocs.io/en/v2.0.0-beta.1/markdown/guides/how_to/evaluation/metrics.html
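Something along these lines, following the mixin pattern from the linked page (a sketch, assuming the torchmetrics BinaryRecall base class; the class name and fields are illustrative):

```python
from torchmetrics.classification import BinaryRecall

from anomalib.metrics import AnomalibMetric

class Recall(AnomalibMetric, BinaryRecall):
    """Image-level recall that reads its inputs from batch fields,
    like the built-in AUROC and F1Score wrappers."""

# Image-level recall computed from predicted and ground-truth labels.
recall = Recall(fields=["pred_label", "gt_label"])
# For pixel-level recall, mask fields would be used instead
# (e.g. ["pred_mask", "gt_mask"]; assumption, check the docs for exact field names).
```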

@LorenzoF6
Author

If I write the wrapper for Recall directly in Colab and then pass the new class to the evaluator, should it work? Or must I put it together with the other metrics?

@alexriedel1
Contributor

You can write it in Colab, yes.
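For completeness, wiring the wrapped metric into the evaluator next to the earlier metrics would look roughly like this (still a sketch under the same assumptions as above):

```python
from anomalib.metrics import AUROC, Evaluator, F1Score
from anomalib.models import Patchcore

# Recall here is the custom wrapper class sketched in the previous comment;
# it simply joins the same test_metrics list as the built-in metrics.
test_metrics = [
    AUROC(fields=["pred_score", "gt_label"]),
    F1Score(fields=["pred_label", "gt_label"]),
    Recall(fields=["pred_label", "gt_label"]),
]
evaluator = Evaluator(test_metrics=test_metrics)
model = Patchcore(evaluator=evaluator)
```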
