
Using custom loss with VisionClassifierTrainer #40

Open
guneetsk99 opened this issue May 10, 2022 · 3 comments

Comments

@guneetsk99

I wanted to use a custom loss function with ViT. How should I proceed, since the VisionClassifierTrainer does not expose the loss?

@qanastek
Owner

Hi,

Since the HugsVision VisionClassifierTrainer is built on the HuggingFace transformers library, and specifically on its Trainer class, the loss function cannot be tweaked from within HugsVision.

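For readers landing on this thread: the stock transformers Trainer does expose a compute_loss hook that a subclass can override, even though HugsVision's wrapper does not surface it. A minimal sketch, not wired into HugsVision and assuming a transformers version where compute_loss(model, inputs) is the hook:

```python
import torch.nn.functional as F
from transformers import Trainer

class CustomLossTrainer(Trainer):
    """Trainer subclass that replaces the default loss with a custom one."""

    def compute_loss(self, model, inputs, return_outputs=False):
        # Pop the labels so the model does not compute its built-in loss.
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Plain cross-entropy here as a placeholder; swap in any torch loss.
        loss = F.cross_entropy(logits, labels)
        return (loss, outputs) if return_outputs else loss
```

You would then use CustomLossTrainer in place of Trainer directly, bypassing the VisionClassifierTrainer wrapper.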

@guneetsk99
Author

guneetsk99 commented May 10, 2022

Thanks @qanastek for the prompt response; it's really an amazing library.
If possible, can you suggest how I could use your models/ directory? There I could change the cross-entropy loss to, for example, KLDivLoss for testing.

@qanastek
Owner

HugsVision isn't really suited for research. But you can directly rewrite the Transformers class ViTForImageClassification for the single_label_classification task.

In my opinion, that's the simplest way to do it: clone the Transformers repository, install it locally using pip install --editable ., and modify the loss in the class.
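An alternative to editing the Transformers source in place is subclassing ViTForImageClassification and overriding forward to swap the loss, which keeps the installed library untouched. A hedged sketch of the KLDivLoss swap asked about above — note that nn.KLDivLoss expects log-probabilities as input and a probability distribution as target, so labels here must be soft targets of shape (batch, num_labels), not class indices:

```python
import torch
import torch.nn as nn
from transformers import ViTForImageClassification

class ViTWithKLDivLoss(ViTForImageClassification):
    """ViT classifier whose loss is KL divergence against soft label targets.

    `labels` must be a (batch, num_labels) probability distribution
    (e.g. one-hot or smoothed labels), not integer class indices.
    """

    def forward(self, pixel_values=None, labels=None, **kwargs):
        # Run the parent forward without labels so no built-in loss is computed.
        outputs = super().forward(pixel_values=pixel_values, **kwargs)
        if labels is not None:
            log_probs = torch.log_softmax(outputs.logits, dim=-1)
            outputs.loss = nn.KLDivLoss(reduction="batchmean")(log_probs, labels)
        return outputs
```

With one-hot targets this reduces to the usual cross-entropy, so it is a drop-in way to test distribution-valued losses.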
