
train issues #22

Open
ctxya1207 opened this issue May 29, 2024 · 18 comments

Comments

@ctxya1207

How can I train on an AIGC dataset?

@zwx8981
Owner

zwx8981 commented May 29, 2024

I think the only two things to do are: (1) write a data loader for the target AIGC dataset, and (2) modify the training code by omitting the two auxiliary tasks (scene classification and distortion type identification) if you only want to train with quality labels.
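For step (1), a quality-labels-only loader might look like the sketch below. The CSV columns `img_path` and `mos` are hypothetical placeholders; adapt them to the target AIGC dataset's actual annotation format:

```python
# Minimal sketch of a quality-labels-only loader for a generic AIGC IQA
# dataset. The annotation file layout (columns "img_path", "mos") is an
# assumption, not the format of any particular dataset.
import csv
import os
import torch
from torch.utils.data import Dataset
from PIL import Image

class AIGCDataset(Dataset):
    def __init__(self, csv_file, root, transform=None):
        self.root = root
        self.transform = transform
        with open(csv_file, newline="") as f:
            self.samples = [(row["img_path"], float(row["mos"]))
                            for row in csv.DictReader(f)]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, mos = self.samples[idx]
        img = Image.open(os.path.join(self.root, path)).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        # Only the quality label is returned; the scene / distortion
        # auxiliary labels are omitted, matching quality-only training.
        return img, torch.tensor(mos, dtype=torch.float32)
```

Step (2) then amounts to dropping the scene/distortion heads' losses from the training loop, since this loader yields no labels for them.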

@ctxya1207
Author

> I think the only two things to do are: (1) write a data loader for the target AIGC dataset, and (2) modify the training code by omitting the two auxiliary tasks (scene classification and distortion type identification) if you only want to train with quality labels.

Thank you. One more question: which one is the fidelity loss function for the quality score?

@zwx8981
Owner

zwx8981 commented May 30, 2024

@ctxya1207 I have updated the code by adding a script that enables single-database training of LIQE with quality labels only. See the README:

python train_liqe_single.py

@ctxya1207
Author

def loss_m(y_pred, y):
    """prediction monotonicity related loss"""
    assert y_pred.size(0) > 1
    preds = y_pred - y_pred.t()
    gts = y.t() - y
    triu_indices = torch.triu_indices(y_pred.size(0), y_pred.size(0), offset=1)
    preds = preds[triu_indices[0], triu_indices[1]]
    gts = gts[triu_indices[0], triu_indices[1]]
    return torch.sum(F.relu(preds * torch.sign(gts))) / preds.size(0)

Is this loss function the fidelity loss?

@ctxya1207
Author

In fact, I want to use the fidelity loss when predicting the consistency score in AIGC image quality assessment, but I don't know how to write this function.

@zwx8981
Owner

zwx8981 commented May 30, 2024

We have several implementation variants of the fidelity loss. By default, our original implementation uses loss_m4, which takes the predicted quality, the number of images sampled from each dataset, and the ground-truth quality as input, computes the fidelity loss on each dataset, and averages them into the final loss value.
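As an illustration of this scheme (a sketch, not the repository's exact loss_m4), one can compute a loss_m3-style fidelity loss per dataset slice and average the results; `num_per_dataset`, a list of per-dataset sample counts, is an assumed calling convention:

```python
# Sketch: per-dataset fidelity loss, averaged over the datasets in a
# mixed batch. Not the repository's exact implementation.
import torch

def fidelity_loss(y_pred, y, eps=1e-8):
    """Pairwise fidelity loss on a single dataset (loss_m3-style)."""
    y_pred = y_pred.view(-1, 1)
    y = y.view(-1, 1)
    preds = y_pred - y_pred.t()
    gts = y - y.t()
    idx = torch.triu_indices(y_pred.size(0), y_pred.size(0), offset=1)
    preds = preds[idx[0], idx[1]]
    gts = gts[idx[0], idx[1]]
    g = 0.5 * (torch.sign(gts) + 1)  # ground-truth pairwise preference
    # probability that the model prefers the first image of each pair
    p = 0.5 * (1 + torch.erf(preds / torch.sqrt(torch.tensor(2.0))))
    return torch.mean(1 - (torch.sqrt(p * g + eps)
                           + torch.sqrt((1 - p) * (1 - g) + eps)))

def multi_dataset_fidelity(y_pred, y, num_per_dataset):
    """Average the fidelity loss over the datasets present in one batch."""
    losses, start = [], 0
    for n in num_per_dataset:
        losses.append(fidelity_loss(y_pred[start:start + n],
                                    y[start:start + n]))
        start += n
    return torch.stack(losses).mean()
```

Predictions and labels are assumed to be concatenated dataset by dataset, so each slice is scored independently before averaging.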

@zwx8981
Owner

zwx8981 commented May 30, 2024

If you only want the fidelity loss, loss_m3 would be fine. loss_m is an implementation of the margin ranking loss, not the fidelity loss.

@ctxya1207
Author

Thank you very much. Could you give me a contact method so that we can communicate better?

@zwx8981
Owner

zwx8981 commented May 30, 2024

Feel free to contact me via e-mail: [email protected]

@ctxya1207
Author

esp = 1e-8  # small stabilizing constant (value assumed; defined elsewhere in the repo)

def loss_m3(y_pred, y):
    """prediction monotonicity related loss"""
    assert y_pred.size(0) > 1
    y_pred = y_pred.unsqueeze(1)
    y = y.unsqueeze(1)
    preds = y_pred - y_pred.t()
    gts = y - y.t()

    triu_indices = torch.triu_indices(y_pred.size(0), y_pred.size(0), offset=1)
    preds = preds[triu_indices[0], triu_indices[1]]
    gts = gts[triu_indices[0], triu_indices[1]]
    g = 0.5 * (torch.sign(gts) + 1)

    constant = torch.sqrt(torch.Tensor([2.])).to(preds.device)
    p = 0.5 * (1 + torch.erf(preds / constant))

    g = g.view(-1, 1)
    p = p.view(-1, 1)

    loss = torch.mean(1 - (torch.sqrt(p * g + esp) + torch.sqrt((1 - p) * (1 - g) + esp)))

    return loss

In this function, if the size of y is (batch_size, 1), should unsqueeze(1) be removed?

@zwx8981
Owner

zwx8981 commented May 30, 2024

Yes
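A quick shape check illustrates why: `.t()` is a no-op on 1-D tensors, so a `(batch_size,)` vector needs `unsqueeze(1)` before the pairwise difference, while a `(batch_size, 1)` input is already a column vector (and a further `unsqueeze(1)` would make it 3-D, which `.t()` rejects):

```python
# Shape check for the unsqueeze(1) question in loss_m3.
import torch

y = torch.rand(4, 1)             # already (batch_size, 1): no unsqueeze needed
diff = y - y.t()                 # (4, 4) pairwise difference matrix
assert diff.shape == (4, 4)

y1d = torch.rand(4)              # a 1-D vector must be unsqueezed first:
diff_flat = y1d - y1d.t()        # .t() is a no-op on 1-D tensors -> still (4,)
assert diff_flat.shape == (4,)
diff_ok = y1d.unsqueeze(1) - y1d.unsqueeze(1).t()
assert diff_ok.shape == (4, 4)   # column vector gives the pairwise matrix
```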

@ctxya1207
Author

In train_liqe_single.py there is total_loss = total_loss + 0.1*refine_loss. What does refine_loss mean, and why is its weight 0.1?

@zwx8981
Owner

zwx8981 commented May 30, 2024

Sorry, that's uncleaned code. I've fixed it. Try it again.

@ctxya1207
Author

> Sorry, that's uncleaned code. I've fixed it. Try it again.

running_loss = beta * running_loss + (1 - beta) * total_loss.data.item()

Why is beta set to 0.9?

@zwx8981
Owner

zwx8981 commented May 30, 2024

This is only a momentum factor for computing a moving average of the loss; it does not affect training.
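For illustration, the update is a standard exponential moving average used only for logging; beta = 0.9 just controls how strongly the displayed loss curve is smoothed:

```python
# Exponential moving average of the per-batch loss, for logging only.
# The batch_losses values below are made-up example numbers.
beta = 0.9
running_loss = 0.0
batch_losses = [1.0, 0.8, 0.9, 0.7]
for total_loss in batch_losses:
    # heavier weight (beta) on history, lighter (1 - beta) on the new batch
    running_loss = beta * running_loss + (1 - beta) * total_loss
```

No gradient flows through `running_loss`, so changing beta changes only what gets printed, not the optimization.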

@ctxya1207
Author

Regarding num_steps_per_epoch = 200: may I ask whether this variable is equivalent to batch_size?

@ctxya1207
Author

        print('...............current average best...............')
        print('best average epoch:{}'.format(best_epoch['avg']))
        print('best average result:{}'.format(best_result['avg']))
        for dataset in srcc_dict.keys():
            print_text = dataset + ':' + 'scene:{}, distortion:{}, srcc:{}'.format(
                scene_dict[dataset], type_dict[dataset], srcc_dict[dataset])
            print(print_text)

        print('...............current quality best...............')
        print('best quality epoch:{}'.format(best_epoch['quality']))
        print('best quality result:{}'.format(best_result['quality']))
        for dataset in srcc_dict1.keys():
            print_text = dataset + ':' + 'scene:{}, distortion:{}, srcc:{}'.format(
                scene_dict1[dataset], type_dict1[dataset], srcc_dict1[dataset])
            print(print_text)

What is the difference between the avg best and the quality best?

@122443

122443 commented Jul 11, 2024

Can you help me with this? Why is the following error reported when running: FileNotFoundError: [Errno 2] No such file or directory: '/ IQA_Database/databaserelease2/gblur/img143.bmp'? I have already changed the path to the IQA_database file.
