How #8
Yeah, I think so.
But the result of GIGA-Aff is higher than the one in the paper.
I am currently working on a graduation project based on your thesis, so I am eager to know the specific number of epochs in your work. You can also reply to me by email: [email protected].
That's possible; different devices and different random seeds can give different results. I suggest running more tests with more random seeds. BTW, how much higher?
5 or 6 percentage points.
Is that the average result from multiple random seeds?
Yes, the random seeds are [0, 1, 2, 3, 4] and training runs for 20 epochs. I tested twice; both results were higher than the paper's.
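A minimal sketch of this kind of multi-seed evaluation (the `evaluate_success_rate` callable is a hypothetical stand-in, not GIGA's actual API):

```python
# Hypothetical sketch: average an evaluation metric over several random seeds.
# `evaluate_success_rate` is an assumed callable, not part of the GIGA repo.
def average_over_seeds(evaluate_success_rate, seeds=(0, 1, 2, 3, 4)):
    """Run one evaluation per seed and return the mean result."""
    results = [evaluate_success_rate(seed) for seed in seeds]
    return sum(results) / len(results)
```

Reporting the mean (and spread) over seeds makes comparisons like GIGA vs. GIGA-Aff much less sensitive to a single lucky or unlucky run.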
Hmmm, how about the result of GIGA? Is it better than GIGA-Aff?
Emm, the results of GIGA-Aff are sometimes better than GIGA's, by about 1 percentage point.
Hmmm, that's weird. Have you checked the loss curves and made sure they both converged?
Hi, I trained GIGA-Aff for 10 epochs, and the result is 5 percentage points higher than the paper's. I wonder if there is a problem with the code.
How does the training curve of GIGA look? Have you trained GIGA?
So GIGA trained with the same number of epochs performs worse than GIGA-Aff?
Yes.
Hmmm, that's weird. What scenario are you using? Packed or pile?
Packed.
Could the cause be instability of the network?
Not sure about that. GIGA should perform better than GIGA-Aff, especially in packed scenarios.
Sorry for the late reply. It's weird: I retrained GIGA-Aff, and the result was lower than before, even lower than the paper's.
Did you retrain with the same settings?
On a different computer.
The absolute value may vary because the test scenes can be different, but the relative performance between GIGA and GIGA-Aff should stay the same. However, you said previously that GIGA is worse than GIGA-Aff, which is very weird. Did you train GIGA and GIGA-Aff on the same computer?
Yes, I trained them on the same computer before. Even when training the same model on the same computer, the loss curves I get are different.
The loss curve can be different. I think training on different computers is OK. The important thing is testing on the same computer, so that after fixing the random seed, the generated scenes will be the same. (I should have asked if you tested them on the same computer; it was a typo.)
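A minimal stdlib-only sketch of the fixed-seed idea (a real setup would also seed numpy/torch, which is omitted here as an assumption):

```python
import random

def set_seed(seed):
    # A real setup would also seed numpy and torch; this sketch uses
    # only the stdlib RNG to show why fixing the seed makes the
    # generated test scenes identical between runs.
    random.seed(seed)

set_seed(0)
first_run = [random.random() for _ in range(5)]
set_seed(0)
second_run = [random.random() for _ in range(5)]
# With the same seed, the two "scene generation" streams are identical.
```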
I tested them on the same computer.
Not sure why this happens. I'll look back into this and re-train by myself later.
Thank you very much.
🙆 Many, many thanks!
Are GIGA, VGN, and GIGA-Aff all trained for the same number of epochs? Are they all 20 epochs?