I want to know: if I set the batch size to 1, as the paper describes, is there a significant performance drop? What is the reasoning behind the decoder design? Is it necessary to use the low-level features?
Also, could you release or explain the code for updating the CAC between stages? The released code contains a moving-average implementation. Which one did you use?
The batch size was set to 1 when I trained the model. In my re-implementation, I set it to 4 for faster convergence. Both settings are fine for the final performance. Please note that you have to increase the crop size if you choose a batch size of 1.
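For reference, here is a minimal sketch of that trade-off using generic PyTorch/torchvision pieces; the dummy dataset, crop sizes, and loader below are illustrative placeholders, not the repository's actual training script:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision import transforms

# Assumed trade-off: batch_size == 1 frees GPU memory that can be
# spent on a larger crop, while batch_size == 4 converges faster but
# needs a smaller crop on the same GPU. Values are illustrative.
BATCH_SIZE, CROP_SIZE = 1, (512, 1024)    # paper-style setting
# BATCH_SIZE, CROP_SIZE = 4, (256, 512)   # re-implementation setting

crop = transforms.RandomCrop(CROP_SIZE)

# Dummy stand-in for the real segmentation dataset.
images = torch.randn(8, 3, 1024, 2048)
loader = DataLoader(TensorDataset(images), batch_size=BATCH_SIZE, shuffle=True)

for (batch,) in loader:
    batch = crop(batch)    # same random crop applied across the batch
    print(batch.shape)     # torch.Size([BATCH_SIZE, 3, 512, 1024])
    break
```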
3. Thank you very much for pointing this out. I will look into it.
I have uploaded the CAC code (cac.py) and modified the instructions.
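For anyone reading along, here is a minimal sketch of what a moving-average category-anchor update can look like, assuming the anchors are per-class feature centroids; the function name, shapes, and momentum value are my own illustration, not taken from cac.py:

```python
import torch

def update_anchors(anchors, feats, labels, num_classes, momentum=0.99):
    """Exponential moving-average update of per-class feature centroids.

    anchors: (num_classes, C) running category anchors
    feats:   (N, C) pixel features
    labels:  (N,) class indices (e.g. source labels or target pseudo-labels)
    This is an assumed scheme for illustration, not necessarily cac.py.
    """
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroid = feats[mask].mean(dim=0)
            anchors[c] = momentum * anchors[c] + (1.0 - momentum) * centroid
    return anchors

# Example: 19 Cityscapes classes, 256-dim features.
anchors = torch.zeros(19, 256)
feats = torch.randn(4096, 256)
labels = torch.randint(0, 19, (4096,))
anchors = update_anchors(anchors, feats, labels, num_classes=19)
```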
Thanks for releasing the code!
I found several differences between the paper and the released code:
CAG_UDA/models/aspp.py, line 81 (commit b6fbea6)
CAG_UDA/models/decoder.py, line 36 (commit b6fbea6)
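Regarding the decoder.py reference above and the question about low-level features: below is a minimal sketch of the DeepLabV3+-style decoder pattern, where the upsampled ASPP output is concatenated with a projected low-level backbone feature. The channel sizes and module names are illustrative assumptions, not the values in decoder.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    """DeepLabV3+-style decoder: fuse ASPP output with low-level features.

    Channel sizes are illustrative guesses, not those in decoder.py.
    """
    def __init__(self, num_classes, low_level_channels=256, aspp_channels=256):
        super().__init__()
        # Project the low-level feature down before concatenation.
        self.reduce = nn.Sequential(
            nn.Conv2d(low_level_channels, 48, 1, bias=False),
            nn.BatchNorm2d(48),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(aspp_channels + 48, 256, 3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1),
        )

    def forward(self, x, low_level_feat):
        low = self.reduce(low_level_feat)
        # Upsample the ASPP output to the low-level feature's resolution.
        x = F.interpolate(x, size=low.shape[2:], mode="bilinear",
                          align_corners=False)
        return self.fuse(torch.cat([x, low], dim=1))
```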