Hello there!
I recently implemented the side-refinement part of CTPN in TensorFlow. During training, I noticed that the regression loss of the side-refinement offsets (Lo(re)) is much smaller than the other loss terms; for example, Lv(re) is often 100 to 200 times larger than Lo(re).
I wonder whether my implementation of the side-refinement part is correct...
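For reference, the multi-task loss in the CTPN paper is, as far as I understand it (please double-check against the paper):

$$L(s_i, v_j, o_k) = \frac{1}{N_s}\sum_i L_s^{cl}(s_i, s_i^*) + \frac{\lambda_1}{N_v}\sum_j L_v^{re}(v_j, v_j^*) + \frac{\lambda_2}{N_o}\sum_k L_o^{re}(o_k, o_k^*)$$

where both regression terms use smooth-L1 loss. One thing I'm unsure about: Lo(re) is only evaluated on side-anchors near the left/right boundary of a text line, so it is normalized over a much smaller set (N_o) than Lv(re) is (N_v), and a gap between the two terms might partly reflect that rather than a bug.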
Hmm... in my implementation, Lv(re) is often 50 to 100 times smaller than Lo(re).
My approach is as follows:
Example: the yellow bbox is the ground truth (gt), and the black box (bl) is the anchor under consideration. I calculate xside (of bl) = (x_leftside, x_rightside), with wa = 16:
x_leftside = d1 / 16 (dark green arrow)
x_rightside = d2 / 16 (bright blue arrow)
(d1 and d2 are the x-axis distances from the anchor center to the gt's left and right sides, respectively; see the sketch below.)
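In case a concrete snippet helps to compare implementations, here is a minimal NumPy sketch of the target computation described above. The names (`anchor_cx`, `gt_x_left`, `gt_x_right`) are mine, not from any particular codebase; d1, d2, and wa = 16 are as defined above.

```python
import numpy as np

WA = 16.0  # fixed anchor width wa from the paper

def side_refinement_targets(anchor_cx, gt_x_left, gt_x_right):
    """Compute (x_leftside, x_rightside) for one or more anchors.

    anchor_cx:  x-coordinate(s) of the anchor center(s)
    gt_x_left:  x-coordinate(s) of the matched gt box's left side
    gt_x_right: x-coordinate(s) of the matched gt box's right side
    """
    anchor_cx = np.asarray(anchor_cx, dtype=np.float32)
    d1 = np.asarray(gt_x_left, dtype=np.float32) - anchor_cx   # dark green arrow
    d2 = np.asarray(gt_x_right, dtype=np.float32) - anchor_cx  # bright blue arrow
    return d1 / WA, d2 / WA  # (x_leftside, x_rightside)
```

For example, `side_refinement_targets(24.0, 10.0, 58.0)` returns `(-0.875, 2.125)`; the offsets are signed, so an anchor center to the right of the gt's left side gives a negative x_leftside.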