Thanks a lot to the Brevitas community for creating such a fantastic tool!
While browsing the Brevitas documentation, I have been trying to distill a complete take-home answer to the question "Why do we need Brevitas over PyTorch's built-in quantization?" So far I can see that Brevitas provides better support on the following points:
quantization at arbitrary precision: this clearly helps determine where the quantization limit of the model under test lies.
fine-grained control over quantized tensors: much more freedom in choosing which combination of (activations, weights, bias) to quantize.
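To make the "arbitrary precision" point concrete, here is a minimal, dependency-free sketch of symmetric uniform fake quantization at a configurable bit width. The `quantize_tensor` helper and its max-abs scale choice are illustrative assumptions for this post, not Brevitas's actual implementation (in Brevitas you would instead pass a bit width such as `weight_bit_width` to a quantized layer); PyTorch's built-in eager/FX quantization, by contrast, targets fixed 8-bit integer backends.

```python
def quantize_tensor(values, bit_width):
    """Symmetric uniform fake quantization (illustrative sketch, not Brevitas code).

    The scale maps the largest magnitude in `values` onto the signed
    integer grid [-(2**(b-1)), 2**(b-1) - 1] for the given bit width.
    """
    qmin = -(2 ** (bit_width - 1))
    qmax = 2 ** (bit_width - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    # Quantize to the integer grid, clamp, then dequantize back to float.
    quantized = []
    for v in values:
        q = min(qmax, max(qmin, round(v / scale)))
        quantized.append(q * scale)
    return quantized, scale


# Sweeping the bit width shows the quantization error shrinking, which is
# exactly the kind of "where is the model's precision limit?" experiment
# that arbitrary-precision support enables.
weights = [0.9, -0.4, 0.25, -1.0]
for bits in (2, 3, 4, 8):
    dequantized, _ = quantize_tensor(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, dequantized))
    print(f"{bits}-bit max error: {err:.4f}")
```

Fake quantization like this (quantize, then immediately dequantize) is what makes bit-width sweeps cheap to run on an unmodified float model, since every tensor stays in floating point end to end.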
Besides the above, I'd like to hear your thoughts on any other advantages I may have missed, especially comments from the developer team.
Best regards,
Chenster