Feature request - Edge-ready models #21
-
Hi,
Well, I have raised issues in pytorch/pytorch#45957 and microsoft/onnxjs#233. Tbh, this looks like an upstream problem to me; a similar issue was fixed in … Maybe someone proficient in JS can provide a quick fix / some monkey patching, though.
Our main framework is PyTorch, and we do not have enough time or resources to do this ourselves.
In the short term this could be done by sacrificing model quality, but that contradicts our design philosophy: there should always be one model for any language.
-
Good news!
-
The current aspirational targets for V2:
-
The new experimental model is 25M, quantized.
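For context, here is a minimal sketch of post-training dynamic quantization in PyTorch, which is one common way to get a model down to this kind of footprint. TinyAcousticModel and all shapes below are hypothetical stand-ins, not the actual Silero architecture:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the real architecture is an assumption here.
class TinyAcousticModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, 40)

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.head(out)

model = TinyAcousticModel().eval()

# Dynamic quantization: weights of the listed module types are stored as
# int8; activations stay float and are quantized on the fly at inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear, nn.LSTM}, dtype=torch.qint8
)

# Rough on-disk size comparison between the fp32 and int8 variants.
torch.save(model.state_dict(), "fp32.pt")
torch.save(quantized.state_dict(), "int8.pt")
```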
-
Most of these bullets have been delivered.
-
Mostly approaching this goal, I am working on a model with 5-7M params as an experiment.
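For reference, a quick way to check whether a candidate model actually lands in the 5-7M range (works for any nn.Module):

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Tiny example: a 64->128 Linear layer has 64*128 + 128 = 8320 params.
print(count_params(nn.Linear(64, 128)))
# A model meeting the 5-7M target should return roughly 5e6-7e6 here.
```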
-
🚀 Feature
Your current models are well-suited for cloud/server inference, but unsuited for edge (i.e. web/mobile) inference.
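One possible route for the mobile half of "edge" is TorchScript plus the PyTorch mobile runtime. A minimal sketch, assuming the model traces cleanly; the network, shapes, and file names below are placeholders:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder network; substitute the real model here.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 40)).eval()

# Trace with a representative input (placeholder dimensions).
example = torch.randn(1, 64)
scripted = torch.jit.trace(model, example)

# Fuses ops and strips training-only code paths for mobile runtimes.
optimized = optimize_for_mobile(scripted)

# Emits a file loadable by the PyTorch Lite interpreter on Android/iOS
# (on older PyTorch versions, use torch.jit.save(optimized, "model.pt")).
optimized._save_for_lite_interpreter("model.ptl")
```

The web half (ONNX.js / TF.js) is harder precisely because of the conversion errors listed under "Additional info" below.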
Problems
Possible solutions
Additional info
ONNX.js error:
TypeError: int64 is not supported
(failed to load model)

TF.js error:
Error: Negative size values should be exactly -1 but got NaN for the slice() size at index 0.
(multiple conversion runs, cpu backend, debug enabled)
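The int64 error, at least, can sometimes be worked around on the export side rather than in JS. Below is a naive sketch that downcasts int64 initializers in an exported ONNX graph to int32 ("model.onnx" is a placeholder path). Note it only touches stored tensors; ops that produce int64 at runtime (e.g. Shape) would still need Cast nodes inserted, which is why this ultimately looks like an upstream problem:

```python
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")  # placeholder path

# Downcast every int64 initializer to int32. Only safe when the stored
# values actually fit in int32 (true for typical shape/index tensors).
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        arr = numpy_helper.to_array(init).astype(np.int32)
        init.CopyFrom(numpy_helper.from_array(arr, init.name))

onnx.save(model, "model_int32.onnx")
```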