frankzflyward changed the title from "Is it possible to run NanoLLM on datacenter or desktop GPU like A-series, T-series or 40-series?" to "Is it possible to run NanoVLM on data center GPU like A-series, T-series or desktop GPU like 40-series?" on Aug 30, 2024.
@frankzflyward I haven't been able to try porting it yet. I've been meaning to split the plugins/agents into a separate repo to make that easier on both accounts. Then, in theory, you would just need PyTorch, Transformers, MLC, and optionally AWQ installed (all of which are available on x86; most of the containerization/build effort goes into getting them and other dependencies running on Jetson/aarch64).
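As a rough sanity check on an x86 box (a sketch only, assuming the dependency list above; the Python module names for the MLC and AWQ packages, `mlc_llm` and `awq`, are guesses and may differ in your install):

```python
import importlib.util


def check_deps(names):
    """Return a dict mapping each top-level package name to whether it is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}


if __name__ == "__main__":
    # Core and optional dependencies mentioned above; the MLC/AWQ
    # module names ("mlc_llm", "awq") are assumptions.
    required = ["torch", "transformers"]
    optional = ["mlc_llm", "awq"]

    for name, ok in check_deps(required).items():
        print(f"{name}: {'OK' if ok else 'MISSING (required)'}")
    for name, ok in check_deps(optional).items():
        print(f"{name}: {'OK' if ok else 'missing (optional)'}")
```

If all four report OK, the core stack is at least importable on that machine; the Jetson-specific container build steps would still not apply.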
Has anyone tried this yet?