v3.0.0-beta.30
Pre-release
3.0.0-beta.30 (2024-06-17)
Bug Fixes
- avoid duplicate context shifts (#241) (1e7c5d0)
- onProgress on ModelDownloader (#241) (1e7c5d0) (see the usage sketch below)
- re-enable CUDA binary compression (#241) (1e7c5d0)
- more thorough tests before loading a binary (#241) (1e7c5d0)
- increase compatibility of prebuilt binaries (#241) (1e7c5d0)
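For context on the onProgress fix, here is a minimal sketch of wiring a progress callback into createModelDownloader. The option names used here (modelUrl, dirPath, onProgress and its totalSize/downloadedSize fields) and the placeholder model URL are assumptions for illustration; check the node-llama-cpp docs for the exact signatures.

```ts
// Minimal sketch: reporting download progress through createModelDownloader.
// Option names (modelUrl, dirPath, onProgress) are assumed; verify against the docs.
import path from "path";
import {fileURLToPath} from "url";
import {createModelDownloader} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const downloader = await createModelDownloader({
    modelUrl: "https://example.com/some-model.gguf", // hypothetical model URL
    dirPath: path.join(__dirname, "models"),
    onProgress({totalSize, downloadedSize}) {
        // invoked periodically while the file is being downloaded
        console.log(`downloaded ${downloadedSize}/${totalSize} bytes`);
    }
});

const modelPath = await downloader.download();
console.log("model saved to", modelPath);
```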
Shipped with llama.cpp release b3166
To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)