
Commit
add blog link (ggerganov#6222)
NeoZhangJianyu authored and hodlen committed Apr 3, 2024
1 parent 9763c8d commit a937c97
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions README-sycl.md
@@ -29,6 +29,7 @@ For Intel CPUs, we recommend building llama.cpp for x86 with Intel MKL.
## News

- 2024.3
- A blog post has been published: **Run LLM on all Intel GPUs Using llama.cpp**: [intel.com](https://www.intel.com/content/www/us/en/developer/articles/technical/run-llm-on-all-gpus-using-llama-cpp-artical.html) or [medium.com](https://medium.com/@jianyu_neo/run-llm-on-all-intel-gpus-using-llama-cpp-fd2e2dcbd9bd).
- A new baseline is ready: [tag b2437](https://github.com/ggerganov/llama.cpp/tree/b2437).
- Support for multiple cards via **--split-mode**: [none|layer]; [row] is not supported yet and is under development.
- Support for assigning the main GPU with **--main-gpu**, replacing $GGML_SYCL_DEVICE.
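The multi-GPU flags mentioned in the diff might be exercised as follows. This is a hedged sketch: the model path, layer count, and device index are placeholder assumptions, and exact flag behavior follows the SYCL README at tag b2437.

```shell
# Enumerate SYCL devices first to pick a main GPU index
# (ls-sycl-device is built as part of the SYCL build of llama.cpp).
./build/bin/ls-sycl-device

# Run across multiple cards, splitting work by layer;
# --main-gpu selects the primary device (replaces $GGML_SYCL_DEVICE).
# Model path and -ngl value below are illustrative placeholders.
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -ngl 33 \
  --split-mode layer --main-gpu 0
```

Note that `--split-mode row` would be rejected at this baseline, since only `none` and `layer` are implemented.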
