Releases: ggerganov/llama.cpp

b2824

09 May 00:59
4426e29
cmake : fix typo (#7151)

b2822

09 May 00:13
bc4bba3
Introduction of CUDA Graphs to llama.cpp (#6766)

* DRAFT: Introduction of CUDA Graphs to llama.cpp

* Fix issues raised in comments

* Tidied to use only the CUDA runtime (not mixed with driver calls)

* disable for multi-gpu and batch size > 1

* Disable CUDA graphs for old GPU arch and with env var

* added missing CUDA_CHECKs

* Addressed comments

* further addressed comments

* limit to GGML_ALLOW_CUDA_GRAPHS defined in llama.cpp cmake

* Added more comprehensive graph node checking

* With mechanism to fall back if graph capture fails

* Revert "With mechanism to fall back if graph capture fails"

This reverts commit eb9f15fb6fcb81384f732c4601a5b25c016a5143.

* Fall back if graph capture fails and address other comments

* - renamed GGML_ALLOW_CUDA_GRAPHS to GGML_CUDA_USE_GRAPHS

- rename env variable to disable CUDA graphs to GGML_CUDA_DISABLE_GRAPHS

- updated Makefile build to enable CUDA graphs

- removed graph capture failure checking in ggml_cuda_error
  using a global variable to track this is not thread-safe, and I am also not satisfied with checking an error by its string;
  if this is necessary to work around some issues with graph capture with e.g. cuBLAS, we can pass the ggml_backend_cuda_context to the error-checking macro and store the result in the context

- fixed several resource leaks

- fixed issue with zero node graphs

- changed fixed size arrays to vectors

- removed the count of the number of evaluations before starting capture and instead changed the capture mode to relaxed (see the sketch after this entry)

- removed the check for multiple devices, so CUDA graphs can still be used with a single device; instead, check for split buffers to disable CUDA graphs with -sm row

- changed the op used to check the batch size to GGML_OP_ADD, which should be more reliable than GGML_OP_SOFT_MAX

- code style fixes

- things to look into
  - VRAM usage of the cudaGraphExec_t, if it is significant we may need to make it optional
  - possibility of using cudaStreamBeginCaptureToGraph to keep track of which ggml graph nodes correspond to which cuda graph nodes

* fix build without cuda graphs

* remove outdated comment

* replace minimum cc value with a constant

---------

Co-authored-by: slaren <[email protected]>
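
For readers unfamiliar with the pattern, here is a minimal host-side sketch of capture-and-replay with the CUDA runtime API. It is illustrative only: `enqueue_eval_work` is a hypothetical stand-in, and the real ggml-cuda implementation keeps the graph in the backend context, re-captures when the ggml graph changes, and falls back to eager execution when capture fails, as described above.

```cpp
// Illustrative sketch, not the actual ggml-cuda code: capture the work
// enqueued on a stream into a CUDA graph, instantiate it, and fall back
// to eager execution if any step fails.
#include <cuda_runtime.h>
#include <cstddef>

// Hypothetical stand-in for enqueueing the compute graph's kernels.
static void enqueue_eval_work(cudaStream_t stream, float * dst, const float * src, size_t n) {
    cudaMemcpyAsync(dst, src, n * sizeof(float), cudaMemcpyDeviceToDevice, stream);
}

static bool eval_with_cuda_graph(cudaStream_t stream, float * dst, const float * src, size_t n) {
    // Relaxed capture mode, as adopted above, tolerates unrelated activity
    // on other threads while capture is in progress.
    if (cudaStreamBeginCapture(stream, cudaStreamCaptureModeRelaxed) != cudaSuccess) {
        return false; // caller runs the graph eagerly instead
    }
    enqueue_eval_work(stream, dst, src, n);

    cudaGraph_t graph = nullptr;
    if (cudaStreamEndCapture(stream, &graph) != cudaSuccess) {
        return false;
    }
    cudaGraphExec_t exec = nullptr;
    // CUDA 12 signature; CUDA 11 takes (exec, graph, errNode, logBuf, bufSize).
    if (cudaGraphInstantiate(&exec, graph, 0) != cudaSuccess) {
        cudaGraphDestroy(graph);
        return false;
    }
    // The instantiated executable graph can be relaunched cheaply on
    // subsequent evaluations, which is where the speedup comes from.
    const cudaError_t err = cudaGraphLaunch(exec, stream);
    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    return err == cudaSuccess;
}
```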

b2821

08 May 23:52
c12452c
JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143)
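
For context, a minimal sketch with nlohmann/json (the JSON library llama.cpp bundles) of why `.at(key)` is safer than `operator[]` for reads: a missing key raises a catchable exception instead of silently inserting a value (or, on a const object, invoking undefined behaviour).

```cpp
#include <nlohmann/json.hpp>
#include <cstdio>
#include <string>

using json = nlohmann::json;

int main() {
    const json body = {{"model", "llama"}};

    // On a const json, body["missing"] is undefined behaviour for an absent
    // key; on a mutable one it silently inserts null. .at() throws instead.
    try {
        const std::string prompt = body.at("prompt");
        std::printf("prompt: %s\n", prompt.c_str());
    } catch (const json::out_of_range & e) {
        std::printf("bad request: %s\n", e.what());
    }
    return 0;
}
```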

b2820

08 May 22:56
9da243b
Revert "llava : add support for moondream vision language model (#6899)"

This reverts commit 46e12c4692a37bdd31a0432fc5153d7d22bc7f72.

b2818

08 May 22:43
26458af
metal : use `vm_allocate` instead of `posix_memalign` on macOS (#7078)

* fix: use `malloc` instead of `posix_memalign` in `ggml-metal.m` so that it does not crash Electron processes

* fix: typo

* fix: use `vm_allocate` instead of `posix_memalign`

* fix: don't call `newBufferWithBytesNoCopy` with `NULL` when `ggml_metal_host_malloc` returns `NULL`

* fix: use `vm_allocate` only on macOS
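
A minimal sketch of the resulting allocation pattern, assuming hypothetical wrapper names: `vm_allocate` returns page-aligned, zero-filled memory, which satisfies the alignment requirement of `newBufferWithBytesNoCopy` while avoiding the `posix_memalign` crashes seen in Electron processes.

```cpp
// Sketch of page-aligned host allocation on macOS via Mach VM calls
// (illustrative, not the exact ggml-metal.m implementation).
#include <mach/mach.h>
#include <cstddef>

static void * host_malloc(size_t n) {
    vm_address_t addr = 0;
    // Page-aligned and zero-filled; VM_FLAGS_ANYWHERE lets the kernel pick the address.
    if (vm_allocate(mach_task_self(), &addr, n, VM_FLAGS_ANYWHERE) != KERN_SUCCESS) {
        return nullptr; // callers must check, and must not hand nullptr to Metal
    }
    return (void *) addr;
}

static void host_free(void * ptr, size_t n) {
    vm_deallocate(mach_task_self(), (vm_address_t) ptr, n);
}
```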

b2817

08 May 18:14
83330d8
main : add --conversation / -cnv flag (#7108)

b2816

08 May 17:43
465263d
sgemm : AVX Q4_0 and Q8_0 (#6891)

* basic avx implementation

* style

* combine denibble with load (see the sketch below)

* reduce 256 to 128 (and back!) conversions

* sse load

* Update sgemm.cpp

* oops
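
To illustrate the "combine denibble with load" step: Q4_0 packs two 4-bit weights per byte with an offset of 8, so unpacking is a mask, a shift, and a subtract fused right after the load. A minimal SSE sketch of just that step (the sgemm.cpp kernels apply the same trick with wider AVX registers):

```cpp
#include <immintrin.h>
#include <cstdint>

// Unpack 32 Q4_0 weights (16 packed bytes) into two vectors of signed
// 8-bit values in [-8, 7]; sketch of the denibble step only.
static void denibble_q4_0(const uint8_t * packed, __m128i & lo, __m128i & hi) {
    const __m128i bytes = _mm_loadu_si128((const __m128i *) packed);
    const __m128i mask  = _mm_set1_epi8(0x0F);
    const __m128i off   = _mm_set1_epi8(8);

    // Low nibbles, then high nibbles shifted down (no 8-bit shift exists,
    // so shift 16-bit lanes and re-mask); subtract the Q4_0 offset.
    lo = _mm_sub_epi8(_mm_and_si128(bytes, mask), off);
    hi = _mm_sub_epi8(_mm_and_si128(_mm_srli_epi16(bytes, 4), mask), off);
}
```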

b2815

08 May 17:15
911b390
server : add_special option for tokenize endpoint (#7059)

b2813

08 May 14:52
229ffff
llama : add BPE pre-tokenization for Qwen2 (#7114)

* Add BPE pre-tokenization for Qwen2.

* minor : fixes

---------

Co-authored-by: Ren Xuancheng <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>

b2812

08 May 14:50
1fd9c17
clean up json_value & server_log (#7142)