sync : llama.cpp #773

Merged 35 commits on Mar 27, 2024

Commits on Mar 27, 2024

  1. gguf : add support for I64 and F64 arrays (llama/6062)

    * gguf : add support for I64 and F64 arrays
    
    GGML currently does not support I64 or F64 arrays, and they are not often
    used in machine learning. However, in case the need arises in the future,
    it is worth adding them now, so that the types sit next to the other
    types I8, I16, I32 in the enums, and their type numbers are reserved.

    Furthermore, with this addition the GGUF format becomes very usable for
    most computational applications of NumPy (being compatible with the most
    common NumPy dtypes: i8, i16, i32, i64, f32, f64), providing a faster
    and more versatile alternative to the `npz` format, and a simpler
    alternative to the `hdf5` format.

    The change in this PR seems small and should not significantly increase
    the maintenance burden. I tested this from Python using GGUFWriter/Reader
    and `gguf-dump`, as well as from C; everything seems to work.
    
    * Fix compiler warnings
    certik authored and ggerganov committed Mar 27, 2024
    Commit aeb0020
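    As a rough illustration (not part of the PR), the sketch below writes a
    1-D F64 tensor to a GGUF file through the ggml/gguf C API of this period;
    the tensor name, fill values, and output path are made up for the example,
    and error handling is omitted. I64 tensors work the same way.

    ```c
    // Minimal sketch: store a small F64 array in a GGUF file using the
    // GGML_TYPE_F64 type added in this commit.
    #include "ggml.h"

    int main(void) {
        const int64_t n = 8;

        struct ggml_init_params params = {
            /*.mem_size   =*/ 16*1024*1024,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // create a 1-D F64 tensor and fill it with example data
        struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_F64, n);
        ggml_set_name(t, "my_f64_data");
        for (int64_t i = 0; i < n; ++i) {
            ((double *) t->data)[i] = 0.5 * (double) i;
        }

        // register the tensor in an empty GGUF context and write it to disk
        struct gguf_context * gctx = gguf_init_empty();
        gguf_add_tensor(gctx, t);
        gguf_write_to_file(gctx, "data.gguf", /*only_meta =*/ false);

        gguf_free(gctx);
        ggml_free(ctx);
        return 0;
    }
    ```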
  2. Fix non-intel device selection (llama/6042)

    * Fix non-intel device selection
    
    * Update ggml-sycl.cpp
    
    Co-authored-by: Neo Zhang Jianyu <[email protected]>
    
    * Update ggml-sycl.cpp
    
    Co-authored-by: Neo Zhang Jianyu <[email protected]>
    
    ---------
    
    Co-authored-by: Abhilash Majumder <[email protected]>
    Co-authored-by: Neo Zhang Jianyu <[email protected]>
    3 people authored and ggerganov committed Mar 27, 2024
    Commit 323c87d
  3. Commit aab2444
  4. Commit 3359229
  5. ggml : add AVX512F SIMD (llama/6088)

    amiralimi authored and ggerganov committed Mar 27, 2024
    Commit 116d0a2
  6. ggml : fix error when finding the transfer queue family index (llama/6094)

    Co-authored-by: GainLee <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 8172925
  7. backend : offload large batches to GPU (llama/6083)

    * backend : offload large batches to GPU
    
    * fix hip
    
    * code cleanup
    
    * fix CUDA split buffers
    
    * Update ggml-backend-impl.h
    
    Co-authored-by: Johannes Gäßler <[email protected]>
    
    * cuda : fix memset without set_device
    
    * imatrix : remove sched affix from weight names
    
    * sched : add a new split if the current one has too many inputs
    reduce max inputs per split
    more cleanup
    
    * update backends
    
    ggml-ci
    
    ---------
    
    Co-authored-by: Johannes Gäßler <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 3edce07
  8. Commit d1023de
  9. Commit dff7077
  10. cuda : refactor to remove global resources (llama/6170)

    * cuda : refactor to remove global resources
    slaren authored and ggerganov committed Mar 27, 2024
    Commit 881e390
  11. Commit 5fa5e12
  12. Commit 0b457ad
  13. Commit 8339d37
  14. Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (llama/6183)
    
    * k_cache: be able to use Q5_0
    
    * k_cache: be able to use Q5_1 on CUDA
    
    * k_cache: be able to use Q5_0 on Metal
    
    * k_cache: be able to use Q5_1 on Metal
    
    * k_cache: be able to use IQ4_NL - just CUDA for now
    
    * k_cache: be able to use IQ4_NL on Metal
    
    * k_cache: add newly added supported types to llama-bench and CUDA supports_op
    
    ---------
    
    Co-authored-by: Iwan Kawrakow <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 052559f
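    A hedged sketch of how a caller would opt into one of these K-cache types
    through the llama.cpp C API of this period (the helper name is made up,
    model loading and error handling are omitted); per the last bullet above,
    the same types are also selectable from llama-bench via its cache-type
    options.

    ```c
    // Sketch: request a Q5_0-quantized K cache when creating a context.
    #include "llama.h"

    struct llama_context * ctx_with_q5_0_k_cache(struct llama_model * model) {
        struct llama_context_params cparams = llama_context_default_params();
        cparams.type_k = GGML_TYPE_Q5_0;  // K-cache type enabled by this commit
        // cparams.type_v can be set analogously for the V cache
        return llama_new_context_with_model(model, cparams);
    }
    ```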
  15. ggml : same IQ4_NL quantization for CPU/CUDA/Metal (llama/6196)

    * Make quantize_row_iq4_nl do the same thing as quantization on CUDA

    * Make quantize_row_iq4_nl do the same thing as quantization on CUDA
    
    This time for real. backend-ops tests pass.
    
    * Now fix test-quantize-fns
    
    ---------
    
    Co-authored-by: Iwan Kawrakow <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit a9e1f05
  16. Commit f48bc15
  17. Commit 3bd340f
  18. metal : pad n_ctx by 32 (llama/6177)

    * metal : require ne00 >= 128 for mat-mat kernels
    
    ggml-ci
    
    * llama : pad n_ctx by 32
    
    ggml-ci
    ggerganov committed Mar 27, 2024
    Commit 5cca5b4
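    For reference, rounding up to a multiple of 32 is plain power-of-two
    padding; the macro below is written out for the example and mirrors the
    GGML_PAD helper in ggml.h.

    ```c
    // Round x up to the next multiple of n (n must be a power of two).
    #define PAD(x, n) (((x) + (n) - 1) & ~((n) - 1))

    // n_ctx padded to a multiple of 32: 1000 -> 1024, 4096 -> 4096, 1 -> 32
    static unsigned int pad_n_ctx(unsigned int n_ctx) {
        return PAD(n_ctx, 32);
    }
    ```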
  19. metal : proper assert for mat-mat memory alignment (llama/6225)

    * metal : proper assert for mat-mat memory alignment
    
    ggml-ci
    
    * readme : add notice about the bug fix
    
    * metal : fix the fix
    
    ggml-ci
    ggerganov committed Mar 27, 2024
    Commit e1e0d48
  20. cuda : add LLAMA_CUDA_NO_PEER_COPY to work around broken ROCm p2p copy (llama/6208)

    * cuda : add LLAMA_CUDA_NO_PEER_COPY to work around broken ROCm p2p copy

    * add LLAMA_CUDA_NO_PEER_COPY to HIP build
    slaren authored and ggerganov committed Mar 27, 2024
    Commit f45358a
  21. use _wfopen instead of fopen on Windows (llama/6248)

    also fix missing #defines before windows.h, and BPE LF token on MSVC
    cebtenzzre authored and ggerganov committed Mar 27, 2024
    Commit c1ab06b
  22. offload op (llama/6217)

    * remove no USM methods
    
    * leave the schedule to ggml_backend_sched entirely
    airMeng authored and ggerganov committed Mar 27, 2024
    Commit 8f1c0a4
  23. Fix heap corruption from wmode out-of-bound writes on windows (llama/6272)
    
    * would throw an error on VS2022 at GGML_FREE(wmode)
    * wchar_t is usually 2 bytes, but malloc wants bytes
      * therefore `*wmode_p++ = (wchar_t)*mode;` could write off the end of the allocation
    * Fixes error possibly introduced by ggerganov/llama.cpp#6248
    TheFlipbook authored and ggerganov committed Mar 27, 2024
    Commit 3bc549e
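    Taken together with the _wfopen change from llama/6248 above, the pattern
    the two commits converge on looks roughly like the sketch below. This is
    an illustration with a made-up helper name, not the exact llama.cpp code:
    the path is converted from UTF-8 with MultiByteToWideChar, and the mode
    buffer is sized in bytes, i.e. (strlen(mode) + 1) * sizeof(wchar_t), which
    is the overflow llama/6272 fixes.

    ```c
    #ifdef _WIN32
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static FILE * fopen_utf8(const char * path, const char * mode) {
        // path: UTF-8 -> UTF-16 (cbMultiByte = -1 includes the terminator)
        int n = MultiByteToWideChar(CP_UTF8, 0, path, -1, NULL, 0);
        if (n == 0) {
            return NULL;
        }
        wchar_t * wpath = malloc(n * sizeof(wchar_t));
        MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, n);

        // mode: plain ASCII, widened character by character; the allocation
        // must be sized in bytes, not characters (the bug fixed by llama/6272)
        size_t len = strlen(mode);
        wchar_t * wmode = malloc((len + 1) * sizeof(wchar_t));
        for (size_t i = 0; i <= len; ++i) {
            wmode[i] = (wchar_t) mode[i];
        }

        FILE * f = _wfopen(wpath, wmode);
        free(wpath);
        free(wmode);
        return f;
    }
    #endif
    ```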
  24. ggml : support AVX512VNNI (llama/6280)

    This change causes some quants (e.g. Q4_0, Q8_0) to go faster on some
    architectures (e.g. AMD Zen 4).
    jart authored and ggerganov committed Mar 27, 2024
    Commit 4a4ba1b
  25. Commit 39b7c81
  26. tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (llama/6303)

    Co-authored-by: Iwan Kawrakow <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 2c3e572
  27. Commit 1697add
  28. IQ1_M: 1.75 bpw quantization (llama/6302)

    * iq1_m: basics
    
    * iq1_m: basics-2
    
    * iq1_m: CUDA dequantize works
    
    On the very first attempt I get PPL = 9.76 for LLaMA-v2-7B.
    
    * iq1_m: separate shifts for each group of 8 in a block
    
    We get
    PPL(LLaMA-v2-7B ) = 9.2810
    PPL(LLaMA-v2-13B) = 6.8105
    
    Not bad, but slightly higher than
      sqrt(PPL(IQ1_S) * PPL(IQ2_XXS))
    which is the expected outcome given that IQ1_M is
    halfway between IQ1_S and IQ2_XXS in terms of bpw.
    From this, we would expect
     PPL = 9.14 for LLaMA-v2-7B
     PPL = 6.63 for LLaMA-v2-13B
    
    * iq1_m: go to 3-bit scales
    
    There is a slight increase in PPL, but the 0.0625 bpw reduction
    in size is totally worth it.
    
    We now have
    PPL(LLaMA-v2-7B ) = 9.4469 at 1.96 bpw
    PPL(LLaMA-v2-13B) = 6.8717 at 1.93 bpw
    PPL(LLaMA-v2-70B) = 4.8568 at 1.85 bpw
    
    * iq1_m: scalar dot product
    
    * iq1_m: AVX2 dot product
    
    * iq1_m: very slightly faster AVX2 dot product
    
    * iq1_m: ARM_NEON dot product
    
    Works, but very slow (10.5 t/s)
    
    * iq1_m: Metal - dequantize works, dot product does not
    
    * iq1_m: Metal now works
    
    About the same performance as iq1_s.
    
    * iq1_m: minor
    
    * iq1_m: checking pure iq1_m quantization
    
    It is pretty bad: PPL(LLaMA-v2-7B) = 34 if we quantize output.weight
    with Q4_K.
    
    * iq1_m: slightly faster ARM_NEON dot product
    
    10.5 t/s -> 11.65 t/s
    
    * iq1_m: faster ARM_NEON dot product
    
    11.65 t/s -> 14.9 t/s
    
    * iq1_m: another minor ARM_NEON dot product improvement
    
    14.9 -> 15.0 t/s
    
    * iq1_m: small PPL improvement via super-block scale adjustment
    
    After quantizing the block scales, redo the super-block scale fit.
    
    PPL(LLaMA-v2-7B ) = 9.3346
    PPL(LLaMA-v2-13B) = 6.8419
    PPL(LLaMA-v2-70B) = 4.8294
    PPL(Mistral-7B  ) = 8.1624
    
    * iq1_m: adapt to CUDA refactoring
    
    * iq1_m: remove unused variable
    
    We have progressed to warnings being errors.
    
    * iq1_m: add to backend-ops tests
    
    * iq1_m: fix Windows ARM
    
    * iq1_m: use common definition of iq1m_scale_t
    
    * cuda: assert -> NO_DEVICE_CODE
    
    * iq1_M: PR comments
    
    ---------
    
    Co-authored-by: Iwan Kawrakow <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 847bedc
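    One way to read the "expected outcome" estimate above: if log(PPL) is
    assumed to vary roughly linearly with bits per weight, then a quant sitting
    halfway between IQ1_S and IQ2_XXS in bpw should land near the geometric
    mean of their perplexities.

    ```latex
    % assumption behind the estimate: log(PPL) is roughly linear in bpw
    \log \mathrm{PPL}(\mathrm{IQ1\_M}) \approx
        \tfrac{1}{2}\left(\log \mathrm{PPL}(\mathrm{IQ1\_S}) + \log \mathrm{PPL}(\mathrm{IQ2\_XXS})\right)
    \quad\Longrightarrow\quad
    \mathrm{PPL}(\mathrm{IQ1\_M}) \approx
        \sqrt{\mathrm{PPL}(\mathrm{IQ1\_S}) \cdot \mathrm{PPL}(\mathrm{IQ2\_XXS})}
    ```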
  29. llama : greatly reduce output buffer memory usage (llama/6122)

    * llama : greatly reduce logits memory usage
    
    * llama : more compact state saving and reloading
    
    * llama : fix lctx.n_outputs not being set before building graph
    
    * perplexity : adapt to the logits API changes
    
    * perplexity : fix Winogrande, use correct logits for second choice start
    
    The first logits used to evaluate the second choice were not from
    the end of the common prefix; instead, they were the logits from the end
    of the first choice. This has been corrected.
    
    The previous implementation sometimes had outliers in the scores of
    choices for some tasks, and the logic to skip choice words
    in the log-likelihood evaluation was probably an attempt to reduce those,
    but it was complex and didn't quite seem to be the right thing.
    
    This is simpler now, and the outlier scores aren't there anymore.
    
    * perplexity : normalize spaces and punctuation in Winogrande sentences
    
    * llama : fix embedding conditions
    
    * llama : fix llama_get_embeddings_ith when the resulting id is 0
    
    * llama : fix wrong n_outputs in llama_set_inputs
    
    A mismatch happened when using a smaller n_ubatch than n_batch and then using
    llama_batch_get_one(). The decision of what n_outputs should be now almost
    fully depends on how lctx.n_outputs is set in llama_decode_internal.
    The conditions are simpler this way.
    
    * llama : when saving the state, recalculate n_outputs
    
    This ensures the correct number of outputs for the entire previous batch
    is stored in the session file, even when n_ubatch is smaller than n_batch.
    
    * llama : fix not-skipping outputs of non-causal models
    
    * llama : fix running a batch with n_outputs == 0
    
    It previously worked because lctx.inp_out_ids was not initialized,
    so it pointed to some garbage address which was somehow still valid when I
    ran my tests.
    
    * llama : keep same graph topology even when n_outputs == 0
    
    * ggml : saner ggml_can_repeat with empty tensors
    
    *  ggml : future-proof ggml_is_empty by using GGML_MAX_DIMS - 1
    
    * ggml : do not multi-thread ops returning empty tensors
    
    * ggml : make ggml_is_empty public and work with views
    
    * llama : use a vector for ctx->output_ids
    
    * llama : rework reallocation logic for llama_output_reserve
    
    Now comparing the actual size with the new total size of the output buffer
    to allow more efficient enabling and disabling of the embeddings
    and/or logits output in the future.
    
    * ggml : skip empty tensors in all backends
    
    * llama : fix llama_output_reserve nullptr deref when new_size is 0
    
    * perplexity : make Winogrande work as it does on master
    
    The problems with the Winogrande implementation will
    need to be fixed in a separate PR to ease review.
    
    * llama : clearer error messages for invalid logits or embeddings ids
    
    * llama : assert all models that can have inp_out_ids
    
    Since the graph topology is now constant, this presence check
    can be done even when there are no outputs.
    
    * llama : assert logits and embd buffers exist before writing to them
    
    * llama : handle errors from llama_output_reserve at call sites
    
    * perplexity : make hellaswag and multiple-choice outputs identical to master
    
    Due to how the KV cache is updated, the logprobs for tokens in a batch
    are very slightly affected by the other tokens present in the batch,
    so to make hellaswag and multiple-choice return exactly the same results
    as on master, the last token of each sequence needs to be evaluated
    even though its output is not used at all.
    
    This will probably be changed back in the future to make these benchmarks
    a tiny bit faster.
    
    * perplexity : fix division by zero when using less than 100 multiple-choice tasks
    
    * llama : allow loading state saved with a different ctx size
    
    When loading a session file, the context size is now only required to be
    at least enough to load the KV cells contained in that session file,
    instead of requiring to use exactly the same context size as when saving.
    
    Doing this enables the use-case of extending or shrinking the context size
    of a saved session.
    
    This breaks existing session files because the meaning of kv_buf_size
    is slightly changed (previously it was the size of the whole KV cache,
    now it's only the size of the saved part of it). This allows for
    finer-grained sanity checks when loading in an effort to keep kv_buf_size
    useful even when the kv_size is changed.
    
    * llama : minor
    
    ggml-ci
    
    * readme : update recent API changes, and warn about Vulkan
    
    ---------
    
    Co-authored-by: Georgi Gerganov <[email protected]>
    compilade and ggerganov committed Mar 27, 2024
    Commit fe968c1
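    A hedged sketch of the output selection this commit optimizes for: only
    tokens whose per-token logits flag is set in the batch receive an output
    slot, and their logits are read back by batch index. The helper name is
    made up and error handling is trimmed; the calls follow the llama.cpp
    batch and logits API of this period.

    ```c
    #include "llama.h"

    static void decode_last_logits_only(struct llama_context * ctx,
                                        const llama_token * tokens, int32_t n_tokens) {
        struct llama_batch batch = llama_batch_init(n_tokens, /*embd =*/ 0, /*n_seq_max =*/ 1);

        for (int32_t i = 0; i < n_tokens; ++i) {
            batch.token   [i]    = tokens[i];
            batch.pos     [i]    = i;
            batch.n_seq_id[i]    = 1;
            batch.seq_id  [i][0] = 0;
            batch.logits  [i]    = (i == n_tokens - 1);  // output only the last token
        }
        batch.n_tokens = n_tokens;

        if (llama_decode(ctx, batch) == 0) {
            // index by the token's position in the batch
            const float * logits = llama_get_logits_ith(ctx, n_tokens - 1);
            (void) logits;  // sample / inspect here
        }

        llama_batch_free(batch);
    }
    ```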
  30. Make IQ1_M work for QK_K = 64 (llama/6327)

    * iq1_m: make it work for QK_K = 64 (WIP)
    
    * iq1_m: make it work for QK_K = 64 (scalar and AVX2)
    
    * iq1_m: QK_K = 64 seems to work on Metal and ARM_NEON
    
    ---------
    
    Co-authored-by: Iwan Kawrakow <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 0c19983
  31. Fix batched impl for NVidia GPU (llama/6164)

    * Fix batched impl
    
    * Maintain previous behaviour for igpu
    
    * retrigger CI
    
    ---------
    
    Co-authored-by: Abhilash Majumder <[email protected]>
    2 people authored and ggerganov committed Mar 27, 2024
    Commit 348f094
  32. sync : llama.cpp

    ggml-ci
    ggerganov committed Mar 27, 2024
    Commit 6937369
  33. sync : adapt to CUDA changes (#0)

    ggml-ci
    ggerganov committed Mar 27, 2024
    Commit a4ce4c3
  34. examples : fix CUBLAS leftovers (#0)

    ggml-ci
    ggerganov committed Mar 27, 2024
    Commit fe3158e
  35. Commit 11d2033