'ggml-common.h' file not found when running as shared library and using metal #5977
Comments
Can you try to build a `default.metallib` and use that instead:

```shell
xcrun -sdk macosx metal -O3 -c ggml-metal.metal -o ggml-metal.air
xcrun -sdk macosx metallib ggml-metal.air -o default.metallib
```
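A hedged sketch of how the resulting `default.metallib` can be picked up at runtime, assuming `GGML_METAL_PATH_RESOURCES` (the variable visible in the `ggml_metal_init` log below) points at the directory containing it; the directory name here is a placeholder:

```shell
# Placeholder directory for the prebuilt Metal library
# (assumption: adjust to wherever you copied default.metallib).
RESOURCE_DIR="$HOME/llama-metallib"
mkdir -p "$RESOURCE_DIR"
touch "$RESOURCE_DIR/default.metallib"   # stand-in for the real metallib

# ggml's Metal backend consults GGML_METAL_PATH_RESOURCES when
# looking for default.metallib, as seen in the init log.
if [ -f "$RESOURCE_DIR/default.metallib" ]; then
  export GGML_METAL_PATH_RESOURCES="$RESOURCE_DIR"
fi
echo "GGML_METAL_PATH_RESOURCES=$GGML_METAL_PATH_RESOURCES"
```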
Can confirm building and using the .metallib works.
I think it may be a good idea to build a …
FWIW - with the following flake:

```nix
{
  description = "A basic flake with a shell";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";
  inputs.llama-cpp = {
    url = "github:ggerganov/llama.cpp";
    inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = { nixpkgs, flake-utils, llama-cpp, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        overlays = [ llama-cpp.overlays.default ];
        pkgs = import nixpkgs {
          inherit system overlays;
        };
      in
      {
        devShells.default = pkgs.mkShell {
          packages = [ pkgs.llamaPackages.llama-cpp ];
        };
      });
}
```

I get:

```
ggml_metal_init: found device: Apple M1 Max
ggml_metal_init: picking default device: Apple M1 Max
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/nix/store/i8agpfz2c7xrs8m2fdpr50i9jjsfgjq0-llama-cpp-metalkit-0.0.0/bin/ggml-metal.metal'
ggml_metal_init: error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:4:10: fatal error: 'ggml-common.h' file not found
#include "ggml-common.h"
         ^~~~~~~~~~~~~~~
" UserInfo={NSLocalizedDescription=program_source:4:10: fatal error: 'ggml-common.h' file not found
#include "ggml-common.h"
         ^~~~~~~~~~~~~~~
}
```

This can be fixed by dropping ggml-common.h into the store location and running the commands above there (in my case …). Presumably @giladgd's suggestion would go towards fixing this.
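The workaround described above can be sketched as a small script; the paths here are placeholders, and the `xcrun` steps (the same commands given earlier in the thread) only run where the macOS Metal toolchain is available:

```shell
# Placeholder for the store-style bin directory that contains
# ggml-metal.metal (assumption: substitute your real /nix/store/.../bin path).
STORE_BIN="$PWD/store-bin"
mkdir -p "$STORE_BIN"
touch "$STORE_BIN/ggml-metal.metal"   # stand-in for the real shader source

# Drop ggml-common.h next to ggml-metal.metal so the #include resolves.
touch ggml-common.h                   # stand-in; use the real header
cp ggml-common.h "$STORE_BIN/"

# Build the metallib in place (requires the macOS Metal toolchain):
cd "$STORE_BIN"
if command -v xcrun >/dev/null 2>&1; then
  xcrun -sdk macosx metal -O3 -c ggml-metal.metal -o ggml-metal.air
  xcrun -sdk macosx metallib ggml-metal.air -o default.metallib
fi
```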
Can you guys give #6015 a try and report any problems that you encounter with it?
Please include information about your system, the steps to reproduce the bug, and the version of llama.cpp that you are using. If possible, please provide a minimal code example that reproduces the bug.
If the bug concerns the server, please try to reproduce it first using the server test scenario framework.
System: M2 Pro Sonoma v14.3.1
Building on master branch
Steps to reproduce:
Error message: