
Failed install on Apple silicon #1258

Open
NinjAiBot opened this issue Nov 7, 2023 · 8 comments
Labels
area/build bug Something isn't working os/macOS

@NinjAiBot

LocalAI version:
Most recent as of this report

Environment, CPU architecture, OS, and Version:
[Screenshot: Screenshot 2023-11-07 at 23 05 10]

Describe the bug
Running the installer from the official documentation on macOS ARM64 fails at this step:

cd llama.cpp && mkdir -p build && cd build && cmake .. -DLLAMA_METAL=OFF && cmake --build . --config Release

To Reproduce
Follow the installation steps on an M2 Max MacBook Pro.

Expected behavior
A successful install

Logs

Full error:

-- The C compiler identification is AppleClang 15.0.0.15000040
-- The CXX compiler identification is AppleClang 15.0.0.15000040
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Accelerate framework found
-- CMAKE_SYSTEM_PROCESSOR: arm64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
CMake Error at examples/grpc-server/CMakeLists.txt:13 (find_package):
  Could not find a package configuration file provided by "Protobuf" with any
  of the following names:

    ProtobufConfig.cmake
    protobuf-config.cmake

  Add the installation prefix of "Protobuf" to CMAKE_PREFIX_PATH or set
  "Protobuf_DIR" to a directory containing one of the above files.  If
  "Protobuf" provides a separate development package or SDK, be sure it has
  been installed.


-- Configuring incomplete, errors occurred!
make[1]: *** [grpc-server] Error 1
make: *** [backend/cpp/llama/grpc-server] Error 2
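The `find_package(Protobuf)` failure above usually means Protobuf's CMake config files aren't on CMake's search path. A minimal pre-flight check might look like this (a sketch only; the Homebrew commands in the message are assumptions for a Homebrew-based install):

```shell
# Sketch: verify protoc is visible before configuring llama.cpp's grpc-server backend
if command -v protoc >/dev/null 2>&1; then
  echo "protoc found: $(command -v protoc)"
else
  echo "protoc not found; on Homebrew, try: brew install protobuf && brew link protobuf"
fi
```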
@NinjAiBot NinjAiBot added the bug Something isn't working label Nov 7, 2023
@Aisuko
Collaborator

Aisuko commented Nov 7, 2023

Please check this comment #1197 (comment)

@renzo4web

I was able to solve it by linking protoc again.

brew link protobuf
make clean
make BUILD_TYPE=metal build
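
One way to confirm the relink made Protobuf's CMake config visible (a sketch; `/opt/homebrew/opt/protobuf` is the default Apple-silicon Homebrew path and an assumption here):

```shell
# Sketch: look for protobuf's CMake config files after `brew link protobuf`
prefix=$(brew --prefix protobuf 2>/dev/null || echo /opt/homebrew/opt/protobuf)
ls "$prefix"/lib/cmake/protobuf/ 2>/dev/null || echo "no CMake config under $prefix; relink protobuf and retry"
```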

@NinjAiBot
Author

make BUILD_TYPE=metal build

Thanks, mate. The original issue was solved by relinking protobuf, but then the build failed at another point for me after following the steps in #1197

Still haven't managed a successful install, though, as I described in detail here

Always ends up like this:

[Screenshot: Screenshot 2023-11-09 at 10 52 34]

@Aisuko
Collaborator

Aisuko commented Nov 20, 2023

Did the build succeed? It is OK to get warnings during the build process.

@NinjAiBot
Copy link
Author

Did the build succeed? It is OK to get warnings during the build process.

No.
I've never managed to get the build past this point; it always stops here and never progresses.

@vhscom

vhscom commented Nov 27, 2023

Built with make and saw the same error the OP saw, which I worked around with:

BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server

I then reran the build target to complete compilation on an M2 chip running Fedora Asahi Remix.

@skehlet

skehlet commented Jan 22, 2024

@NinjAiBot my output looks just like yours, and it's working for me—I just followed the next steps in Example: Build on mac to download ggml-gpt4all-j.bin and ask it how it was. Try it! Thanks @renzo4web for the brew link protobuf step which fixed the build for me.

@NinjAiBot
Author

I just used LM Studio instead. It was the easiest way to spin up a server to chat with a model, which is what I needed to do.

6 participants