Core dumped after loading llama.cpp built with 'cmake' / building with 'make' works fine / CUBLAS enabled #1982
Comments
The make build detects your CPU features; with cmake you have to configure them yourself. It builds for AVX2 by default, and since your CPU doesn't support that, you need to disable it.
Could you provide me with details on how to disable AVX2 when configuring cmake? This would also be helpful for others. Thanks a bunch! :)
Add `-DLLAMA_AVX2=OFF`.
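In full, the configure step might look like this (a sketch; `LLAMA_AVX2` is the llama.cpp CMake option in question, with cuBLAS kept enabled):

```sh
mkdir build && cd build
# Disable AVX2 code generation while keeping cuBLAS support
cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_AVX2=OFF
cmake --build . --config Release
```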
You're the fastest responder I've ever encountered. :) Unfortunately this didn't fix the issue completely, but when starting main it's at least printing two lines before dumping again.
What else is my CPU not supporting?
It's probably F16C.
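Disabling that would look like the following (a sketch; `LLAMA_F16C` is the corresponding llama.cpp CMake option):

```sh
cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_AVX2=OFF -DLLAMA_F16C=OFF
```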
Unfortunately this doesn't change anything. It's really quite an annoying problem. Is there a way to figure out what make is using for the CPU architecture?
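One general way to see which of these instruction sets the CPU actually supports on Linux (not from this thread, but useful for narrowing this down) is to inspect the kernel's reported CPU flags:

```sh
# List the relevant SIMD flags advertised by the first CPU core
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(avx|avx2|f16c|fma)$'
```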
Does setting `-DLLAMA_NATIVE=on` help? That should pass `-march=native`, which is the same thing the makefile does.
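For reference, that would be (a sketch, with cuBLAS kept on):

```sh
cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_NATIVE=ON
```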
I tried that, and I thought about it earlier too, but it also wasn't the solution. However, I actually and finally got it, guys, thanks to you! :) The last missing piece was FMA. So the following command builds llama.cpp with cmake successfully, with cuBLAS activated (following the instructions on the main project page, just for clarity):
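The invocation is presumably along these lines, combining the flags identified in this thread (a reconstruction, not the verbatim command):

```sh
mkdir build
cd build
# Enable cuBLAS, disable the instruction sets this CPU lacks
cmake .. -DLLAMA_CUBLAS=ON -DLLAMA_AVX2=OFF -DLLAMA_F16C=OFF -DLLAMA_FMA=OFF
cmake --build . --config Release
```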
This now also allows building llama-cpp-python successfully.
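For llama-cpp-python, that presumably means passing the same flags through pip via CMAKE_ARGS (a sketch following llama-cpp-python's FORCE_CMAKE install pattern; the specific flags are inferred from this thread):

```sh
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_AVX2=OFF -DLLAMA_F16C=OFF -DLLAMA_FMA=OFF" \
  FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir llama-cpp-python
```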
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
I was expecting that llama.cpp would work when built either with make or with cmake and CUBLAS.

Current Behavior
Building with either make or cmake and CUBLAS works. Executing the main binary does not work when built with cmake, though.

Environment and Context
Failure Information (for bugs)
Illegal instruction (core dumped)
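A general way to confirm that an unsupported instruction is the cause (a debugging sketch, not part of the original report) is to run the binary under gdb and inspect the faulting instruction:

```sh
gdb -q ./main
# (gdb) run
# Program received signal SIGILL, Illegal instruction.
# (gdb) x/i $pc    # prints the offending opcode, e.g. an FMA (vfmadd...) or AVX2 instruction
```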
This also affects installing llama-cpp-python using pip (with FORCE_CMAKE=1) and then trying to use it, or even just importing the module into Python. See the issue I created earlier here: abetlen/llama-cpp-python#412

Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
What is working: