
Commit 2d3c351

Jeximo authored and slaren committed
Tidy Android Instructions README.md (ggml-org#7016)
* Tidy Android Instructions README.md
  Remove CLBlast instructions (outdated), added OpenBlas.
* don't assume git is installed
  Added apt install git, so that git clone works
* removed OpenBlas
  Linked to Linux build instructions
* fix typo
  Remove word "run"
* correct style
  Co-authored-by: slaren <[email protected]>
* correct grammar
  Co-authored-by: slaren <[email protected]>
* delete reference to Android API
* remove Fdroid reference, link directly to Termux
  Fdroid is not required
  Co-authored-by: slaren <[email protected]>
* Update README.md
  Co-authored-by: slaren <[email protected]>

---------

Co-authored-by: slaren <[email protected]>
1 parent b152b42 commit 2d3c351

File tree

1 file changed (+8, -36 lines)


README.md

Lines changed: 8 additions & 36 deletions
````diff
@@ -977,48 +977,20 @@ Here is a demo of an interactive session running on Pixel 5 phone:
 
 https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b050-55b0b3b9274c.mp4
 
-#### Building the Project using Termux (F-Droid)
-Termux from F-Droid offers an alternative route to execute the project on an Android device. This method empowers you to construct the project right from within the terminal, negating the requirement for a rooted device or SD Card.
-
-Outlined below are the directives for installing the project using OpenBLAS and CLBlast. This combination is specifically designed to deliver peak performance on recent devices that feature a GPU.
-
-If you opt to utilize OpenBLAS, you'll need to install the corresponding package.
+#### Build on Android using Termux
+[Termux](https://github.com/termux/termux-app#installation) is an alternative to execute `llama.cpp` on an Android device (no root required).
 ```
-apt install libopenblas
+apt update && apt upgrade -y
+apt install git
 ```
 
-Subsequently, if you decide to incorporate CLBlast, you'll first need to install the requisite OpenCL packages:
+It's recommended to move your model inside the `~/` directory for best performance:
 ```
-apt install ocl-icd opencl-headers opencl-clhpp clinfo
-```
-
-In order to compile CLBlast, you'll need to first clone the respective Git repository, which can be found at this URL: https://github.com/CNugteren/CLBlast. Alongside this, clone this repository into your home directory. Once this is done, navigate to the CLBlast folder and execute the commands detailed below:
-```
-cmake .
-make
-cp libclblast.so* $PREFIX/lib
-cp ./include/clblast.h ../llama.cpp
-```
-
-Following the previous steps, navigate to the LlamaCpp directory. To compile it with OpenBLAS and CLBlast, execute the command provided below:
+cd storage/downloads
+mv model.gguf ~/
 ```
-cp /data/data/com.termux/files/usr/include/openblas/cblas.h .
-cp /data/data/com.termux/files/usr/include/openblas/openblas_config.h .
-make LLAMA_CLBLAST=1 //(sometimes you need to run this command twice)
-```
-
-Upon completion of the aforementioned steps, you will have successfully compiled the project. To run it using CLBlast, a slight adjustment is required: a command must be issued to direct the operations towards your device's physical GPU, rather than the virtual one. The necessary command is detailed below:
-```
-GGML_OPENCL_PLATFORM=0
-GGML_OPENCL_DEVICE=0
-export LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH
-```
-
-(Note: some Android devices, like the Zenfone 8, need the following command instead - "export LD_LIBRARY_PATH=/system/vendor/lib64:$LD_LIBRARY_PATH". Source: https://www.reddit.com/r/termux/comments/kc3ynp/opencl_working_in_termux_more_in_comments/ )
-
-For easy and swift re-execution, consider documenting this final part in a .sh script file. This will enable you to rerun the process with minimal hassle.
 
-Place your desired model into the `~/llama.cpp/models/` directory and execute the `./main (...)` script.
+[Follow the Linux build instructions](https://github.com/ggerganov/llama.cpp#build) to build `llama.cpp`.
 
 ### Docker
````
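After this change, the README's Android path reduces to a handful of steps. A possible end-to-end Termux session might look like the sketch below; note that the `termux-setup-storage` call, the clone URL, and the `make` invocation are assumptions drawn from Termux conventions and the linked Linux build instructions, not from this diff itself, and `model.gguf` stands in for whatever model file you downloaded:

```shell
# Provision Termux (no root required).
apt update && apt upgrade -y
apt install git

# One-time: let Termux access shared storage (creates ~/storage).
# This step is an assumption; it is not part of the diff above.
termux-setup-storage

# Move the model out of shared storage into $HOME for better I/O.
cd storage/downloads
mv model.gguf ~/

# Fetch and build llama.cpp per the linked Linux build instructions
# (assuming the make-based build those instructions described).
cd ~
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```

Keeping the model under `~/` matters because Termux mounts shared storage (the `~/storage` symlinks) through a slower filesystem layer, which is why the README recommends the `mv` step.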
