Fix issues in README based on feedback (#15)

* Update README.md

---------

Co-authored-by: Jeremy Song <76689794+YixinSong-e@users.noreply.github.com>
Holden X 2023-12-19 17:09:33 +08:00 committed by GitHub
parent e3b4b85caa
commit ded0613bd4


@@ -5,7 +5,7 @@ PowerInfer is a CPU/GPU LLM inference engine leveraging **activation locality**
## Demo 🔥
-https://github.com/SJTU-IPADS/PowerInfer/assets/34213478/d26ae05b-d0cf-40b6-8788-bda3fe447e28
+https://github.com/SJTU-IPADS/PowerInfer/assets/34213478/fe441a42-5fce-448b-a3e5-ea4abb43ba23
PowerInfer vs. llama.cpp on a single RTX 4090 (24G) running Falcon(ReLU)-40B-FP16 with an 11x speedup!
@@ -75,8 +75,8 @@ cd PowerInfer
### Build
To build PowerInfer, you have two options. These commands should be run from the root directory of the project.
-Using `CMake` on Linux or macOS:
-* If you have one GPU:
+Using `CMake` (3.13+) on Linux or macOS:
+* If you have an NVIDIA GPU:
```bash
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
```
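For the second option (a CPU-only build), the flow presumably mirrors the GPU build without the cuBLAS flag; a minimal sketch under that assumption, not taken from this commit:

```bash
# Hypothetical CPU-only build, assuming PowerInfer keeps the standard
# llama.cpp-style CMake flow when no GPU backend flag is given:
cmake -S . -B build
cmake --build build --config Release
```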
@@ -109,14 +109,7 @@ If you want to limit the VRAM usage of GPU:
```bash
./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --vram-budget $vram_gb
```
-As for now, it requires an offline-generated "GPU index" file to split FFNs on GPU. If you want to try it, please use the following instructions to generate the GPU index file:
-```bash
-python scripts/export-gpu-split.py $activation_count_path $output_idx_path solver
-```
-Then, you can use the following instructions to run PowerInfer with GPU index:
-```bash
-./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --gpu-index $split_path
-```
+For now, this requires an offline-generated "GPU index" file to decide how to split FFNs between GPU and CPU. We have found these files hard to maintain and distribute, so we will ship automatic FFN splitting based on VRAM capacity via [#11](https://github.com/SJTU-IPADS/PowerInfer/pull/11) very soon.
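To make the `--vram-budget` invocation above concrete, here is a hedged example; the model path and all parameter values are hypothetical placeholders, not taken from the repository:

```bash
# Hypothetical run: generate 128 tokens on 8 threads while capping
# GPU memory use at 8 GB. Replace the model path with your own GGUF file.
./build/bin/main -m ./models/falcon-40b-relu.powerinfer.gguf \
  -n 128 -t 8 -p "Once upon a time" --vram-budget 8
```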
## Evaluation ## Evaluation
@@ -131,6 +124,8 @@ We will release the code and data in the following order; please stay tuned!
- [x] Release core code of PowerInfer, supporting Llama-2, Falcon-40B.
- [ ] Support Mistral-7B
+- [ ] Support Windows
+- [ ] Support text-generation-webui
- [ ] Release perplexity evaluation code
- [ ] Support Metal for Mac
- [ ] Release code for OPT models