Readme reorg (#12)

* add TLDR and hw support

* enrich features section

* update model weights

* minor on README commands

* minor on features

* Update README.md

---------

Co-authored-by: Holden <34213478+hodlen@users.noreply.github.com>
This commit is contained in:
Jeremy Song 2023-12-19 07:48:31 +08:00 committed by GitHub
parent 603c771974
commit 53268cbb52


@@ -1,5 +1,7 @@
# PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
---
## TL;DR
PowerInfer is a CPU/GPU hybrid LLM inference engine that leverages **activation locality** to run large models fast on your own device.
## Demo 🔥
@@ -9,12 +11,12 @@ PowerInfer v.s. llama.cpp on a single RTX 4090(24G) running Falcon(ReLU)-40B-FP1
<sub>Both PowerInfer and llama.cpp were running on the same hardware and fully utilized VRAM on RTX 4090.</sub>
---
## Abstract
We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC)
equipped with a single consumer-grade GPU. The key insight underlying the design of PowerInfer is exploiting the high **locality**
inherent in LLM inference, characterized by a power-law distribution in neuron activation.
This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated
across inputs, while the majority, cold neurons, vary based on specific inputs.
PowerInfer exploits this insight to design a GPU-CPU hybrid inference engine:
@@ -22,36 +24,37 @@ hot-activated neurons are preloaded onto the GPU for fast access, while cold-act
on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers.
PowerInfer further integrates adaptive predictors and neuron-aware sparse operators,
optimizing the efficiency of neuron activation and computational sparsity.
Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU,
only 18% lower than that achieved by a top-tier server-grade A100 GPU.
This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
## Features
PowerInfer is a high-speed and easy-to-use inference engine for deploying LLMs locally.

PowerInfer is fast with:

- **Locality-centric design**: Utilizes sparse activation and the 'hot'/'cold' neuron concept for efficient LLM inference, ensuring high speed with lower resource demands.
- **Hybrid CPU/GPU Utilization**: Seamlessly integrates the memory and computation capabilities of CPU and GPU for a balanced workload and faster processing.

PowerInfer is flexible and easy to use with:

- **Easy Integration**: Compatible with popular [ReLU-sparse models](https://huggingface.co/SparseLLM) that are as accurate as their dense counterparts.
- **Local Deployment Ease**: Designed and deeply optimized for local deployment on consumer-grade hardware, enabling low-latency LLM inference and serving on a single GPU.
- **Backward Compatibility**: While distinct from llama.cpp, you can use most of `examples/` the same way as in llama.cpp, such as the server and batched generation. PowerInfer also supports inference with llama.cpp's model weights for compatibility purposes, but there will be no performance gain.
You can use these models with PowerInfer today:

- Falcon-40B
- Llama2 family

We have tested PowerInfer on the following platforms:

- x86-64 CPU (with AVX2 instructions) on Linux
- x86-64 CPU and NVIDIA GPU on Linux
- Apple M-series chips on macOS (since we have not optimized for Mac, the performance improvement is not significant for now)
## Getting Started
- [Installation](#setup--installation)
@@ -67,7 +70,7 @@ cd PowerInfer
### Build
In order to build PowerInfer you have two different options. These commands should be run from the root directory of the project.

Using `make` on Linux or macOS:
```bash
make
```
@@ -85,31 +88,34 @@ cmake --build build --config Release
```
## Model Weights

PowerInfer models are stored in a special format called *PowerInfer GGUF*, based on the GGUF format, which contains both the LLM weights and the predictor weights. You can download PowerInfer GGUF weights from Hugging Face or convert them from the original model weights and predictor weights.

| Base Model | PowerInfer GGUF Format | Original Model | Predictor |
|------------|------------------------|----------------|-----------|
| LLaMA(ReLU)-2-7B | [PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) | [PowerInfer/ReluLLaMA-7B-Predictor](https://huggingface.co/PowerInfer/ReluLLaMA-7B-Predictor) |
| LLaMA(ReLU)-2-13B | [PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-13B](https://huggingface.co/SparseLLM/ReluLLaMA-13B) | [PowerInfer/ReluLLaMA-13B-Predictor](https://huggingface.co/PowerInfer/ReluLLaMA-13B-Predictor) |
| Falcon(ReLU)-40B | [PowerInfer/ReluFalcon-40B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluFalcon-40B-PowerInfer-GGUF) | [SparseLLM/ReluFalcon-40B](https://huggingface.co/SparseLLM/ReluFalcon-40B) | [PowerInfer/ReluFalcon-40B-Predictor](https://huggingface.co/PowerInfer/ReluFalcon-40B-Predictor) |
| LLaMA(ReLU)-2-70B | [PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF](https://huggingface.co/PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF) | [SparseLLM/ReluLLaMA-70B](https://huggingface.co/SparseLLM/ReluLLaMA-70B) | [PowerInfer/ReluLLaMA-70B-Predictor](https://huggingface.co/PowerInfer/ReluLLaMA-70B-Predictor) |
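For example, one way to fetch the 7B weights above is with `huggingface-cli`. This is a sketch, assuming `huggingface_hub` is installed (`pip install huggingface_hub`) and you have enough disk space; the local directory name is an arbitrary choice, not a PowerInfer convention:

```shell
# Sketch: download a PowerInfer GGUF repo from Hugging Face.
repo=PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF
dir="./${repo#PowerInfer/}"   # strip the org prefix for a local directory name
if command -v huggingface-cli >/dev/null 2>&1; then
  # Fetches the whole repo into $dir (skipped if the CLI is not installed).
  huggingface-cli download "$repo" --local-dir "$dir"
fi
```

The downloaded GGUF file can then be passed to `-m` in the inference commands below.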
## Inference

For CPU-only inference, or CPU-GPU hybrid inference using all available VRAM, run PowerInfer as follows:
```bash
./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt
```
If you want to limit the GPU's VRAM usage:
```bash
./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --vram-budget $vram_gb
```
For now, this requires an offline-generated "GPU index" file that determines how FFN neurons are split between GPU and CPU. To try it, first generate the GPU index file:
```bash
python scripts/export-gpu-split.py $activation_count_path $output_idx_path solver
```
Then run PowerInfer with the GPU index:
```bash
./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --gpu-index $split_path
```
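Putting the variables together, a filled-in invocation might look like the sketch below. The model path, prompt, token count, and 8 GB VRAM budget are all placeholders of ours, not shipped defaults, and using `nproc` for the thread count is a common heuristic rather than a PowerInfer requirement:

```shell
# Sketch: a concrete hybrid-inference invocation with placeholder values.
thread_num=$(nproc)    # one thread per core is a reasonable starting point
model=/PATH/TO/MODEL   # placeholder; substitute your PowerInfer GGUF path
if [ -x ./build/bin/main ]; then
  # Generate up to 128 tokens, capping GPU usage at roughly 8 GB of VRAM.
  ./build/bin/main -m "$model" -n 128 -t "$thread_num" \
    -p "Once upon a time" --vram-budget 8
fi
```

Dropping `--vram-budget` lets PowerInfer use all available VRAM, as in the first command above.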
## Evaluation