PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU


Demo 🔥

https://github.com/hodlen/PowerInfer/assets/34213478/b782ccc8-0a2a-42b6-a6aa-07b2224a66f7

The demo runs on a single 24 GB RTX 4090 GPU; the model is Falcon(ReLU)-40B at FP16 precision.


Abstract

We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key insight underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation. This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits this insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity. Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
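The power-law claim above can be made concrete with a toy sketch. This is illustrative only (synthetic data, not PowerInfer's code): when neuron activation counts follow a power law, a small "hot" subset covers most activations, which is what makes preloading them onto the GPU pay off.

```python
# Illustrative sketch with synthetic activation counts (not PowerInfer code):
# under a power-law distribution, the top 10% of neurons cover most activations.
n = 10_000
# Synthetic power-law-like counts: neuron i activates proportionally to 1/(i+1).
counts = [1.0 / (i + 1) for i in range(n)]
total = sum(counts)

# Treat the top 10% most-activated neurons as "hot" (GPU-resident candidates).
hot = counts[: n // 10]
coverage = sum(hot) / total
print(f"top 10% of neurons cover {coverage:.0%} of activations")
```

With these synthetic counts, roughly three quarters of all activations land in the hot 10%, so keeping only that subset on the GPU captures most of the work.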

Feature

PowerInfer is a fast and easy-to-use inference engine for deploying LLMs locally. Interestingly, we observe that in ReLU LLMs, every neuron is an expert: only a small subset of neurons consistently contributes to the output. PowerInfer is fast with:

  • Exploiting the high locality in LLM inference
  • Neuron-aware hybrid CPU/GPU sparse operator
  • Neuron-granularity offloading
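To illustrate the "neuron-aware sparse operator" idea, here is a minimal sketch (plain Python, not PowerInfer's actual kernel): given an activation predictor's guess of which FFN neurons will fire, only those rows of the weight matrix are computed, and predicted-inactive rows are skipped entirely.

```python
# Illustrative neuron-aware sparse matrix-vector product (not the real kernel):
# compute only the rows the predictor marks as active; ReLU would have zeroed
# the skipped rows anyway, so their outputs stay 0.
def sparse_matvec(weights, x, active):
    """weights: list of rows; x: input vector; active: predicted-active row indices."""
    out = [0.0] * len(weights)
    for i in active:  # rows not in `active` are never touched
        out[i] = sum(w * v for w, v in zip(weights[i], x))
    return out

W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
x = [1.0, 1.0]
# Suppose the predictor says only neurons 0 and 2 will activate.
print(sparse_matvec(W, x, active=[0, 2]))  # row 1 is skipped and stays 0.0
```

The compute cost scales with the number of predicted-active neurons rather than the full layer width, which is where the sparsity speedup comes from.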

PowerInfer is flexible and easy to use with:

  • Integration with popular ReLU-sparse models
  • Low-latency local serving with a single consumer-grade GPU

PowerInfer supports the following models:

  • Falcon-40B model
  • Llama family models

The SparseLLM Team is currently converting the Mistral-7B model to a sparser version. Stay tuned!

Getting Started

Setup & Installation

Get the Code

git clone https://github.com/hodlen/PowerInfer
cd PowerInfer

Build

There are two ways to build PowerInfer. Run these commands from the root directory of the project.

Using make on Linux or macOS:

make

Using CMake:

  • If you have one GPU:
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
  • If you only have a CPU:
cmake -S . -B build
cmake --build build --config Release

Model Weights

Base Model          GGUF Format Link                            Original Model
LLaMA(ReLU)-2-7B    PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF     SparseLLM/ReluLLaMA-7B
LLaMA(ReLU)-2-13B   PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF    SparseLLM/ReluLLaMA-13B
Falcon(ReLU)-40B    PowerInfer/ReluFalcon-40B-PowerInfer-GGUF   SparseLLM/ReluFalcon-40B

Inference

  • If you only have a CPU:
  ./build/bin/main -m /PATH/TO/MODEL -n $(output_token_count) -t $(thread_num) -p $(prompt)
  • If you have a CPU with one GPU:
./build/bin/main -m /PATH/TO/MODEL -n $(output_token_count) -t $(thread_num) -p $(prompt)

For now, PowerInfer requires an offline-generated "GPU index" file to split FFNs between the GPU and CPU. To try it, use the following command to generate the GPU index file:

python scripts/export-gpu-split.py $(activation_count_path) $(output_idx_path) solver

Then, use the following command to run PowerInfer with the GPU index:

./build/bin/main -m /PATH/TO/MODEL -n $(output_token_count) -t $(thread_num) -p $(prompt) --gpu-index $(split_path)
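Conceptually, the solver behind the GPU index chooses which neurons live on the GPU under a VRAM budget. The following is a toy sketch of that idea (illustrative only; the real logic lives in scripts/export-gpu-split.py, and the sizes and counts here are made up): greedily place the most frequently activated neurons on the GPU until the budget is spent.

```python
# Toy greedy split under a VRAM budget (illustrative, not the actual solver):
# the hottest neurons (by offline activation count) go to the GPU first.
def split_neurons(activation_counts, bytes_per_neuron, vram_budget_bytes):
    order = sorted(range(len(activation_counts)),
                   key=lambda i: activation_counts[i], reverse=True)
    gpu, used = [], 0
    for i in order:
        if used + bytes_per_neuron > vram_budget_bytes:
            break  # budget exhausted; remaining neurons stay on the CPU
        gpu.append(i)
        used += bytes_per_neuron
    on_gpu = set(gpu)
    cpu = [i for i in range(len(activation_counts)) if i not in on_gpu]
    return gpu, cpu

counts = [50, 5, 40, 1, 30]  # hypothetical per-neuron activation counts
gpu, cpu = split_neurons(counts, bytes_per_neuron=100, vram_budget_bytes=250)
print(gpu, cpu)  # the two hottest neurons fit on the GPU; the rest stay on the CPU
```

A budget of 250 bytes fits two 100-byte neurons, so the two most-activated neurons (indices 0 and 2) are placed on the GPU and the rest are computed on the CPU.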

Evaluation

[Figure: evaluation results on RTX 4090]

[Figure: evaluation results on RTX 2080 Ti (INT4)]

PowerInfer achieves up to 11x and 8x speedups for FP16 and INT4 models, respectively!

TODOs

We will release the code and data in the following order; please stay tuned!

  • Release core code of PowerInfer, supporting Llama-2, Falcon-40B.
  • Release perplexity evaluation code
  • Support Metal for Mac
  • Release predictor training code
  • Support online split for FFN network
  • Support Multi-GPU

Citation

If you find PowerInfer useful or relevant to your project and research, please kindly cite our paper:

Stay tuned!

Acknowledgement

We are thankful for the easily modifiable operator library ggml and the execution runtime provided by llama.cpp. We also extend our gratitude to THUNLP for their support of ReLU-based sparse models, and we appreciate the research on DejaVu, which inspired PowerInfer.