
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU

TL;DR

PowerInfer is a CPU/GPU LLM inference engine leveraging activation locality for your device.

Demo 🔥

https://github.com/SJTU-IPADS/PowerInfer/assets/34213478/fe441a42-5fce-448b-a3e5-ea4abb43ba23

PowerInfer vs. llama.cpp on a single RTX 4090 (24G) running Falcon(ReLU)-40B-FP16, with an 11x speedup!

Both PowerInfer and llama.cpp were running on the same hardware and fully utilized VRAM on RTX 4090.

Abstract

We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation.

This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity.
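The hot/cold split described above can be illustrated with a toy sketch (hypothetical names and data, not the actual PowerInfer implementation): count how often each neuron activates across sample inputs, then preload the most frequently activated ones onto the GPU up to a capacity budget.

```python
from collections import Counter

def classify_neurons(activation_traces, gpu_budget):
    """Split neurons into 'hot' (preload to GPU) and 'cold' (keep on CPU).

    activation_traces: list of sets, each holding the neuron ids
        activated for one input.
    gpu_budget: maximum number of neurons the GPU can hold.
    """
    freq = Counter()
    for trace in activation_traces:
        freq.update(trace)
    # Hot neurons: the most frequently activated, up to the GPU budget.
    hot = {n for n, _ in freq.most_common(gpu_budget)}
    cold = set(freq) - hot
    return hot, cold

# Skewed (power-law-like) activations: neuron 0 fires on every input,
# the others only occasionally.
traces = [{0, 1}, {0, 2}, {0, 1}, {0, 3}, {0, 1}]
hot, cold = classify_neurons(traces, gpu_budget=2)
```

With these traces, neurons 0 and 1 dominate the activation counts and are classified as hot; the rest stay on the CPU.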

Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
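The stated numbers imply the A100 baseline rate: if the RTX 4090's 13.20 tokens/s average is 18% lower, the A100 average is 13.20 / (1 - 0.18). A quick check:

```python
# Derive the implied A100 token rate from the figures quoted above.
rtx4090_rate = 13.20          # average tokens/s on RTX 4090
relative_gap = 0.18           # "18% lower" than the A100
implied_a100_rate = rtx4090_rate / (1 - relative_gap)  # ~16.1 tokens/s
```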

Features

PowerInfer is a high-speed and easy-to-use inference engine for deploying LLMs locally.

PowerInfer is fast with:

  • Locality-centric design: Utilizes sparse activation and 'hot'/'cold' neuron concept for efficient LLM inference, ensuring high speed with lower resource demands.
  • Hybrid CPU/GPU Utilization: Seamlessly integrates memory/computation capabilities of CPU and GPU for balanced workload and faster processing.

PowerInfer is flexible and easy to use with:

  • Easy Integration: Compatible with popular ReLU-sparse models, which are as accurate as their dense counterparts.
  • Local Deployment Ease: Designed and deeply optimized for local deployment on consumer-grade hardware, enabling low-latency LLM inference and serving on a single GPU.
  • Backward Compatibility: While distinct from llama.cpp, you can use most of the examples/ the same way as in llama.cpp, such as the server and batched generation. PowerInfer also supports inference with llama.cpp's model weights for compatibility purposes, but there will be no performance gain.

You can use these models with PowerInfer today:

  • Falcon-40B
  • Llama2 family

We have tested PowerInfer on the following platforms:

  • x86-64 CPU (with AVX2 instructions) on Linux
  • x86-64 CPU and NVIDIA GPU on Linux
  • Apple M-series chips on macOS (performance gains are currently limited, as we have not yet optimized for macOS)

And new features coming soon:

  • Mistral-7B model
  • Online fine-grained FFN offloading to GPU
  • Metal backend for sparse inference on macOS

Getting Started

Setup and Installation

Get the Code

git clone https://github.com/SJTU-IPADS/PowerInfer
cd PowerInfer
pip install -r requirements.txt # install Python helpers' dependencies

Build

To build PowerInfer you have two options. These commands should be run from the root directory of the project.

Using CMake (3.13+) on Linux or macOS:

  • If you have an NVIDIA GPU:
cmake -S . -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
  • If you have just a CPU:
cmake -S . -B build
cmake --build build --config Release

Model Weights

PowerInfer models are stored in a special format called PowerInfer GGUF, based on the GGUF format, which bundles both the LLM weights and the predictor weights. From each Hugging Face repo listed in the "PowerInfer GGUF Format" column below, you can obtain the PowerInfer GGUF weights (*.powerinfer.gguf) as well as profiled model activation statistics (under activation/) used for 'hot'-neuron offloading. You can also convert them from the original model weights and predictor weights.

Base Model        | PowerInfer GGUF Format                    | Original Model           | Predictor
LLaMA(ReLU)-2-7B  | PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF   | SparseLLM/ReluLLaMA-7B   | PowerInfer/ReluLLaMA-7B-Predictor
LLaMA(ReLU)-2-13B | PowerInfer/ReluLLaMA-13B-PowerInfer-GGUF  | SparseLLM/ReluLLaMA-13B  | PowerInfer/ReluLLaMA-13B-Predictor
Falcon(ReLU)-40B  | PowerInfer/ReluFalcon-40B-PowerInfer-GGUF | SparseLLM/ReluFalcon-40B | PowerInfer/ReluFalcon-40B-Predictor
LLaMA(ReLU)-2-70B | PowerInfer/ReluLLaMA-70B-PowerInfer-GGUF  | SparseLLM/ReluLLaMA-70B  | PowerInfer/ReluLLaMA-70B-Predictor

Inference

For CPU-only and CPU-GPU hybrid inference with all available VRAM, you can use the following instructions to run PowerInfer:

./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt
# ./build/bin/main -m ./ReluFalcon-40B-PowerInfer-GGUF/falcon-40b-relu.q4.powerinfer.gguf -n 128 -t 8 -p "Once upon a time"

If you want to limit GPU VRAM usage:

./build/bin/main -m /PATH/TO/MODEL -n $output_token_count -t $thread_num -p $prompt --vram-budget $vram_gb
# ./build/bin/main -m ./ReluLLaMA-7B-PowerInfer-GGUF/llama-7b-relu.powerinfer.gguf -n 128 -t 8 -p "Once upon a time" --vram-budget 8

Under CPU-GPU hybrid inference, PowerInfer will automatically offload all dense activation blocks to GPU and split FFN on GPU if possible.
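The offloading decision above can be pictured with a toy planner (hypothetical sizes and a greedy policy for illustration only, not PowerInfer's actual solver): dense blocks are placed on the GPU first, then FFN layers are offloaded one by one until the VRAM budget is exhausted.

```python
def plan_offload(dense_bytes, ffn_layer_bytes, vram_budget_bytes):
    """Toy offload planner: dense activation blocks go to the GPU first,
    then FFN layers are assigned greedily while they fit in the budget.

    Returns the set of FFN layer indices placed on the GPU.
    """
    remaining = vram_budget_bytes - dense_bytes
    if remaining < 0:
        raise ValueError("VRAM budget too small for dense blocks")
    gpu_layers = set()
    for i, size in enumerate(ffn_layer_bytes):
        if size <= remaining:
            gpu_layers.add(i)
            remaining -= size
    return gpu_layers

# Example: 1 GiB of dense blocks, four FFN layers, a 3 GiB VRAM budget.
GiB = 1 << 30
plan = plan_offload(1 * GiB, [1 * GiB, 2 * GiB, 1 * GiB, 1 * GiB], 3 * GiB)
```

With this budget, layers 0 and 2 fit on the GPU after the dense blocks are placed; the oversized layer 1 and the overflow layer 3 stay on the CPU.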

Evaluation

[Figure: evaluation results on an RTX 4090]

[Figure: evaluation results on an RTX 2080 Ti (INT4)]

PowerInfer achieves up to 11x and 8x speedup for FP16 and INT4 models!

FAQs

  1. What should I do if I encounter CUDA_ERROR_OUT_OF_MEMORY?
    • You can try running with the --reset-gpu-index argument to rebuild the GPU index for this model and avoid any stale cache.
    • Due to our current implementation, model offloading might not be as accurate as expected. You can try --vram-budget with a slightly lower value, or --disable-gpu-index to disable FFN offloading.
  2. What if...
    • Issues are welcome! Please feel free to open an issue and include your environment and run parameters. We will try our best to help you.

TODOs

We will release the code and data in the following order; please stay tuned!

  • Release core code of PowerInfer, supporting Llama-2 and Falcon-40B.
  • Support Mistral-7B
  • Support Windows
  • Support text-generation-webui
  • Release perplexity evaluation code
  • Support Metal for Mac
  • Release code for OPT models
  • Release predictor training code
  • Support online split for FFN network
  • Support Multi-GPU

Paper and Citation

More technical details can be found in our paper.

If you find PowerInfer useful or relevant to your project and research, please kindly cite our paper:

@techreport{song2023powerinfer,
  author      = {Yixin Song and Zeyu Mi and Haotong Xie and Haibo Chen},
  title       = {PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU},
  institution = {Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University},
  year        = {2023}
}

Acknowledgement

We are grateful for the easily modifiable operator library ggml and the execution runtime provided by llama.cpp. We also extend our gratitude to THUNLP for their support of ReLU-based sparse models, and we appreciate the research of Deja Vu, which inspired PowerInfer.