From c5ae5d08a56b82c17ef8121bc01221924576ad28 Mon Sep 17 00:00:00 2001
From: Trevor White
Date: Tue, 21 Mar 2023 18:34:01 -0400
Subject: [PATCH] Added support for 30B weight. (#108)

---
 README.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/README.md b/README.md
index 4a6e5d85e..a92205472 100644
--- a/README.md
+++ b/README.md
@@ -45,6 +45,20 @@ Once you've downloaded the weights, you can run the following command to enter c
 ./chat -m ggml-alpaca-13b-q4.bin
 ```
 
+## Getting Started (30B)
+
+If you have more than 32GB of RAM (and a beefy CPU), you can use the higher quality 30B `alpaca-30B-ggml.bin` model. To download the weights, you can use
+
+```
+git clone https://huggingface.co/Pi3141/alpaca-30B-ggml
+```
+
+Once you've downloaded the weights, you can run the following command to enter chat
+
+```
+./chat -m ggml-model-q4_0.bin
+```
+
 ## Building from Source (MacOS/Linux)