Added support for 30B weight. (#108)
This commit is contained in:
parent 81bd894c51
commit c5ae5d08a5
1 changed file with 14 additions and 0 deletions
README.md (+14 −0)
@@ -45,6 +45,20 @@ Once you've downloaded the weights, you can run the following command to enter chat
 ./chat -m ggml-alpaca-13b-q4.bin
 ```
 
+## Getting Started (30B)
+
+If you have more than 32GB of RAM (and a beefy CPU), you can use the higher quality 30B `alpaca-30B-ggml.bin` model. To download the weights, you can use
+
+```
+git clone https://huggingface.co/Pi3141/alpaca-30B-ggml
+```
+
+Once you've downloaded the weights, you can run the following command to enter chat
+
+```
+./chat -m ggml-model-q4_0.bin
+```
+
 ## Building from Source (MacOS/Linux)
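The new section gates the 30B model on having more than 32GB of RAM. A minimal sketch of that check for Linux (it reads `MemTotal` from `/proc/meminfo`, which reports kilobytes; the 32GB threshold simply mirrors the README's advice and is not enforced by `chat` itself):

```shell
# Sketch: check whether this Linux machine meets the >32GB RAM suggestion
# for the 30B model. /proc/meminfo reports sizes in kB.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
echo "Total RAM: ${mem_gb} GB"
if [ "$mem_gb" -gt 32 ]; then
  echo "Enough RAM to try alpaca-30B-ggml"
else
  echo "Below the suggested 32GB; the 13B model is the safer choice"
fi
```

On macOS there is no `/proc/meminfo`; `sysctl -n hw.memsize` (bytes) is the equivalent source for the same calculation.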