From 34a9ef6c6c6ba28fa755eee55ee5049b7399eb6d Mon Sep 17 00:00:00 2001
From: vodkaslime <646329483@qq.com>
Date: Sun, 19 Nov 2023 17:20:18 +0800
Subject: [PATCH] fix: readme

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 4de064765..b5a6a52f1 100644
--- a/README.md
+++ b/README.md
@@ -214,7 +214,7 @@ cd llama.cpp
 
 ### Build
 
-In order to build llama.cpp you have three different options.
+In order to build llama.cpp you have several different options.
 
 - Using `make`:
   - On Linux or MacOS:
@@ -320,7 +320,7 @@ mpirun -hostfile hostfile -n 3 ./main -m ./models/7B/ggml-model-q4_0.gguf -n 128
 
 ### BLAS Build
 
-Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). BLAS doesn't affect the normal generation performance. There are currently three different implementations of it:
+Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). BLAS doesn't affect the normal generation performance. There are currently several different implementations of it:
 
 - #### Accelerate Framework: