From 5d21a827869789b091b4ba4c2a55a65169e2312a Mon Sep 17 00:00:00 2001
From: vodkaslime <646329483@qq.com>
Date: Mon, 20 Nov 2023 00:40:45 +0800
Subject: [PATCH] chore: resolve comments

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index b5a6a52f1..03ae2f753 100644
--- a/README.md
+++ b/README.md
@@ -214,7 +214,7 @@ cd llama.cpp
 
 ### Build
 
-In order to build llama.cpp you have different options.
+In order to build llama.cpp you have three different options.
 
 - Using `make`:
   - On Linux or MacOS:
@@ -320,7 +320,7 @@ mpirun -hostfile hostfile -n 3 ./main -m ./models/7B/ggml-model-q4_0.gguf -n 128
 
 ### BLAS Build
 
-Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). BLAS doesn't affect the normal generation performance. There are currently several different implementations of it:
 
 - #### Accelerate Framework:
+Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). BLAS doesn't affect the normal generation performance. There are currently several different implementations of it: