From 5758e9f09bdb1db460c3728af4f441aa328ade74 Mon Sep 17 00:00:00 2001
From: Randall Fitzgerald
Date: Fri, 2 Jun 2023 08:31:12 -0700
Subject: [PATCH] Removed embedding from flags.

---
 examples/server/README.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/examples/server/README.md b/examples/server/README.md
index d5ca24cf8..334d53fa9 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -8,11 +8,9 @@ Command line options:
 - `-m FNAME`, `--model FNAME`: Specify the path to the LLaMA model file (e.g., `models/7B/ggml-model.bin`).
 - `-c N`, `--ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.
 - `-ngl N`, `--n-gpu-layers N`: When compiled with appropriate support (currently CLBlast or cuBLAS), this option allows offloading some layers to the GPU for computation. Generally results in increased performance.
-- `--embedding`: Enable the embedding mode. **Completion function doesn't work in this mode**.
 - `--host`: Set the hostname or ip address to listen. Default `127.0.0.1`;
 - `--port`: Set the port to listen. Default: `8080`.
-
 
 ## Quick Start
 
 To get started right away, run the following command, making sure to use the correct path for the model you have: