From ada487b2832f3d4e14054dc5851c9c4dd046525f Mon Sep 17 00:00:00 2001
From: ngxson
Date: Sun, 24 Mar 2024 21:43:29 +0100
Subject: [PATCH] update docs

---
 examples/server/README.md | 2 +-
 examples/server/utils.hpp | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/server/README.md b/examples/server/README.md
index dfea2b905..49121a460 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -360,7 +360,7 @@ Notice that each `probs` is an array of length `n_probs`.
 
 - `default_generation_settings` - the default generation settings for the `/completion` endpoint, has the same fields as the `generation_settings` response object from the `/completion` endpoint.
 - `total_slots` - the total number of slots for process requests (defined by `--parallel` option)
 
-- **POST** `/v1/chat/completions`: OpenAI-compatible Chat Completions API. Given a ChatML-formatted json description in `messages`, it returns the predicted completion. Both synchronous and streaming mode are supported, so scripted and interactive applications work fine. While no strong claims of compatibility with OpenAI API spec is being made, in our experience it suffices to support many apps. Only ChatML-tuned models, such as Dolphin, OpenOrca, OpenHermes, OpenChat-3.5, etc can be used with this endpoint.
+- **POST** `/v1/chat/completions`: OpenAI-compatible Chat Completions API. Given a ChatML-formatted JSON description in `messages`, it returns the predicted completion. Both synchronous and streaming modes are supported, so scripted and interactive applications work fine. While no strong claims of compatibility with the OpenAI API spec are being made, in our experience it suffices to support many apps. Only models with a [supported chat template](https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template) can be used optimally with this endpoint. By default, the ChatML template is used.
 
     *Options:*

diff --git a/examples/server/utils.hpp b/examples/server/utils.hpp
index 82f8bde29..7d9ab622b 100644
--- a/examples/server/utils.hpp
+++ b/examples/server/utils.hpp
@@ -371,7 +371,7 @@ static json oaicompat_completion_params_parse(
         llama_params["stop"] = json_value(body, "stop", json::array());
     }
 
     // Some chat templates don't use EOS token to stop generation
-    // We must add their end sequences among stop words
+    // We must add their end sequences to the list of stop words
     llama_params["stop"].push_back("<|im_end|>"); // chatml
     llama_params["stop"].push_back("<end_of_turn>"); // gemma
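
For reference, here is a request against the endpoint documented above. This is
a sketch, not part of the patch: it assumes a server listening on the default
localhost:8080, and the client-supplied "stop" entry is purely illustrative.

    curl http://localhost:8080/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
              "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Write a haiku about diffs."}
              ],
              "stop": ["<|user|>"]
            }'

Setting "stream": true in the same body switches to the streaming mode
mentioned in the README text. Per the utils.hpp hunk, the server appends
"<|im_end|>" (chatml) and "<end_of_turn>" (gemma) to whatever stop words the
client supplies, so chat templates that do not emit an EOS token still stop
generation cleanly.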