From 775a299a4bf65c5221ceec24a09b5202ef73d6b0 Mon Sep 17 00:00:00 2001
From: Jesse Johnson
Date: Wed, 5 Jul 2023 16:00:22 +0000
Subject: [PATCH] Remove duplicate OAI instructions

---
 examples/server/README.md | 16 ----------------
 1 file changed, 16 deletions(-)

diff --git a/examples/server/README.md b/examples/server/README.md
index 4af433f2d..b8172119a 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -208,22 +208,6 @@ openai.api_base = "http://<Your api-server IP>:port"
 
 Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API
 
-### API like OAI
-
-API example using Python Flask: [api_like_OAI.py](api_like_OAI.py)
-This example must be used with server.cpp
-
-```sh
-python api_like_OAI.py
-```
-
-After running the API server, you can use it in Python by setting the API base URL.
-```python
-openai.api_base = "http://<Your api-server IP>:port"
-```
-
-Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API
-
 ### Extending the Web Front End
 
 The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the llamaComplete() method. A simple example is below:
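
The removed block duplicates the "API like OAI" instructions that remain earlier in the README, so no information is lost. For reference, here is a minimal sketch of the usage those instructions describe, assuming the legacy `openai` 0.x Python client and `api_like_OAI.py` proxying a running `server` binary; the host and port below are illustrative placeholders, so match them to whatever flags you actually start the script with:

```python
import openai

# Point the 0.x-era openai client at the api_like_OAI.py shim.
# localhost:8081 is an assumption; use the host/port you started the script with.
openai.api_base = "http://localhost:8081"
openai.api_key = "sk-no-key-required"  # the shim does not validate keys, but the client insists on one

# The model name is ignored: llama.cpp serves whichever model the server was loaded with.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello from llama.cpp"}],
)
print(reply.choices[0].message.content)
```

The 0.x client appends the `/chat/completions` (or, for text completion, `/completions`) path to `api_base` itself, which is why the README's base URL stops at the port.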