diff --git a/examples/server/README.md b/examples/server/README.md
index 4af433f2d..b8172119a 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -208,22 +208,6 @@ openai.api_base = "http://<Your api-server IP>:port"
 
 Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API
 
-### API like OAI
-
-API example using Python Flask: [api_like_OAI.py](api_like_OAI.py)
-This example must be used with server.cpp
-
-```sh
-python api_like_OAI.py
-```
-
-After running the API server, you can use it in Python by setting the API base URL.
-```python
-openai.api_base = "http://<Your api-server IP>:port"
-```
-
-Then you can utilize llama.cpp as an OpenAI's **chat.completion** or **text_completion** API
-
 ### Extending the Web Front End
 
 The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the llamaComplete() method. A simple example is below:
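
Note (not part of the patch above): the block removed here appears to be a verbatim duplicate of the "API like OAI" section that remains earlier in the README, so no instructions are lost. For reference, a minimal sketch of the workflow that section describes, assuming the legacy `openai` Python client (0.x series) and that `api_like_OAI.py` is listening on `127.0.0.1:8081` (both are assumptions, not stated in the patch):

```python
# Sketch only: talk to the api_like_OAI.py shim through the legacy openai-python 0.x client.
import openai

openai.api_key = "sk-no-key-required"      # any placeholder; the 0.x client refuses to run without a key set
openai.api_base = "http://127.0.0.1:8081"  # assumed host/port where api_like_OAI.py is listening

# chat.completion-style request; the model name is only a placeholder for the local server
response = openai.ChatCompletion.create(
    model="llama",
    messages=[{"role": "user", "content": "Hello, llama.cpp!"}],
)
print(response["choices"][0]["message"]["content"])
```

With the same base URL, `openai.Completion.create(...)` covers the **text_completion** case mentioned in that section.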