Readme: add HyperMink/inferenceable to HTTP server

This commit is contained in:
nobody 2024-05-29 15:52:03 +10:00
parent 504f0c340f
commit 993c5f3389


@@ -147,6 +147,8 @@ Typically finetunes of the base models below are supported as well.
[llama.cpp web server](./examples/server) is a lightweight [OpenAI API](https://github.com/openai/openai-openapi) compatible HTTP server that can be used to serve local models and easily connect them to existing clients.
- [Inferenceable](https://github.com/HyperMink/inferenceable)
**Bindings:**
- Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
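Since the server above exposes an OpenAI-compatible API, existing clients can talk to it with a standard chat-completion request. A minimal sketch of building such a request body, assuming the server's default `localhost:8080` address and a placeholder model name (both hypothetical here, not taken from this commit):

```python
import json

# Assumed local endpoint; the llama.cpp server listens on port 8080 by default.
url = "http://localhost:8080/v1/chat/completions"

# OpenAI-style chat completion request body. The model name is a placeholder;
# the server answers with whichever model it was started with.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
```

With a server running, `body` can be POSTed to `url` (e.g. via `urllib.request` or `curl`) and the response parsed like any OpenAI API reply.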