Readme: add HyperMink/inferenceable to HTTP server
This commit is contained in:
parent 504f0c340f
commit 993c5f3389
1 changed file with 2 additions and 0 deletions
@@ -147,6 +147,8 @@ Typically finetunes of the base models below are supported as well.
 [llama.cpp web server](./examples/server) is a lightweight [OpenAI API](https://github.com/openai/openai-openapi) compatible HTTP server that can be used to serve local models and easily connect them to existing clients.
 
+- [Inferenceable](https://github.com/HyperMink/inferenceable)
+
 **Bindings:**
 
 - Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
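The diff's context mentions that the llama.cpp web server exposes an OpenAI-compatible HTTP API, which is what lets projects like Inferenceable plug into it. As a rough sketch of what "OpenAI API compatible" means in practice, the snippet below builds a standard chat-completion request and posts it to a locally running server. The host/port (`localhost:8080`, the llama.cpp server default), the model name, and the helper names are assumptions for illustration, not part of this commit.

```python
import json
from urllib import request

# Assumed endpoint of a locally running llama.cpp server
# (e.g. started with `./server -m model.gguf`); adjust as needed.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat-completion payload as JSON bytes."""
    payload = {
        "model": model,  # model name is an illustrative placeholder
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return json.dumps(payload).encode("utf-8")

def send_chat_request(prompt):
    """POST the payload to the local server and return the reply text."""
    req = request.Request(
        SERVER_URL,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers put the reply in choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can usually be pointed at the local server by overriding only the base URL.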