From 993c5f33895b76e0d91c09651f53582ad13edd33 Mon Sep 17 00:00:00 2001
From: nobody
Date: Wed, 29 May 2024 15:52:03 +1000
Subject: [PATCH] Readme: add HyperMink/inferenceable to HTTP server

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 1cab7f19d..35072c35f 100644
--- a/README.md
+++ b/README.md
@@ -147,6 +147,8 @@ Typically finetunes of the base models below are supported as well.
 [llama.cpp web server](./examples/server) is a lightweight [OpenAI API](https://github.com/openai/openai-openapi) compatible HTTP server that can be used to serve local models and easily connect them to existing clients.
 
+- [Inferenceable](https://github.com/HyperMink/inferenceable)
+
 **Bindings:**
 
 - Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)