docfix: server readme: quantum models -> quantized models.
This commit is contained in:
parent 0ab192f500
commit e18281940e

1 changed file with 1 addition and 1 deletion
@@ -5,7 +5,7 @@ Fast, lightweight, pure C/C++ HTTP server based on [httplib](https://github.com/
 Set of LLM REST APIs and a simple web front end to interact with llama.cpp.
 
 **Features:**
-* LLM inference of F16 and quantum models on GPU and CPU
+* LLM inference of F16 and quantized models on GPU and CPU
 * [OpenAI API](https://github.com/openai/openai-openapi) compatible chat completions and embeddings routes
 * Parallel decoding with multi-user support
 * Continuous batching
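Since the feature list touched by this hunk advertises OpenAI-API-compatible chat completions, here is a minimal sketch of calling that route. It assumes a llama.cpp server already running on the default `localhost:8080`; the host, port, prompt, and `model` value are illustrative placeholders, not taken from this commit.

```python
# Minimal sketch: POST to the server's OpenAI-compatible chat completions
# route. Host/port and the "model" field are assumptions for illustration;
# llama.cpp serves whichever model the server was started with.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed default host:port
    data=json.dumps({
        "model": "default",  # placeholder; server ignores/echoes its loaded model
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# The response follows the OpenAI chat completions shape.
print(body["choices"][0]["message"]["content"])
```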