readme: add llama-swap to Infrastructure
parent 61037d7e6e
commit b46e8ec78c

1 changed file with 1 addition and 1 deletion
@@ -192,7 +192,6 @@ Instructions for adding support for new models: [HOWTO-add-model.md](docs/develo
 - [crashr/gppm](https://github.com/crashr/gppm) – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
 - [gpustack/gguf-parser](https://github.com/gpustack/gguf-parser-go/tree/main/cmd/gguf-parser) - review/check the GGUF file and estimate the memory usage
 - [Styled Lines](https://marketplace.unity.com/packages/tools/generative-ai/styled-lines-llama-cpp-model-292902) (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
-- [llama-swap](https://github.com/mostlygeek/llama-swap) - transparent proxy for automatic model switching with llama-server

 </details>

@@ -202,6 +201,7 @@ Instructions for adding support for new models: [HOWTO-add-model.md](docs/develo
 - [Paddler](https://github.com/distantmagic/paddler) - Stateful load balancer custom-tailored for llama.cpp
 - [GPUStack](https://github.com/gpustack/gpustack) - Manage GPU clusters for running LLMs
 - [llama_cpp_canister](https://github.com/onicai/llama_cpp_canister) - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
+- [llama-swap](https://github.com/mostlygeek/llama-swap) - transparent proxy that adds automatic model switching with llama-server

 </details>

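For context on the entry this commit moves: llama-swap sits in front of one or more llama-server instances and routes each request to the backend serving the requested model, starting and stopping instances as needed. The Go sketch below is a minimal illustration of only the routing half of that idea, not llama-swap's actual code; the endpoint, model names, and ports are hypothetical, and the real tool also launches backends on demand.

```go
// Conceptual sketch of a transparent model-switching proxy (not llama-swap's
// actual implementation): forward each chat request to the llama-server
// backend chosen by the OpenAI-style "model" field of the request body.
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// backends maps model names to already-running llama-server instances.
// Names and ports are hypothetical placeholders.
var backends = map[string]string{
	"llama-8b":  "http://127.0.0.1:9001",
	"llama-70b": "http://127.0.0.1:9002",
}

func main() {
	http.HandleFunc("/v1/chat/completions", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Peek at the "model" field to pick a backend.
		var req struct {
			Model string `json:"model"`
		}
		if err := json.Unmarshal(body, &req); err != nil {
			http.Error(w, "bad JSON", http.StatusBadRequest)
			return
		}
		target, ok := backends[req.Model]
		if !ok {
			http.Error(w, "unknown model", http.StatusNotFound)
			return
		}
		u, _ := url.Parse(target)
		// Restore the consumed body, then proxy the request transparently.
		r.Body = io.NopCloser(bytes.NewReader(body))
		httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```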