Python (Pre-compiled CFFI module for CPU and CUDA)
This commit is contained in:
parent 081fe431aa
commit 7e492b3e0e
1 changed file with 1 addition and 0 deletions
@@ -115,6 +115,7 @@ Typically finetunes of the base models below are supported as well.
 **Bindings:**

 - Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
+- Python (Pre-compiled CFFI module for CPU and CUDA): [tangledgroup/llama-cpp-cffi](https://github.com/tangledgroup/llama-cpp-cffi)
 - Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
 - Node.js: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)
 - JS/TS (llama.cpp server client): [lgrammel/modelfusion](https://modelfusion.dev/integration/model-provider/llamacpp)
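For context, the entries in this list are language bindings over llama.cpp. Below is a minimal sketch of what using the existing Python binding (abetlen/llama-cpp-python) looks like; the model path is a placeholder, and the newly added tangledgroup/llama-cpp-cffi package exposes its own interface, which is not shown here.

```python
# Minimal sketch using the abetlen/llama-cpp-python binding listed above.
# The model path is a placeholder; any GGUF model prepared for llama.cpp should work.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # hypothetical local path
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```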