In the RamaLama project we've been using podman extensively, and we've also been using docker. Both work reasonably well with llama.cpp, so this is now highlighted in the documentation. Signed-off-by: Eric Curtin <ecurtin@redhat.com>
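For context, a minimal sketch of what running llama.cpp under either container engine looks like. The image name `ghcr.io/ggml-org/llama.cpp:full` and the model path are assumptions for illustration, not confirmed by this commit; check the project's container docs for current tags. podman and docker accept the same arguments here:

```sh
# Illustrative only: image tag and model path are assumptions.
# Mount a local models directory and run a one-shot completion
# inside the container with podman.
podman run --rm -v "$PWD/models:/models" \
  ghcr.io/ggml-org/llama.cpp:full \
  --run -m /models/model.gguf -p "Hello" -n 64

# docker takes the identical command line; only the engine differs.
docker run --rm -v "$PWD/models:/models" \
  ghcr.io/ggml-org/llama.cpp:full \
  --run -m /models/model.gguf -p "Hello" -n 64
```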
Contents of docs/:

- backend/
- development/
- android.md
- build.md
- containers.md
- cuda-fedora.md
- install.md
- llguidance.md