Document that both Podman and Docker work with llama.cpp

In the RamaLama project we have been using Podman extensively, and
Docker as well. Both work reasonably well with llama.cpp. Highlight
this in the documentation.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>