* llama : add enum for supported chat templates
* use "built-in" instead of "supported"
* arg: print list of built-in templates
* fix test
* update server README
Files changed:

* llama-cpp.h
* llama.h