llama : add Qwen support (#4281)
* enable Qwen in llama.cpp
* llama : do not GPU split bias tensors

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
parent: 880f57973b
commit: 37c746d687
5 changed files with 372 additions and 9 deletions
prompts/chat-with-qwen.txt (new file, 1 addition)

@@ -0,0 +1 @@
+You are a helpful assistant.