server: tests: use ngxson llama_xs_q4.bin
parent 30aa323fb9
commit fe9866a52d

2 changed files with 3 additions and 3 deletions
.github/workflows/server-test.yml (vendored, 4 changes)
```diff
@@ -40,12 +40,12 @@ jobs:
       - name: Download test model
         id: download_model
         run: |
-          ./scripts/hf.sh --repo TheBloke/Tinyllama-2-1b-miniguanaco-GGUF --file tinyllama-2-1b-miniguanaco.Q2_K.gguf
+          ./scripts/hf.sh --repo ngxson/dummy-llama --file llama_xs_q4.bin

       - name: Server Integration Tests
         id: server_integration_test
         run: |
           cd examples/server/tests
-          ./tests.sh ../../../tinyllama-2-1b-miniguanaco.Q2_K.gguf
+          ./tests.sh ../../../llama_xs_q4.bin
```
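For context, a sketch of how these two workflow steps would read after the change. The surrounding YAML (job name, runner, checkout and build steps) is not shown in the diff and is assumed here:

```yaml
      # Fetch the small dummy model used by the server integration tests
      - name: Download test model
        id: download_model
        run: |
          ./scripts/hf.sh --repo ngxson/dummy-llama --file llama_xs_q4.bin

      # Run the functional test suite against the downloaded model
      - name: Server Integration Tests
        id: server_integration_test
        run: |
          cd examples/server/tests
          ./tests.sh ../../../llama_xs_q4.bin
```

The motivation for the swap appears to be test speed and bandwidth: `llama_xs_q4.bin` is a tiny dummy model, so CI no longer downloads a full quantized TinyLlama checkpoint just to exercise the server endpoints.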
```diff
@@ -7,5 +7,5 @@ Functional server tests suite.

 ### Run tests
 1. Build the server
-2. download a GGUF model: `../../../scripts/hf.sh --repo TheBloke/Tinyllama-2-1b-miniguanaco-GGUF --file tinyllama-2-1b-miniguanaco.Q2_K.gguf`
+2. download a GGUF model: `../../../scripts/hf.sh --repo ngxson/dummy-llama --file llama_xs_q4.bin`
 3. Start the test: `./tests.sh tinyllama-2-1b-miniguanaco.Q2_K.gguf -ngl 23 --log-disable`
```