* llama: llama_split_prefix: fix strncpy not including the string terminator
* common: llama_load_model_from_url:
  - fix case-sensitive header name comparison
  - support downloading additional splits in parallel
  - hide the password in the URL
* common: EOL at EOF
* common: remove redundant LLAMA_CURL_MAX_PATH_LENGTH definition
* common: change the max URL length
* common: minor comment
* server: support HF URL options
* llama: llama_model_loader: fix log
* common: use a constant for the max URL length
* common: clean up curl if the file cannot be loaded as gguf
* server: tests: add split tests and HF options params
* common: move llama_download_hide_password_in_url inside llama_download_file as a lambda
* server: tests: re-enable the Release test on PR
* spacing

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
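The strncpy fix is worth spelling out: strncpy(dst, src, n) does not write a terminating '\0' when it copies exactly n characters, so the prefix extracted from a split path must be terminated by the caller. A minimal sketch of the pattern, using an illustrative helper and a hard-coded prefix length rather than the upstream llama_split_prefix signature:

#include <cstdio>
#include <cstring>

// Illustrative helper, not the upstream llama_split_prefix signature:
// copy the file-name prefix out of a split path, then terminate it,
// since strncpy does not append '\0' when it copies exactly n chars.
static int split_prefix_sketch(char * dest, size_t maxlen,
                               const char * split_path, size_t prefix_len) {
    if (prefix_len + 1 > maxlen) {
        return 0; // prefix plus terminator would not fit
    }
    strncpy(dest, split_path, prefix_len);
    dest[prefix_len] = '\0'; // the termination the fix adds
    return (int) prefix_len;
}

int main(void) {
    char prefix[64];
    // "stories15M-00001-of-00003.gguf" -> "stories15M" (10 chars)
    int n = split_prefix_sketch(prefix, sizeof(prefix),
                                "stories15M-00001-of-00003.gguf", 10);
    printf("%d: %s\n", n, prefix); // prints "10: stories15M"
    return 0;
}

Without the explicit terminator, any read of dest past the copied prefix is undefined behavior whenever the source path is longer than the prefix.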
Gherkin · 102 lines · 2.7 KiB
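The feature file below downloads a split model from an HF repo; per the commit, the additional split files are now fetched in parallel rather than one after another. A rough sketch of that idea, assuming a stand-in download_file() in place of the real curl-based llama_download_file, and using the HF resolve URL layout for the repo the test pulls from:

#include <cstdio>
#include <future>
#include <string>
#include <vector>

// Stand-in for the real curl-based llama_download_file;
// an assumption for illustration only.
static bool download_file(const std::string & url, const std::string & path) {
    std::printf("fetching %s -> %s\n", url.c_str(), path.c_str());
    return true;
}

int main() {
    const int n_split = 3;
    std::vector<std::future<bool>> futures;

    // split 1 was already fetched to read the gguf metadata; fetch the
    // remaining splits concurrently, one async task per file
    for (int i = 2; i <= n_split; ++i) {
        char name[64];
        std::snprintf(name, sizeof(name),
                      "stories15M-%05d-of-%05d.gguf", i, n_split);
        const std::string url =
            "https://huggingface.co/ggml-org/models/resolve/main/"
            "tinyllamas/split/" + std::string(name);
        futures.push_back(std::async(std::launch::async,
                                     download_file, url, std::string(name)));
    }

    bool ok = true;
    for (auto & f : futures) {
        ok = f.get() && ok; // join every download; fail if any failed
    }
    return ok ? 0 : 1;
}

Joining every future before returning mirrors the loader's requirement that all splits exist locally before the model can be assembled.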
@llama.cpp
@parallel
Feature: Parallel

  Background: Server startup
    Given a server listening on localhost:8080
    And   a model file tinyllamas/split/stories15M-00001-of-00003.gguf from HF repo ggml-org/models
    And   a model file test-model-00001-of-00003.gguf
    And   42 as server seed
    And   128 as batch size
    And   256 KV cache size
    And   2 slots
    And   continuous batching
    Then  the server is starting
    Then  the server is healthy

  Scenario Outline: Multi users completion
    Given a prompt:
      """
      Write a very long story about AI.
      """
    And a prompt:
      """
      Write another very long music lyrics.
      """
    And <n_predict> max tokens to predict
    Given concurrent completion requests
    Then the server is busy
    Then the server is idle
    And  all slots are idle
    Then all prompts are predicted with <n_predict> tokens
    Examples:
      | n_predict |
      | 128       |

  Scenario Outline: Multi users OAI completions compatibility
    Given a system prompt You are a writer.
    And   a model tinyllama-2
    Given a prompt:
      """
      Write a very long book.
      """
    And a prompt:
      """
      Write another poem.
      """
    And <n_predict> max tokens to predict
    And streaming is <streaming>
    Given concurrent OAI completions requests
    Then the server is busy
    Then the server is idle
    Then all prompts are predicted with <n_predict> tokens
    Examples:
      | streaming | n_predict |
      | disabled  | 128       |
      | enabled   | 64        |

  Scenario Outline: Multi users OAI completions compatibility no v1
    Given a system prompt You are a writer.
    And   a model tinyllama-2
    Given a prompt:
      """
      Write a very long book.
      """
    And a prompt:
      """
      Write another poem.
      """
    And <n_predict> max tokens to predict
    And streaming is <streaming>
    Given concurrent OAI completions requests no v1
    Then the server is busy
    Then the server is idle
    Then all prompts are predicted with <n_predict> tokens
    Examples:
      | streaming | n_predict |
      | disabled  | 128       |
      | enabled   | 64        |

  Scenario: Multi users with total number of tokens to predict exceeding the KV Cache size #3969
    Given a prompt:
      """
      Write a very long story about AI.
      """
    And a prompt:
      """
      Write another very long music lyrics.
      """
    And a prompt:
      """
      Write a very long poem.
      """
    And a prompt:
      """
      Write a very long joke.
      """
    And 128 max tokens to predict
    Given concurrent completion requests
    Then the server is busy
    Then the server is idle
    Then all prompts are predicted
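The Background above passes an HF repo URL, and per the commit notes any credentials embedded in such a URL are masked before the URL is logged, via a lambda moved inside llama_download_file. A minimal sketch of the idea, assuming a simple scheme://user:pass@host layout; the lambda name and the "********" placeholder are illustrative, not necessarily the exact upstream implementation:

#include <iostream>
#include <string>

int main() {
    // Masks the userinfo part of scheme://user:pass@host/... so that
    // secrets never reach the logs.
    auto hide_password_in_url = [](const std::string & url) {
        const auto scheme_end = url.find("://");
        if (scheme_end == std::string::npos) {
            return url; // no scheme, leave untouched
        }
        const auto at = url.find('@', scheme_end + 3);
        if (at == std::string::npos) {
            return url; // no embedded credentials
        }
        return url.substr(0, scheme_end + 3) + "********" + url.substr(at);
    };

    std::cout << hide_password_in_url(
        "https://user:secret@example.com/model.gguf") << "\n";
    // prints: https://********@example.com/model.gguf
    return 0;
}

Keeping the masking next to the download call, rather than in a free function, makes it harder for a new log statement to print the raw URL by accident.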