diff --git a/examples/server/tests/features/server.feature b/examples/server/tests/features/server.feature
index 87a0516f7..d6894ae5f 100644
--- a/examples/server/tests/features/server.feature
+++ b/examples/server/tests/features/server.feature
@@ -14,12 +14,12 @@ Feature: llama.cpp server
   @llama.cpp
   Scenario Outline: Completion
     Given a completion request with maximum <n_predict> tokens
-    Then <predicted_n> tokens are predicted
+    Then <n_predict> tokens are predicted

     Examples: Prompts
-      | prompt                           | n_predict | predicted_n |
-      | I believe the meaning of life is | 128       | 128         |
-      | Write a joke about AI            | 512       | 512         |
+      | prompt                           | n_predict |
+      | I believe the meaning of life is | 128       |
+      | Write a joke about AI            | 512       |

   @llama.cpp
   Scenario Outline: OAI Compatibility
@@ -29,12 +29,12 @@ Feature: llama.cpp server
     And <max_tokens> max tokens to predict
     And streaming is <enable_streaming>
     Given an OAI compatible chat completions request
-    Then <predicted_n> tokens are predicted
+    Then <max_tokens> tokens are predicted

     Examples: Prompts
-      | model        | system_prompt               | user_prompt                          | max_tokens | enable_streaming | predicted_n |
-      | llama-2      | You are ChatGPT.            | Say hello.                           | 64         | false            | 64          |
-      | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 512        | true             | 512         |
+      | model        | system_prompt               | user_prompt                          | max_tokens | enable_streaming |
+      | llama-2      | You are ChatGPT.            | Say hello.                           | 64         | false            |
+      | codellama70b | You are a coding assistant. | Write the fibonacci function in c++. | 512        | true             |

   @llama.cpp
   Scenario: Multi users
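
Context for the change: in a Gherkin Scenario Outline, each Examples row is substituted into the `<placeholder>` tokens of the step text, so a separate `predicted_n` column is redundant when its value always equals `n_predict` (or `max_tokens`) — the remaining column can drive both the request and the assertion. A minimal, hand-rolled sketch of that substitution mechanism (this is an illustration of the Gherkin semantics, not behave's actual implementation):

```python
# Sketch of Scenario Outline expansion: each Examples row fills in the
# <placeholder> tokens of the step text. Illustrative only, not behave internals.
import re

def expand_step(step: str, row: dict) -> str:
    """Replace each <name> placeholder with the matching Examples column value."""
    return re.sub(r"<(\w+)>", lambda m: str(row[m.group(1)]), step)

steps = [
    "Given a completion request with maximum <n_predict> tokens",
    "Then <n_predict> tokens are predicted",
]
examples = [
    {"prompt": "I believe the meaning of life is", "n_predict": 128},
    {"prompt": "Write a joke about AI", "n_predict": 512},
]

for row in examples:
    for step in steps:
        print(expand_step(step, row))
# For the first row this prints:
#   Given a completion request with maximum 128 tokens
#   Then 128 tokens are predicted
```

Since both steps now read the same `n_predict` column, the table only needs one number per row, which is exactly what the diff does.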