Xuan Son Nguyen 2024-10-24 15:59:49 +02:00
parent 07381f7d97
commit c34ab08a16


@@ -25,7 +25,7 @@ Feature: llama.cpp server
And an infill input prefix "#include <cstdio>\n#include \"llama.h\"\n\nint main() {\n int n_threads = llama_"
And an infill input suffix "}\n"
And an infill request with no api error
-Then 64 tokens are predicted matching Lily|was|so|excited
+Then 64 tokens are predicted matching One|day|she|saw|big|scary|bird
Scenario: Infill with input_extra
Given a prompt "Complete this"
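
These scenarios drive the server's `/infill` endpoint through the test harness. For reference, below is a minimal sketch of the equivalent raw HTTP request, assuming a local server at `http://localhost:8080` and the field names used in the steps above (`input_prefix`, `input_suffix`, `input_extra`); exact parameter names and defaults may vary between server versions, so treat it as illustrative rather than authoritative.

```python
# Minimal sketch of the request these scenarios exercise, assuming a
# llama.cpp server listening at http://localhost:8080. Field names mirror
# the steps above (input_prefix, input_suffix, input_extra); they may
# differ between server versions.
import requests

resp = requests.post(
    "http://localhost:8080/infill",
    json={
        "input_prefix": '#include <cstdio>\n#include "llama.h"\n\nint main() {\n int n_threads = llama_',
        "input_suffix": "}\n",
        # Extra context chunks, as used by the "Infill with input_extra" scenario.
        # The filename/text values here are hypothetical examples.
        "input_extra": [
            {"filename": "llama.h", "text": "LLAMA_API int32_t llama_n_threads();\n"}
        ],
        "n_predict": 64,  # matches the "64 tokens are predicted" assertion
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("content", ""))
```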